Dataset columns: id (string, 11-20 characters), paper_text (string, 29-163k characters), review (string, 666-24.3k characters).
iclr_2018_B1EVwkqTW
While deep neural networks have shown outstanding results in a wide range of applications, learning from a very limited number of examples is still a challenging task. Despite the difficulties of few-shot learning, metric-learning techniques have shown the potential of neural networks for this task. While these methods perform well, they still leave room for improvement. In this work, the idea of metric learning is extended with the working mechanism of Support Vector Machines (SVMs), which are well known for their generalization capabilities on small datasets. Furthermore, this paper presents an end-to-end learning framework for training adaptive-kernel SVMs, which eliminates the problem of choosing a correct kernel and good features for SVMs. Next, the one-shot learning problem is redefined for audio signals. The model is then tested on a vision task (the Omniglot dataset) and a speech task (the TIMIT dataset). On Omniglot, the algorithm improved accuracy from 98.1% to 98.5% on the one-shot classification task and from 98.9% to 99.3% on the few-shot classification task.
Make SVM great again with Siamese kernel for few-shot learning

** PAPER SUMMARY ** The author proposes to combine siamese networks with an SVM for pair classification. The proposed approach is evaluated on few-shot learning tasks, on Omniglot and TIMIT.

** REVIEW SUMMARY ** The paper is readable but it could be more fluent. It lacks a few references and important technical aspects are not discussed. It contains a few errors. The empirical contribution seems inflated on Omniglot, as the authors omit other papers reporting better results. Overall, the contribution is modest at best.

** DETAILED REVIEW ** On mistakes, it is wrong to say that an SVM is a parameterless classifier. It is wrong to cite (Boser et al., 92) for the soft-margin SVM; I think slack variables come from (Cortes et al., 95). "Consistent" has a specific definition in machine learning (https://en.wikipedia.org/wiki/Consistent_estimator); you must use a different word in 3.2. You mention that a non-linear SVM needs a similarity measure; it actually needs a positive definite kernel, which has a specific definition (https://en.wikipedia.org/wiki/Positive-definite_kernel).

On incompleteness, it is not obvious how the classifier is used at test time. Could you explain how classes are predicted given a test problem? The setup of the experiments on TIMIT is extremely unclear. What are the classes you are interested in? How many classes and examples do the testing problems have?

On clarity, I do not understand why you talk again about non-linear SVMs in the last paragraph of 3.2, since you mention at the end of page 4 that you will only rely on linear SVMs for computational reasons. You need to mention explicitly somewhere that (w, \theta) are optimized jointly. The sentence "this paper investigates only the one versus rest approach" is confusing, as you have only two classes from the SVM perspective, i.e. pairs (x1, x2) where both examples come from the same class and pairs (x1, x2) where they come from different classes. So you use a binary SVM, not one-versus-rest. You need to find a better justification for using the L2-SVM than "L2-SVM loss variant is considered to be the best by the author of the paper"; did you try the classical SVM and find it performing worse? Also, could you motivate your choice of the L1 norm as opposed to L2 in Eq. 3?

On empirical evaluation, I already mentioned that it is impossible to understand what the classification problem on TIMIT is. I suspect it might be speaker identification. So I will focus on the Omniglot experiments. Few-Shot Learning Through an Information Retrieval Lens (Eleni Triantafillou, Richard Zemel, Raquel Urtasun, NIPS 2017 [arXiv July'17]) and the references therein give a few more recent baselines than your table. Some of the results are better than your approach. I am not sure why you do not evaluate on mini-ImageNet as well, as most work on few-shot learning generally does. This dataset offers a clearer experimental setup than your TIMIT setting and has abundant published baseline results. Also, most work typically uses Omniglot as a proof of concept and considers mini-ImageNet as a more challenging set.
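For reference, and to make the L1- vs. L2-SVM question in the review concrete, the standard soft-margin formulation (Cortes & Vapnik, 1995) in generic notation (not the paper's Eq. 3) is

\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i^{\,p} \quad \text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,

where p = 1 gives the classical (L1) hinge-loss SVM and p = 2 the L2-SVM variant the paper adopts; the review asks for an empirical comparison between the two rather than a preference by assertion.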
iclr_2018_SJTB5GZCb
The biological plausibility of the backpropagation algorithm has long been doubted by neuroscientists. Two major reasons are that neurons would need to send two different types of signal in the forward and backward phases, and that pairs of neurons would need to communicate through symmetric bidirectional connections. We present a simple two-phase learning procedure for fixed point recurrent networks that addresses both these issues. In our model, neurons perform leaky integration and synaptic weights are updated through a local mechanism. Our learning method extends the framework of Equilibrium Propagation to general dynamics, relaxing the requirement of an energy function. As a consequence of this generalization, the algorithm does not compute the true gradient of the objective function, but rather approximates it at a precision which is proven to be directly related to the degree of symmetry of the feedforward and feedback weights. We show experimentally that the intrinsic properties of the system lead to alignment of the feedforward and feedback weights, and that our algorithm optimizes the objective function.
The manuscript discusses a learning algorithm that is based on the equilibrium propagation method, which can be applied to networks with asymmetric connections. This extension is interesting, but the results seem to be incomplete and missing necessary additional analyses. Therefore, I do not recommend acceptance of the manuscript in its current form.

The main issues are:
1) The theoretical result is incomplete since it fails to show that the algorithm converges to a meaningful learning result. Also, the experimental results do not sufficiently justify the claims.
2) The paper further makes statements about the performance and biological plausibility of the proposed algorithm that do not hold without additional justification.
3) The paper does not sufficiently discuss and compare the relevant neuroscience literature and related work.

Details to major points:

1) The presentation of the theoretical results is misleading. Theorem 1 shows that the proposed neuron dynamics has a fixed point that coincides with a local minimum of the objective function if the weights are symmetric. However, this was already clear from the original equilibrium propagation paper. The interesting question is whether the proposed algorithm automatically converges to the condition of symmetric weights, which is left unanswered. In Figure 3 experimental evidence is provided, but the results are not convincing given that the weight alignment only improves by ~1° throughout learning (compared to >45° in Lillicrap et al., 2017). It is even unclear to me whether this effect is statistically significant. How many trials did the authors average over here? The authors should provide standard statistical significance measures for this plot. Since no complete theoretical guarantees are provided, a much broader experimental study would be necessary to justify the claims made in the paper.

2) Throughout the paper it is claimed that the proposed learning algorithm is biologically plausible. However, this argument is also not sufficiently justified. Most importantly, it is unclear how the proposed algorithm would behave in biologically realistic recurrent networks, and it is unclear how the different learning phases should be realized in the brain. Neural networks in the brain are abundantly recurrent. Even in the layered structure of the neocortex one finds dense lateral connectivity between neurons in each layer. It is not clear to me how the proposed algorithm could be applied to such networks. In a recurrent network, rolled out over time, information would need to be passed forwards and backwards in time. The proposed algorithm does not seem to provide a solution to this temporal credit assignment problem. Also, in the experiments the algorithm is applied only to feedforward architectures. What would happen if recurrent networks were used to learn temporal tasks like TIMIT? Please discuss.

In the discussion on page 8 the authors further argue that the learning phases of the proposed algorithm could be implemented in the cortex through theta waves that modulate long-term plasticity. To support this theory the authors cite the results from Orr et al., 2001, where hippocampal place cells in behaving rats were studied. To my knowledge there is no consensus on the precise nature of this modulation of plasticity. E.g., in Wyble et al. 2003, it was observed that application of learning protocols at different phases of theta waves actually leads to a sign change in learning, i.e. long-term potentiation was modulated to depression.
It seems to me that the algorithm is not compatible with these other experimental findings, since gradients only point in the correct direction towards the final phase, and any non-zero learning rate in other phases would therefore perturb learning. Did the authors try non-optimal learning rate schedules in the experiments (including sign changes, etc.) to test the robustness of the proposed algorithm? Also, to my knowledge, the modulatory effect of theta rhythms has so far only been described in the CA1 region of rodent hippocampus, which is a very specialized region of the brain (see Hanslmayr et al., 2016, for a review and a modern hypothesis on the role of theta rhythms in the brain).

Furthermore, the discussion of the possible implementation of the learning algorithm in analog hardware on page 8 is missing an explanation of how the different learning phases of the algorithm are controlled on the chip. One of the advantages of analog hardware is that it does not require global clocking, unlike classical digital hardware, which is expensive in wiring and energy requirements. It seems to me that this advantage would disappear if the algorithm were brought to an analog chip, since global information about the learning phase has to be communicated to each synapse. Is there an alternative to a global wiring scheme to convey this information throughout the whole chip? Please discuss this in more depth.

3) The authors apply the learning algorithm only to the MNIST dataset, which is a relatively simple task. Similar results were also achieved using random feedback alignment (Lillicrap et al., 2017). Also, the evolutionary strategies method (Salimans et al., 2017) was recently used for learning deep networks, applied to complex reinforcement learning problems, and could likewise be applied to simple classification tasks. Both these methods are arguably as simple and biologically plausible as the proposed algorithm. It would be good to try other standard benchmark tasks and report and compare the performance there. Furthermore, the paper is missing a broader “related work” section that discusses approaches for biologically plausible learning rules for deep neural architectures.

Minor points: The proposed algorithm uses different learning rates that shrink exponentially with the layer number. Have the authors explored whether the algorithm works for really deep architectures with several tens of layers? It seems to me that the learning rate heuristic used may hinder the scalability of equilibrium propagation. On page 5 the authors write: "However we observe experimentally that the dynamics almost always converges." This needs to be quantified. Did the authors find that the algorithm is very sensitive to initial conditions?

References:
James M. Hyman, Bradley P. Wyble, Vikas Goyal, Christina A. Rossi, and Michael E. Hasselmo. Stimulation in Hippocampal Region CA1 in Behaving Rats Yields Long-Term Potentiation when Delivered to the Peak of Theta and Long-Term Depression when Delivered to the Trough. Journal of Neuroscience, 2003.
Simon Hanslmayr, Bernhard P. Staresina, and Howard Bowman. Oscillations and Episodic Memory: Addressing the Synchronization/Desynchronization Conundrum. Trends in Neurosciences, 2016.
Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution Strategies as a Scalable Alternative to Reinforcement Learning. arXiv, 2017.
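To make the weight-alignment discussion above concrete, here is a minimal sketch (illustrative, not the authors' code) of how the angle between feedforward and feedback weights reported in plots like Figure 3 is typically measured, following Lillicrap et al. (2017):

```python
import numpy as np

def alignment_angle_deg(w_forward: np.ndarray, w_backward: np.ndarray) -> float:
    """Angle (degrees) between the flattened feedforward weights and the
    transposed feedback weights; 0 deg = perfectly symmetric, 90 deg = orthogonal."""
    a = w_forward.ravel()
    b = w_backward.T.ravel()
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example: random (unaligned) weight matrices sit near 90 degrees.
rng = np.random.default_rng(0)
W, B = rng.normal(size=(50, 30)), rng.normal(size=(30, 50))
print(alignment_angle_deg(W, B))
```

Tracking this quantity over training (and over several seeds, as the review requests) is what would establish whether the reported ~1° improvement is meaningful.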
iclr_2018_SJIA6ZWC-
Machine learning models are usually tuned by nesting optimization of model weights inside the optimization of hyperparameters. We give a method to collapse this nested optimization into joint stochastic optimization of both weights and hyperparameters. Our method trains a neural network to output approximately optimal weights as a function of hyperparameters. We show that our method converges to locally optimal weights and hyperparameters for sufficiently large hypernets. We compare this method to standard hyperparameter optimization strategies and demonstrate its effectiveness for tuning thousands of hyperparameters.
*Summary*

The paper proposes to use hyper-networks [Ha et al. 2016] for the tuning of hyper-parameters, along the lines of [Brock et al. 2017]. The core idea is to have a side neural network sufficiently expressive to learn the (large-scale, matrix-valued) mapping from a given configuration of hyper-parameters to the weights of the model we wish to tune. The paper gives a theoretical justification of its approach, and then describes several variants of its core algorithm which mix the training of the hyper-network together with the optimization of the hyper-parameters themselves. Finally, experiments based on MNIST illustrate the properties of the proposed approach. While the core idea may appear appealing, the paper suffers from several flaws (as further detailed afterwards):
- Insufficient related work
- Correctness/rigor of Theorem 2.1
- Clarity of the paper (e.g., Sec. 2.4)
- Experiments look somewhat artificial
- How scalable is the proposed approach in the perspective of tuning models way larger/more complex than those treated in the experiments?

*Detailed comments*

- "...and training the model to completion." and "This is wasteful, since it trains the model from scratch each time..." (and similar statement in Sec. 2.1): Those statements are quite debatable. There are lines of work, e.g., in Bayesian optimization, to model early stopping/learning curves (e.g., Domhan2014, Klein2017 and references therein) and where training procedures are explicitly resumed (e.g., Swersky2014, Li2016). The paper should reformulate its statements in the light of this literature.
- "Uncertainty could conceivably be incorporated into the hypernet...": This seems indeed an important point, but it does not appear clear how to proceed (e.g., uncertainty on w_phi(lambda), which later needs to be propagated to L_val); could the authors perhaps further elaborate?
- I am concerned about the rigor/correctness of Theorem 2.1; for instance, how is the continuity of the best-response exploited? Also, throughout the paper, the argmin is defined as if it were a singleton, while in practice it is rather a set-valued mapping (except if there is a unique minimizer for L_train(., lambda), which is unlikely to be the case given the nature of the considered neural-net model). In the same vein, Jensen's inequality states that Expectation[g(X)] >= g(Expectation[X]) for some convex function g and random variable X; how does it precisely translate into the paper's setting (convexity, which function g, etc.)?
- Specify in Alg. 1 that "hyperopt" refers to a generic hyper-parameter optimization procedure.
- More details should be provided to better understand Sec. 2.4. At the moment, it is difficult to figure out (and potentially reproduce) the model which is proposed.
- The training procedure in Sec. 4.2 seems quite ad hoc; how sensitive was the overall performance with respect to the optimization strategy? For instance, in 4.2 and 4.3, different optimization parameters are chosen.
- Typo: "weight decay is applied the..." --> "weight decay is applied to the..."
- "a standard Bayesian optimization implementation from sklearn": Could more details be provided? (There does not seem to be such an implementation at http://scikit-learn.org/stable/model_selection.html, to the best of my knowledge.)
- The experimental setup looks a bit far-fetched and unrealistic: first scalar, then diagonal, and finally matrix-weighted regularization schemes. While the first two may be used in practice, the third scheme is not used in practice to the best of my knowledge.
-typo: "fit a hypernet same dataset." --> "fit a hypernet on the same dataset." -(Franceschi2017) could be added to the related work section. *References* (Domhan2014) Domhan, T.; Springenberg, T. & Hutter, F. Extrapolating learning curves of deep neural networks ICML 2014 AutoML Workshop, 2014 (Franceschi2017) Franceschi, L.; Donini, M.; Frasconi, P. & Pontil, M. Forward and Reverse Gradient-Based Hyperparameter Optimization preprint arXiv:1703.01785, 2017 (Klein2017) Klein, A.; Falkner, S.; Springenberg, J. T. & Hutter, F. Learning curve prediction with Bayesian neural networks International Conference on Learning Representations (ICLR), 2017, 17 (Li2016) Li, L.; Jamieson, K.; DeSalvo, G.; Rostamizadeh, A. & Talwalkar, A. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization preprint arXiv:1603.06560, 2016 (Swersky2014) Swersky, K.; Snoek, J. & Adams, R. P. Freeze-Thaw Bayesian Optimization preprint arXiv:1406.3896, 2014 ********* Update post rebuttal ********* I acknowledge the fact that I read the rebuttal of the authors, whom I thank for their detailed answers. My minor concerns have been clarified. Regarding the correctness of the proof, I am still unsure about the applicability of Jensen inequality; provided it is true, then it is important to see that the results seem to hold only for particular hyperparameters, namely regularization parameters (as explained in the new updated proof). This limitation should be exposed transparently upfront in the paper/abstract. Together with the new experiments and comparisons, I have therefore updated my rating from 5 to 6.
iclr_2018_Hyg0vbWC-
Published as a conference paper at ICLR 2018
GENERATING WIKIPEDIA BY SUMMARIZING LONG SEQUENCES
We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.
The main significance of this paper is to propose the task of generating the lead section of Wikipedia articles by viewing it as a multi-document summarization problem. Linked articles as well as the results of an external web search query are used as input documents, from which the Wikipedia lead section must be generated. Further preprocessing of the input articles is required, using simple heuristics to extract the most relevant sections to feed to a neural abstractive summarizer. A number of variants of attention mechanisms are compared, including the transformer-decoder, and a variant with memory-compressed attention in order to handle longer sequences. The outputs are evaluated by ROUGE-L and test perplexity. There is also an A-B testing setup by human evaluators to show that ROUGE-L rankings correspond to human preferences of systems, at least for large ROUGE differences. This paper is quite original and clearly written. The main strength is in the task setup with the dataset and the proposed input sources for generating Wikipedia articles. The main weakness is that I would have liked to see more analysis and comparisons in the evaluation.

Evaluation: Currently, only neural abstractive methods are compared. I would have liked to see the ROUGE performance of some current unsupervised multi-document extractive summarization methods, as well as some simple multi-document selection algorithms such as SumBasic. Do redundancy cues which work for multi-document news summarization still work for this task?

Extractiveness analysis: I would also have liked to see more analysis of how extractive the Wikipedia articles actually are, as well as how extractive the system outputs are. Does higher extractiveness correspond to higher or lower system ROUGE scores? This would help us understand the difficulty of the problem, and how much abstractive methods could be expected to help. A further analysis which would be nice to do (though I have less clear ideas how to do it) would be to have some way to figure out which article types or which section types are amenable to this setup, and which are not. I have some concern that extraction could do very well if you happen to find a related article on another website which contains encyclopedia-like or definition-like entries (e.g., Baidu, Wiktionary) which is not caught by clone detection. In this case, the problem could become less interesting, as no real analysis is required to do well here.

Overall, I quite like this line of work, but I think the paper would be a lot stronger and more convincing with some additional work.

---- After reading the authors' response and the updated submission, I am satisfied that my concerns above have been adequately addressed in the new version of the paper. This is a very nice contribution.
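For readers unfamiliar with the memory-compressed attention mentioned above, here is a rough single-head sketch of the idea (keys and values are compressed along the sequence axis with a strided convolution so attention cost drops on long inputs). It is illustrative only, not the paper's implementation, and omits multiple heads, masking, and local attention blocks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryCompressedAttention(nn.Module):
    """Self-attention whose keys/values are compressed along the sequence
    axis by a strided 1-D convolution (single head, no masking, for brevity)."""
    def __init__(self, d_model: int, stride: int = 3):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.kv = nn.Linear(d_model, 2 * d_model)
        self.compress = nn.Conv1d(2 * d_model, 2 * d_model, kernel_size=stride, stride=stride)
        self.scale = d_model ** -0.5

    def forward(self, x):                                   # x: (batch, seq, d_model)
        q = self.q(x)
        kv = self.compress(self.kv(x).transpose(1, 2)).transpose(1, 2)
        k, v = kv.chunk(2, dim=-1)                          # (batch, seq/stride, d_model) each
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v

x = torch.randn(2, 90, 64)
print(MemoryCompressedAttention(64)(x).shape)               # torch.Size([2, 90, 64])
```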
iclr_2018_ByQZjx-0-
We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile, the model corresponding to the selected path is trained to minimize the cross-entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture that achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS. Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.
Summary: The paper presents a method for learning certain aspects of a neural network architecture, specifically the number of output maps in certain connections and the existence of skip connections. The method is relatively efficient, since it searches in a space of similar architectures and uses weight sharing between the tested models to avoid optimizing each model from scratch. Results are presented for image classification on Cifar 10 and for language modeling.

Page 3: "for each channel, we only predict C/S binary masks" -> this seems to be a mistake. Probably "for each operation, we only predict C/S binary masks" is the right wording.

Page 4: Stabilizing Stochastic Skip Connections: it seems that the suggested configuration does not enable an identity path, which was found very beneficial in (He et al., 2016). An identity path does not exist since layers are concatenated and go through a 1*1 conv, which does not enable a plain identity unless it is learned by the 1*1 conv.

Page 5:
- The last paragraph in section 4.2 is not clear to me. What does a compilation failure mean in this context and why does it occur? And: if each layer is connected to all its previous layers by skip connections, what remains to be learned w.r.t. the model structure? Isn't the pattern of skip connections the thing we would like to learn?
- Some details of the policy LSTM network are also not clear to me:
o How is the integer mask (output of the B channel steps) encoded? Using 1-hot encoding over 2^{C/S} output neurons? Or maybe C/S output neurons, used for sampling the C/S bits of the mask? This should be reported in some detail.
o How is the mask converted to an input embedding for the next step? Is it by linear multiplication with a matrix? Something more complicated? And are there different matrices used/trained for each mask embedding (one for 1*1 conv, one for 3*3 conv, etc.)?
o What is the motivation for using equation 5 for the sampling of skip connection flags? What is the motivation for averaging the winning anchors as the average embedding for the next stage (to let it 'know' who is connected to the previous layers?). Is anchor j also added to the average?
o How is equation 5 normalized? That is: the probability is stated to be proportional to an exponent of an inner product, but it is not clear what the constant is and how sampling is done.

Page 6:
- Section 4.4: what is the fixed policy used for generating models in the stage of training the shared W parameters? (This is answered on page 7.)

Experiments:
- The accuracy figures obtained are impressive, but I'm not convinced the ENAS learning is the important ingredient in obtaining them (rather than a very good baseline).
- Specifically, in the Cifar-10 example it does not seem that the network chooses the number of maps in a way which is diverse or different from layer to layer. Therefore we do not have any evidence that the LSTM controller has learnt any interesting rule regarding block type, or the relation between block type, width, and layer index. All we see is that the model does not choose too many maps, thus avoiding significant overfit. The relevant baseline here is a model with 64 or 96 maps in each block, in each layer. Such a model is likely to do as well as the ENAS model, and can be obtained easily with slight tuning of a single parameter.
- Similarly, I'm not convinced the skip connection pattern found for Cifar-10 is superior to a standard DenseNet or ResNet pattern. The found configuration was not compared to these baselines.
So again we do not have evidence showing the merit of keeping and tuning many parameters with REINFORCE.
- The experiments with Penn Treebank are described in too little detail: for example, what exactly is the task considered (in terms of input-output mapping), what is the dataset size, etc.
- Also, for the Penn Treebank experiments no baseline is given, so it is not possible to understand whether the structure learning here is beneficial. Comparison of the results to an architecture with all skip connections, and to one with a single skip connection per layer, is required to estimate whether useful structure is being learnt.

Overall:
- Pro: the method gives high accuracy results.
- Cons:
o It is not clear if the ENAS search is responsible for the results, or just the strong baseline. The advantage of ENAS over plain hyper-parameter choosing was not sufficiently established.
o The controller was not presented in a clear enough manner. Many of its details stay obscure.
o The method does not seem to be general. It seems to be limited to choosing a specific set of parameters in a very specific scenario (a scenario which enables parameter sharing between models; the conditions for this to happen seem to be rather strict, and were not elaborated).

After revision: The controller is now better presented. However, the main points were not changed:
- ENAS seems to be limited to a specific architecture and search space, in which the search is probably already exhausted. For example, for the image processing network, it is determining the number of skip connections and the structure of a single layer as a combination of several function types. We already know the answers to these search problems (a denser skip connection pattern works better, more function types in parallel in a layer do better, the number of maps should be adjusted to the complexity and data size to avoid overfit). ENAS does not reveal new surprising architectures, and it seems that instead of searching in the large space it suggests, one can just tune 1-2 parameters (for the image network, the number of maps in a layer).
- Results comparing ENAS to the simple baseline of just tuning 1-2 hyper-parameters were not shown. I hence believe the strong empirical results of ENAS are a property of the search space (the architecture used) and not of the search algorithm.
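The "equation 5" questions above concern the anchor-point mechanism inherited from Zoph & Le (2017). For reference, a common form of that mechanism samples each skip-connection flag independently from a sigmoid over an inner product of anchor states; the sketch below shows that standard variant (illustrative only; the paper appears to use an exponentiated-inner-product variant, which is exactly what the review asks to have spelled out).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W_prev, W_curr = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)

def sample_skip_connections(anchors, h_curr):
    """For layer i with controller state h_curr, sample one binary skip flag per
    previous layer j with probability sigmoid(v^T tanh(W_prev h_j + W_curr h_i))."""
    flags, probs = [], []
    for h_j in anchors:
        p = 1.0 / (1.0 + np.exp(-(v @ np.tanh(W_prev @ h_j + W_curr @ h_curr))))
        probs.append(p)
        flags.append(rng.random() < p)
    return np.array(flags), np.array(probs)

anchors = [rng.normal(size=d) for _ in range(3)]   # controller states from layers 1..3
flags, probs = sample_skip_connections(anchors, rng.normal(size=d))
print(flags, probs.round(2))
```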
iclr_2018_HyyP33gAZ
Published as a conference paper at ICLR 2018
ACTIVATION MAXIMIZATION GENERATIVE ADVERSARIAL NETS
Class labels have been empirically shown useful in improving the sample quality of generative adversarial nets (GANs). In this paper, we mathematically study the properties of the current variants of GANs that make use of class label information. With class-aware gradient and cross-entropy decomposition, we reveal how class labels and associated losses influence GAN's training. Based on that, we propose Activation Maximization Generative Adversarial Networks (AM-GAN) as an advanced solution. Comprehensive experiments have been conducted to validate our analysis and evaluate the effectiveness of our solution, where AM-GAN outperforms other strong baselines and achieves state-of-the-art Inception Score (8.91) on CIFAR-10. In addition, we demonstrate that, with the Inception ImageNet classifier, Inception Score mainly tracks the diversity of the generator, and there is, however, no reliable evidence that it can reflect the true sample quality. We thus propose a new metric, called AM Score, to provide a more accurate estimation of the sample quality. Our proposed model also outperforms the baseline methods in the new metric.
This paper is a thorough investigation of various "class aware" GAN architectures. It proposes a variety of modifications of existing approaches and additionally provides extensive analysis of the commonly used Inception Score evaluation metric.

The paper starts by introducing and analyzing two previous class-aware GANs - a variant of the Improved GAN architecture used for semi-supervised results (named Label GAN in this work) and AC-GAN, which augments the standard discriminator with an auxiliary classifier to classify both real and generated samples as specific classes. The paper then discusses the differences between these two approaches and analyzes the loss functions and their corresponding gradients. Label GAN's loss encourages the generator to assign all probability mass cumulatively across the k different label classes while the discriminator tries to assign all probability mass to the (k+1)-th output corresponding to a "generated" class. The paper views the generator's loss as a form of implicit class target loss. This analysis motivates the paper's proposed extension, called Activation Maximization. It corresponds to a variant of Label GAN where the generator is encouraged to maximize the probability of a specific class for every sample instead of just the cumulative probability assigned to label classes. The proposed approach performs strongly according to Inception Score on CIFAR-10 and includes additional experiments on Tiny ImageNet to further increase confidence in the results.

A discussion throughout the paper involves dealing with the issue of mode collapse - a problem plaguing standard GAN variants. In particular the paper discusses how variants of class conditioning affect this problem. The paper presents a useful experimental finding - dynamic labeling, where targets are assigned based on whatever the discriminator thinks is the most likely label, helps prevent mode collapse compared to the predefined assignment approach used in AC-GAN / standard class conditioning. I am unclear how exactly predefined vs. dynamic labeling is applied in the case of the Label GAN results in Table 1. The definition of dynamic labeling is specific to the generator as I interpreted it. But Label GAN includes no class-specific loss for the generator. I assume it refers to the form of the generator - whether it is class conditional or not - even though it would have no explicit loss for the class-conditional version. It would be nice if the authors could clarify the details of this setup.

The paper additionally performs a thorough investigation of the Inception Score and proposes a new metric, the AM Score. Thorough analysis of the behavior of the Inception Score has been lacking, so this is an important contribution as well. As a reader, I found this paper to be thorough, honest, and thoughtful. It is a strong contribution to the "class aware" GAN literature.
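For reference (standard definition, not specific to this paper), the Inception Score discussed above is

\mathrm{IS}(G) = \exp\big(\mathbb{E}_{x \sim p_G}\,[\,\mathrm{KL}(p(y\mid x)\,\|\,p(y))\,]\big), \qquad p(y) = \mathbb{E}_{x \sim p_G}[p(y\mid x)],

where p(y|x) is the label posterior of the ImageNet-pretrained Inception classifier. The score grows when individual samples receive confident (low-entropy) label predictions while the marginal p(y) stays high-entropy, which is why, as the paper argues, it largely tracks diversity rather than per-sample quality.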
iclr_2018_HyI6s40a-
Recent advances in adversarial Deep Learning (DL) have opened up a new and largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart the potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model. The proposed PCL methodology is unsupervised, meaning that no adversarial sample is leveraged to build/train parallel checkpointing learners. We formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint checkpointing modules are trained and leveraged to validate the victim model execution in parallel. Each checkpointing learner explicitly characterizes the geometry of the input data and the corresponding high-level data abstractions within a particular DL layer. As such, the adversary is required to simultaneously deceive all the defender modules in order to succeed. We extensively evaluate the performance of the PCL methodology against the state-of-the-art attack scenarios, including Fast-Gradient-Sign (FGS), Jacobian Saliency Map Attack (JSMA), Deepfool, and Carlini&WagnerL2. Extensive proof-of-concept evaluations for analyzing various data collections including MNIST, CIFAR10, and ImageNet corroborate the effectiveness of our proposed defense mechanism against adversarial samples.
Summary: The paper presents an unsupervised method for detecting adversarial examples of neural networks. The method includes two independent components: an 'input defender' which tries to inspect the input, and a 'latent defender' trying to inspect a hidden representation. Both are based on the claim that adversarial examples lie outside a certain sub-space occupied by the natural image examples, and modeling this sub-space hence enables their detection. The input defender is based on sparse coding, and the latent defender on modeling the latent activity as a mixture of Gaussians. Experiments are presented on MNIST, Cifar10, and ImageNet.

- Introduction: The motivation for detecting adversarial examples is not stated clearly enough. How can such examples be used by a malicious agent to cause damage to a system? Sketching some such scenarios would help the reader understand why the issue is practically important. I was not convinced it is.

Page 4:
- Step 3 of the algorithm is not clear:
o How exactly does HDDA model the data (formally) and how does it estimate the parameters? In the current version, the paper does not explain the HDDA formalism and learning algorithm, which is a main building block in the proposed system (as it provides the density score used for adversarial example detection). Hence the paper cannot be read as a standalone document. I went on to read the relevant HDDA paper, but it is also not clear which of the model variants presented there is used in this paper.
o What is the relation between the model learned at stage 2 (the centers c^i) and the model learnt by HDDA? Are they completely different models? Or are the c^i used when learning the HDDA model (and how)? If these are separate models, how are they used in conjunction to give a final density score? If I understand correctly, only the HDDA model is used to get the final score, and the c^i are only used to make the \phi(x) representation more class-separable. Is that right?
- Figure 4, b and c: it is not clear what the (x,y,z) measurements plotted in these 3D drawings are (what are the axes?).

Page 5:
- Section 2: the risk analysis is done in a standard Bayesian way and leads to a ratio of PDFs in equation 5. However, this form is not appropriate for the case presented in this paper, since the method presented only models one of these PDFs (specifically p(x | W1); there is no generative model of p(x | W2)).
- The authors claim in the last sentence of the section that p(x | W2) is equivalent to 1 - p(x | W1), but this is not true: these are two continuous densities, they do not sum to 1, and a model of p(x | W2) is not available (as far as I understand the method).

Page 6:
- How is equation (7) optimized?
- Which patches are extracted from images, for training and at inference time? Are these patches a dense coverage of the image? Sparsely sampled? Densely sampled with overlaps?
- It is not clear enough what exactly the 'PSNR' value used for adversarial example detection is, and what exactly 'profile the PSNR of legitimate samples within each class' means. A formal definition of PSNR and 'profiling' is missing (does profiling simply mean finding a threshold for filtering?).

Page 7:
- Figure 7 is not very informative. Given the ROC curves in figure 8 and table 1, it is redundant.

Page 8:
- The results in general indicate that the method is much better than chance, but it is not clear if it is practical, because the false alarm rates for high detection are quite high.
For example, on ImageNet, 14.2% of the innocent images are mistakenly rejected as malicious to get a 90% detection rate. I do not think this working point is useful for a real application.
- Given the high false alarm rate, it is surprising that experiments with multiple checkpoints are not presented (specifically as the case of multiple checkpoints is discussed explicitly in previous sections of the paper). Experiments with multiple checkpoints are clearly required to complete the picture regarding the empirical performance of this method.
- The experiments show that essentially the latent defenders are stronger than the input defender in most cases. However, an ablation study of the latent defender is missing. Specifically, it is not clear a) how much stage 2 (model refinement with clusters) contributes to the accuracy (how does the model do without it?), and b) how important the HDDA and the specific variant used (which is not clear) are: is it important to model the Gaussians using a sub-space? Of which dimension?

Overall:
Pros:
- A nice idea with some novelty, based on a non-trivial observation
- The experimental results show that the idea holds some promise
Cons:
- The method is not presented clearly enough: the main component modeling the network activity (the HDDA module used) is not explained
- The results presented show that the method is probably not suitable for a practical application yet (high false alarm rate for a good detection rate)
- Experimental results are partial: results are not presented for multiple defenders, and there are no ablation experiments

After revision: Some of my comments were addressed, and some were not. Specifically, results were presented for multiple defenders and some ablation experiments were highlighted. Things not addressed:
- The risk analysis is still not relevant. The authors removed a clearly flawed sentence, but the analysis still assumes that two densities (of 'good' and 'bad' examples) are modeled, while in the work presented only one of them is. Hence this analysis does not add anything to the paper: it states a general case which does not fit the current scenario, and its relation to the work is not clear. It would have been better to omit it and use the space to describe HDDA and the specific variant used in this work, as this is the main tool doing the distinction.

I believe the paper should be accepted.
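For reference, the standard PSNR definition (presumably what the review is asking the authors to state formally) is

\mathrm{PSNR} = 10\log_{10}\!\frac{\mathrm{MAX}^2}{\mathrm{MSE}}, \qquad \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}(x_i - \hat{x}_i)^2,

where MAX is the maximum possible signal value, x is the original input, and x-hat its reconstructed (or perturbed) counterpart; "profiling" would then presumably amount to estimating the distribution of this quantity on legitimate samples of each class and thresholding it.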
iclr_2018_Sy-dQG-Rb
Published as a conference paper at ICLR 2018
NEURAL SPEED READING VIA SKIM-RNN
Inspired by the principles of speed reading, we introduce Skim-RNN, a recurrent neural network (RNN) that dynamically decides to update only a small fraction of the hidden state for relatively unimportant input tokens. Skim-RNN gives computational advantage over an RNN that always updates the entire hidden state. Skim-RNN uses the same input and output interfaces as a standard RNN and can be easily used instead of RNNs in existing models. In our experiments, we show that Skim-RNN can achieve significantly reduced computational cost without losing accuracy compared to standard RNNs across five different natural language tasks. In addition, we demonstrate that the trade-off between accuracy and speed of Skim-RNN can be dynamically controlled during inference time in a stable manner. Our analysis also shows that Skim-RNN running on a single CPU offers lower latency compared to standard RNNs on GPUs.
Summary: The paper proposes a learnable skimming mechanism for RNNs. The model decides whether to send the word to a larger heavy-weight RNN or a light-weight RNN. The heavy-weight and the light-weight RNN each control a portion of the hidden state. The paper finds that with the proposed skimming method, they achieve a significant reduction in terms of FLOPs. Although it doesn't contribute much speedup on modern GPU hardware, there is a good speedup on CPU, and it is more power-efficient.

Contribution:
- The paper proposes to use a small RNN to read unimportant text. Unlike (Yu et al., 2017), which skips the text, here the model decides between a small and a large RNN.

Pros:
- Models that dynamically decide the amount of computation make intuitive sense and are of general interest.
- The paper presents solid experimentation on various text classification and question answering datasets.
- The proposed method has shown a reasonable reduction in FLOPs and a CPU speedup with no significant accuracy degradation (and an increase in accuracy on some tasks).
- The paper is well written, and the presentation is good.

Cons:
- Each model component is not novel. The authors propose to use Gumbel softmax, but do not compare other gradient estimators. It would be good to use REINFORCE to do a fair comparison with (Yu et al., 2017) to see the benefit of using a small RNN.
- The authors report that training from scratch results in an unstable skim rate, while half pretraining seems to always work better than fully pretrained models. This makes the success of training a bit ad hoc, as one needs to actively tune the number of pretraining steps.
- Although there are differences from (Yu et al., 2017), the contribution of this paper is still incremental.

Questions:
- Although it is out of the scope of this paper to achieve GPU-level speedup, I am curious to know some numbers on GPU speedup.
- One recommended task would probably be text summarization, in which the attended text can contribute to the output of the summary.

Conclusion:
- Based on the comments above, I recommend Accept.
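For readers unfamiliar with the gradient estimator mentioned in the Cons above, here is a minimal sketch of a Gumbel-softmax relaxation applied to a two-way "full update vs. skim" decision. It is illustrative only and simplified: Skim-RNN itself updates only a small slice of the hidden state in the skim branch, whereas this toy mixes two full candidate states.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Differentiable sample from a categorical distribution via the
    Gumbel-softmax / Concrete relaxation."""
    u = torch.rand_like(logits).clamp(1e-9, 1 - 1e-9)
    gumbel = -torch.log(-torch.log(u))
    return F.softmax((logits + gumbel) / tau, dim=-1)

# Toy skim decision: logits over {full update, skim}, per token in a batch of 4.
logits = torch.randn(4, 2, requires_grad=True)
probs = gumbel_softmax_sample(logits)

# Mix the two candidate hidden states with the (soft) decision during training.
h_full, h_skim = torch.randn(4, 100), torch.randn(4, 100)
h_next = probs[:, :1] * h_full + probs[:, 1:] * h_skim
h_next.sum().backward()            # gradients flow back into the decision logits
print(probs.detach())
```

REINFORCE would instead sample hard decisions and score them with the log-probability trick, which is what a fair comparison with (Yu et al., 2017) would require.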
iclr_2018_HkPCrEZ0Z
Model-free deep reinforcement learning algorithms are able to successfully solve a wide range of continuous control tasks, but typically require many on-policy samples to achieve good performance. Model-based RL algorithms, on the other hand, are sample-efficient, but learning accurate global models of complex dynamic environments has turned out to be tricky in practice, which leads to unsatisfactory performance of the learned policies. In this work, we combine the sample-efficiency of model-based algorithms and the accuracy of model-free algorithms. We leverage multi-step neural-network-based predictive models by embedding real trajectories into imaginary rollouts of the model, and use the imaginary cumulative rewards as control variates for model-free algorithms. In this way, we achieve the strengths of both sides and derive an estimator which is not only sample-efficient, but also unbiased and of very low variance. We present our evaluation on the MuJoCo and OpenAI Gym benchmarks.
This paper presents a model-based approach to variance reduction in policy gradient methods. The basic idea is to use a multi-step dynamics model as a "baseline" (more properly a control variate, as the paper's terminology has it, but I think baselines are more familiar to the RL community) to reduce the variance of a policy gradient estimator, while remaining unbiased. The authors also discuss how to best learn the type of multi-step dynamics model that is well-suited to this problem (essentially, using off-policy data via importance weighting), and they demonstrate the effectiveness of the approach on four continuous control tasks.

This paper presents a nice idea, and I'm sure that with some polish it will become a very nice conference submission. But right now (at least as of the version I'm reviewing), the paper reads as being half-finished. Several terms are introduced without being properly defined, and one of the key formalisms presented in the paper (the idea of "embedding" an "imaginary trajectory") remains completely opaque to me. Further, the paper seems to simply leave out some portions: the introduction claims that one of the contributions is "we show that techniques such as latent space trajectory embedding and dynamic unfolding can significantly boost the performance of the model based control variates," but I see literally no section that hints at anything like this (no mention of "dynamic unfolding" or "latent space trajectory embedding" ever occurs later in the paper).

In a bit more detail, the key idea of the paper, at least to the extent that I understood it, is that the authors are able to introduce a model-based variance-reduction baseline into the policy gradient term. But because (unlike traditional baselines) introducing it alone would affect the actual estimate, they actually just add and subtract this term, and separate out the two terms in the policy gradient: the new policy-gradient-like term will be much smaller, and the other term can be computed with less variance using model-based methods and the reparameterization trick. But beyond this, and despite fairly reasonable familiarity with the subject, I simply don't understand other elements that the paper is talking about. The paper frequently refers to "embedding" "imaginary trajectories" into the dynamics model, and I still have no idea what this is actually referring to (the definition at the start of section 4 is completely opaque to me). I also don't really understand why something like this would be needed given the understanding above, but it's likely I'm just missing something here. But I also feel that in this case, it borders on being an issue with the paper itself, as I think this idea needs to be described much more clearly if it is central to the underlying paper.

Finally, although I do think the extent of the algorithm that I could follow is interesting, the second issue with the paper is that the results are fairly weak as they stand currently. The improvement over TRPO is quite minor in most of the evaluated domains (other than possibly the swimmer task), even with substantial added complexity to the approach. And the experiments are described with very little detail or discussion about the experimental setup. Neither of these issues is simply due to space constraints: the paper is 2 pages under the soft ICLR limit, with no appendix. Not that there is anything wrong with short papers, but in this case both the clarity of presentation and the details are lacking.
My honest impression is simply that this is still work in progress and that the write-up was done rather hastily. I think it will eventually become a good paper, but it is not ready yet.
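For readers unfamiliar with the add-and-subtract construction the review describes, the generic form of such a model-based control variate (my paraphrase, not the paper's exact estimator) is

\nabla_\theta J(\theta) = \mathbb{E}\big[\nabla_\theta \log \pi_\theta(a\mid s)\,\big(\hat{Q}(s,a) - \tilde{Q}(s,a)\big)\big] \;+\; \nabla_\theta\,\mathbb{E}_{a\sim\pi_\theta}\big[\tilde{Q}(s,a)\big],

where \hat{Q} is the sampled return and \tilde{Q} is the return predicted by rolling out the learned multi-step dynamics model. The identity holds for any \tilde{Q}, so the estimator stays unbiased; the first term has low variance when the model is accurate, and the second term can be differentiated analytically through the learned model with the reparameterization trick.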
iclr_2018_ryZElGZ0Z
The ability of an agent to discover its own learning objectives has long been considered a key ingredient for artificial general intelligence. Breakthroughs in autonomous decision making and reinforcement learning have primarily been in domains where the agent's goal is outlined and clear, such as playing a game to win or driving safely. Several studies have demonstrated that learning extramural sub-tasks and auxiliary predictions can improve (1) single human-specified task learning, (2) transfer of learning, and (3) the agent's learned representation of the world. In all these examples, the agent was instructed what to learn about. We investigate a framework for discovery: curating a large collection of predictions, which are used to construct the agent's representation of the world. Specifically, our system maintains a large collection of predictions, continually pruning and replacing predictions. We highlight the importance of considering stability rather than convergence for such a system, and develop an adaptive, regularized algorithm towards that aim. We provide several experiments in computational micro-worlds demonstrating that this simple approach can be effective for discovering useful predictions autonomously.
I really enjoyed reading this paper and stopped a few times to write down new ideas it brought up. It is well written and very clear, but somewhat lacking in experimental or theoretical results.

The formulation of AdaGain is very reminiscent of the SGA algorithm in Kushner & Yin (2003), and more generally gradient descent optimization of the learning rate is not new. The authors argue for the focus on stability over convergence, which is an interesting focus, but still I found the lack of connection with related work in this section strange. How would a simple RNN work for the experimental problems?

The first experiment demonstrates that the regularization uses fewer features than without it, which one could argue does not need to be compared with other methods to be useful. Especially when combined with Figure 5, I am convinced the regularization is doing a good job of pruning the least important GVFs. However, the results in Figure 3 have no context for us to judge the results within. Is this effective or terrible? Fast or slow? It is really hard to judge from these results. We can say that more GVFs are better, and that the compositional GVFs add to the ability to lower RMSE. But I do not think this is enough to really judge the method beyond a preliminary "looks promising".

The compositional GVFs also left me wondering: what keeps a GVF that is depended upon by a compositional GVF from being pruned? This was not obvious to me. Also, I think comparing GVFs and AdaGain-R with an RNN approach highlights the more general question: is it generally true that GVFs set up like this can learn to represent any value function that an RNN could have? There's an obvious benefit to this approach, which is that you do not need BPTT (fantastic), but why not highlight this? The network being used is essentially a recurrent neural net; the authors restrict it and train it, not with backprop, but with TD, which is very interesting. But I think there is not quite enough here.

Pros: Well written, very interesting approach and ideas. Conceptually simple, should be easy to reproduce results.
Cons: AdaGain never gets analyzed or evaluated except for the evaluations of AdaGain-R. No experimental context; we need a non-trivial baseline to compare with.
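For context on the related-work point above, here is a minimal sketch of what gradient-descent adaptation of learning rates looks like. This is IDBD-style step-size adaptation for an LMS learner (Sutton, 1992), shown only for illustration; AdaGain differs in detail, since it targets stability of the update rather than this exact rule.

```python
import numpy as np

def idbd_update(w, alpha_log, h, x, target, meta_lr=0.01):
    """One IDBD step: per-weight step-sizes exp(alpha_log) are themselves
    adapted by a meta-gradient on the squared prediction error."""
    delta = target - w @ x                       # prediction error
    alpha_log = alpha_log + meta_lr * delta * x * h
    alpha = np.exp(alpha_log)
    w = w + alpha * delta * x                    # ordinary LMS step, per-weight rates
    h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
    return w, alpha_log, h

d = 8
rng = np.random.default_rng(1)
w, alpha_log, h = np.zeros(d), np.full(d, np.log(0.05)), np.zeros(d)
w_true = rng.normal(size=d)
for _ in range(2000):
    x = rng.normal(size=d)
    w, alpha_log, h = idbd_update(w, alpha_log, h, x, w_true @ x)
print(np.abs(w - w_true).max())   # should be small after training
```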
iclr_2018_BkrSv0lA-
LOSS-AWARE WEIGHT QUANTIZATION OF DEEP NETWORKS
The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate as (or even more accurate than) the full-precision network.
This paper proposes a new method to train DNNs with quantized weights, by including the quantization as a constraint in a proximal quasi-Newton algorithm, which simultaneously learns a scaling for the quantized values (possibly different for positive and negative weights). The paper is very clearly written, and the proposal is very well placed in the context of previous methods for the same purpose. The experiments are very clearly presented and solidly designed. In fact, the paper is a somewhat simple extension of the method proposed by Hou, Yao, and Kwok (2017), which is where the novelty resides. Consequently, there is not a great degree of novelty in terms of the proposed method, and the results are only slightly better than those of previous methods. Finally, in terms of analysis of the algorithm, the authors simply invoke a theorem from Hou, Yao, and Kwok (2017), which claims convergence of the proposed algorithm. However, what is shown in that paper is that the sequence of loss function values converges, which does not imply that the sequence of weight estimates also converges, because of the presence of a non-convex constraint ($b_j^t \in Q^{n_l}$). This may not be relevant for the practical results, but to be accurate, it can't be simply stated that the algorithm converges, without a more careful analysis.
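For readers unfamiliar with the quantization scheme under review, here is a minimal sketch of threshold-based ternarization with separate positive and negative scales, in the spirit of ternary-weight-network practice. It is illustrative only: the paper's loss-aware variant instead chooses the quantized values and scales through a proximal quasi-Newton step on the training loss.

```python
import numpy as np

def ternarize(w: np.ndarray, threshold_ratio: float = 0.7) -> np.ndarray:
    """Quantize weights to {-alpha_n, 0, +alpha_p} with separate scales for the
    positive and negative sides (simple magnitude-threshold heuristic)."""
    delta = threshold_ratio * np.mean(np.abs(w))
    pos, neg = w > delta, w < -delta
    alpha_p = np.mean(w[pos]) if pos.any() else 0.0
    alpha_n = np.mean(-w[neg]) if neg.any() else 0.0
    q = np.zeros_like(w)
    q[pos] = alpha_p
    q[neg] = -alpha_n
    return q

w = np.random.default_rng(0).normal(scale=0.1, size=(4, 4))
print(ternarize(w))
```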
iclr_2018_B1Gi6LeRZ
Published as a conference paper at ICLR 2018
LEARNING FROM BETWEEN-CLASS EXAMPLES FOR DEEP SOUND RECOGNITION
Deep learning methods have achieved high performance in sound recognition tasks. Deciding how to feed the training data is important for further performance improvement. We propose a novel learning method for deep sound recognition: Between-Class learning (BC learning). Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds. We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the model to output the mixing ratio. The advantages of BC learning are not limited to the increase in variation of the training data; BC learning leads to an enlargement of Fisher's criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes. The experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in all of which BC learning proves to be beneficial. Furthermore, we construct a new deep sound recognition network (EnvNet-v2) and train it with BC learning. As a result, we achieve a performance that surpasses the human level.
This manuscript proposes a method to improve the performance of a generic learning method by generating "in between class" (BC) training samples. The manuscript motivates the necessity of such a technique and presents the basic intuition. The authors show how the so-called BC learning helps training different deep architectures for the sound recognition task.

My first remark regards the presentation of the technique. The authors argue that it is not a data augmentation technique, but rather a learning method. I strongly disagree with this statement, not only because the technique deals exactly with augmenting data, but also because it can be used in combination with any learning method (including non-deep-learning methodologies). Naturally, the literature review deals with data augmentation techniques, which supports my point of view. In this regard, I would have expected a comparison with other state-of-the-art data augmentation techniques. The usefulness of the BC technique is proven to a certain extent (see the paragraph below), but there is no comparison with the state of the art. In other words, the authors do not compare the proposed method with other methods doing data augmentation. This is crucial to understand the advantages of the BC technique.

There is a more fundamental question for which I was not able to find an explicit answer in the manuscript. Intuitively, the diagram shown in Figure 4 works well for 3 classes in dimension 2. If we add another class, no matter how we define the borders, there will be one pair of classes for which the transition from one to another passes through the region of a third class. The situation worsens with more classes. However, this can be solved by adding one dimension: 4 classes and 3 dimensions seem feasible. One can easily understand that with one more class than the number of dimensions the assumption should be feasible, but beyond that it starts to get problematic. This discussion does not appear at all in the manuscript, and it would be an important limitation of the method, especially when dealing with large-scale datasets.

Overall I believe the paper is not mature enough for publication.

Some minor comments:
- 2.1: "We introduce" --> "We discuss"
- Piczak 2015a did not propose the extraction of MFCC.
- The x_i and t_i of section 3.2.2 should not be denoted with the same letters as in 3.2.1.
- The correspondence with a semantic feature space is too pretentious, especially since no experiment in this direction is shown.
- I understand that there is no mixing in the test phase; perhaps it would be useful to recall this.
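To make the core procedure being reviewed concrete, here is a minimal sketch of between-class mixing for raw waveforms. It is illustrative only: a plain linear mix with the ratio used as a soft label, whereas the paper additionally accounts for the two sounds' perceived loudness when choosing the mixing coefficients, and trains with a KL-divergence loss against the soft target.

```python
import numpy as np

def bc_mix(x1, x2, n_classes, c1, c2, rng=np.random.default_rng()):
    """Mix two sounds from different classes with a random ratio r and
    return the mixed waveform together with the ratio as a soft label."""
    r = rng.uniform(0.0, 1.0)
    x = r * x1 + (1.0 - r) * x2
    t = np.zeros(n_classes)
    t[c1], t[c2] = r, 1.0 - r          # soft target: the mixing ratio itself
    return x, t

x1, x2 = np.random.randn(16000), np.random.randn(16000)   # 1 s at 16 kHz, stand-ins
x_mixed, t = bc_mix(x1, x2, n_classes=10, c1=3, c2=7)
print(t)
```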
iclr_2018_BJypUGZ0Z
Workshop track - ICLR 2018 ACCELERATING NEURAL ARCHITECTURE SEARCH USING PERFORMANCE PREDICTION Methods for neural network hyperparameter optimization and architecture search are computationally expensive due to the need to train a large number of model configurations. In this paper, we show that simple regression models can predict the final performance of partially trained model configurations using features based on network architectures, hyperparameters, and time-series validation performance data. We empirically show that our performance prediction models are much more accurate than prominent Bayesian counterparts, are simpler to implement, and are faster to train. Our models can predict final performance in both visual classification and language modeling domains, are effective for predicting performance of drastically varying model architectures, and can even generalize between model classes. Using these prediction models, we also implement an early stopping method for hyperparameter optimization and architecture search, which obtains a speedup of a factor up to 6x in both hyperparameter optimization and architecture search. Finally, we empirically show that our early stopping method can be seamlessly incorporated into both reinforcement learning-based architecture selection algorithms and bandit-based search methods. Through extensive experimentation, we empirically show our performance prediction models and early stopping algorithm are state-of-the-art in terms of prediction accuracy and speedup achieved while still identifying the optimal model configurations.
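A minimal sketch of the kind of regressor described above: the features are a hyperparameter vector concatenated with the observed prefix of the validation curve, and the target is the final validation accuracy. The data here is synthetic and the kernel and C values are illustrative choices, not the paper's settings.

import numpy as np
from sklearn.svm import SVR

# Hypothetical data: n configurations, each with a hyperparameter vector
# and a validation-accuracy time series of length T; t is the observed prefix.
n, T, t = 100, 20, 5
hparams = np.random.rand(n, 4)                       # e.g. lr, depth, width, dropout
curves = np.sort(np.random.rand(n, T), axis=1)       # toy increasing learning curves
y_final = curves[:, -1]                              # final validation accuracy

# One regressor per prefix length t: features = hyperparameters + y_{1:t}.
X = np.hstack([hparams, curves[:, :t]])
model = SVR(kernel="rbf", C=10.0).fit(X, y_final)

# Predict the final performance of a new, partially trained configuration.
x_new = np.hstack([np.random.rand(4), np.sort(np.random.rand(t))])
print(model.predict(x_new.reshape(1, -1)))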
This paper shows a simple method for predicting the performance that neural networks will achieve with a given architecture and hyperparameters, based on an initial part of the learning curve. The method assumes that it is possible to first execute 100 evaluations up to the total number of epochs. From these 100 evaluations (with different hyperparameters / architectures), the final performance y_T is collected. Then, based on an arbitrary prefix of epochs y_{1:t}, a model can be learned to predict y_T. There are T different models, one for each prefix y_{1:t} of length t. The type of model used is counterintuitive for me; why use an SVR model? Especially since uncertainty estimates are required, a Gaussian process would be the obvious choice. The predictions in Section 3 appear to be very good, and it is nice to see the ablation study. Section 4 fails to mention that its use of performance prediction for early stopping follows exactly that of Domhan et al (2015) and that this is not a contribution of this paper; this feels a bit disingenuous and should be fixed. The section should also emphasize that the models discussed in this paper are only applicable for early stopping in cases where the function evaluation budget N is much larger than 100. The emphasis on the computational demand of 1-3 minutes for LCE seems like a red herring: MetaQNN trained 2700 networks in 100 GPU days, i.e., about 1 network per GPU hour. It trained 20 epochs for the studied case of CIFAR, so 1-3 minutes per epoch on the CPU can be implemented with zero overhead while the network is training on the GPU. Therefore, the following sentence seems sensational without substance: "Therefore, on a full meta-modeling experiment involving thousands of neural network configurations, our method could be faster by several orders of magnitude as compared to LCE based on current implementations." The experiment on fast Hyperband is very nice at first glance, but the longer I think about it the more questions I have. During the rebuttal I would ask the authors to extend f-Hyperband all the way to the right in Figure 6 (left) and particularly in Figure 6 (right). Especially in Figure 6 (right), the original Hyperband algorithm ends up higher than f-Hyperband. The question this leaves open is whether f-Hyperband would reach the same performance when continued or not. I would also request the paper not to casually mention the 7x speedup that can be found in the appendix, without quantifying this. This is only possible for a large number (40) of Hyperband iterations, and in the interesting cases of the first few iterations speedups are very small. Also, do the simulated speedup results in the appendix account for potentially stopping a new best configuration, or do they simply count how much computational time is saved, without looking at performance? The latter would of course be extremely misleading and should be fixed. I am looking forward to a clarification in the rebuttal period. For relating properly to the literature, the experiment for speeding up Hyperband should also mention previous methods for speeding up Hyperband by a model (I only know one by the authors' reference Klein et al (2017)). Overall, this paper appears very interesting. The proposed technique has some limitations, but in some settings it seems very useful. I am looking forward to the reply to my questions above; my final score will depend on these. Typos / Details: - The range of the coefficient of determination is from 0 to 1.
Table 1 probably reports 100 * R^2? Please fix the description. - I did not see Table 1 referenced in the text. - Page 3: "more computationally and" -> "more computationally efficient and" - Page 3: "for performing final" -> "for predicting final" Points in favor of the paper: - Simple method - Good prediction results - Useful possible applications identified Points against the paper: - Methodological advances are limited / unmotivated choice of model - Limited applicability to settings where >> 100 configurations can be run fully - Possibly inflated results reported for Hyperband experiment
iclr_2018_BJgPCveAW
We propose a novel way of reducing the number of parameters in the storage-hungry fully connected layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training. Our results indicate that convolutional neural networks can operate without any loss of accuracy at less than 0.5% classification layer connection density, or less than 5% overall network connection density. We also investigate the effects of pre-defining the sparsity of networks with only fully connected layers. Based on our sparsifying technique, we introduce the 'scatter' metric to characterize the quality of a particular connection pattern. As proof of concept, we show results on CIFAR, MNIST and a new dataset on classifying Morse code symbols, which highlights some interesting trends and limits of sparse connection patterns.
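To make the pre-defined-sparsity idea concrete, here is a minimal PyTorch sketch of a fully connected layer whose binary connectivity mask is fixed before training. The random mask, the density knob, and the layer sizes are assumptions for illustration; the paper selects connection patterns with windowing heuristics and the 'scatter' metric rather than uniformly at random.

import torch
import torch.nn as nn

class SparseLinear(nn.Module):
    # Fully connected layer with a connectivity mask fixed before training.
    def __init__(self, in_features, out_features, density=0.05):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        mask = (torch.rand(out_features, in_features) < density).float()
        self.register_buffer("mask", mask)   # absent connections stay zero

    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

layer = SparseLinear(512, 10, density=0.005)   # <0.5% classification-layer density
print(layer(torch.randn(8, 512)).shape)        # torch.Size([8, 10])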
This paper examines sparse connection patterns in upper layers of convolutional image classification networks. Networks with very few connections in the upper layers are experimentally determined to perform almost as well as those with full connection masks. Heuristics for distributing connections among windows/groups and a measure called "scatter" are introduced to construct the connectivity masks, and evaluated experimentally on CIFAR-10 and -100, MNIST and Morse code symbols. While it seems clear in general that many of the connections are not needed and can be made sparse (Figures 1 and 2), I found many parts of this paper fairly confusing, both in how it achieves its objectives and in much of the notation and method descriptions. I've described many of the points I was confused by in more detailed comments below. Detailed comments and questions: The distribution of connections in "windows" is first described to correspond to a sort of semi-random spatial downsampling, to get different views distributed over the full image. But in the upper layers, the spatial extent can be very small compared to the image size, sometimes even 1x1 depending on the network downsampling structure. So do the "windows" correspond to spatial windows, and if so, how? Or are they different (maybe arbitrary) groupings over the feature maps? Also a bit confusing is the notation "conv2", "conv3", etc. These names usually indicate the name of a single layer within the network (conv2 for the second convolutional layer or series of layers in the second spatial size after downsampling, for example). But here it seems just to indicate the number of "CL" layers: 2. And p.1 says that the "CL" layers are those often referred to as "FC" layers, not "conv" (though they may be convolutionally applied with spatial 1x1 kernels). The heuristic for spacing connections in windows across the spatial extent of an image makes intuitive sense, but I'm not convinced this will work well in all situations, and may even be sub-optimal for the examined datasets. For example, to distinguish MNIST 1 vs 7 vs 9, it is most important to see the top-left: whether it is empty, has a horizontal line, or a loop. So some regions are more important than others, and the top half may be more important than an equally spaced global view. So the description of how to space connections between windows makes some intuitive sense, but I'm unclear on whether other more general connections might be even better, including some that might not be as easily analyzed with the "scatter" metric described. Another broader question I have is in the distinction between lower and upper layers (those referred to as "feature extracting" and "classification" in this paper). It's not clear to me that there is a crisply defined difference here (though some layers may tend to do more of one or the other function, however we might interpret them). So it seems that expanding the investigation to include all layers, or at least more layers, would be good: It might be that more of the "classification" function is pushed down to lower layers, as the upper layers are reduced in size. How would they respond to similar reductions? I'm also unsure why on p.6 MNIST uses 2d windows, while CIFAR uses 3d --- The paper mentions the extra dimension is for features, but MNIST would have a features dimension as well at this stage, I think? I'm also unsure whether the windows are over spatial extent only, or over features.
iclr_2018_Hk6WhagRW
Published as a conference paper at ICLR 2018 EMERGENT COMMUNICATION THROUGH NEGOTIATION Multi-agent reinforcement learning offers a way to study how communication could emerge in communities of agents needing to solve specific problems. In this paper, we study the emergence of communication in the negotiation environment, a semi-cooperative model of agent interaction. We introduce two communication protocols -one grounded in the semantics of the game, and one which is a priori ungrounded and is a form of cheap talk. We show that self-interested agents can use the pre-grounded communication channel to negotiate fairly, but are unable to effectively use the ungrounded channel. However, prosocial agents do learn to use cheap talk to find an optimal negotiating strategy, suggesting that cooperation is necessary for language to emerge. We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.
The authors describe a variant of the negotiation game involving agents of different types, selfish or prosocial, and with different preferences. The central feature is the consideration of a secondary communication (linguistic) channel for the purpose of cheap talk, i.e. talk whose semantics are not laid out a priori. The essential findings include that prosociality is a prerequisite for effective communication (i.e. formation of meaningful communication on the linguistic channel), and furthermore, that the secondary channel helps improve the negotiation outcomes. The paper is well-structured and incrementally introduces the added features and includes staged evaluations for the individual additions, starting with the differentiation of agent characteristics, explored with a combination of the linguistic and proposal channels. Finally, agent societies are represented by injecting individuals' IDs into the input representation. The positive: - The authors attack the challenging task of giving agents a means to develop communication patterns without a priori knowledge. - The paper presents the problem in a well-structured manner and with sufficient clarity to retrace the essential contribution (minor points for improvement). - The quality of the text is very high and error-free. - The background and results are well-contextualised with relevant related work. The problematic: - By the very nature of the employed learning mechanisms, the provided solution provides little insight into what the emerging communication is really about. In my view, the lack of interpretable semantics hardly warrants a reference to 'cheap talk'. As such the expectations set by the well-developed introduction and background sections are moderated over the course of the paper. - The goal of providing agents with richer communicative ability without providing prior grounding is challenging, since agents need to learn about communication partners at runtime. But it appears as if the main contribution of the paper can be reduced to the decomposition of the learnable feature space into two communication channels. The implicit dependence of the linguistic channel on the proposal channel input based on the time information (Page 4, top) provides agents with extended inputs, thus enabling a more nuanced learning based on the relationship of the proposal and linguistic channels. As such the well-defined semantics of the proposal channel effectively act as the grounding for the linguistic channel. This, then, could have been equally achieved by providing agents with a richer input structure mediated by a single channel. From this perspective, the solution offers limited surprises. The improvement of accuracy in the context of agent societies based on the provided ID follows the same pattern of extending the input features. - One of the motivating factors of using cheap talk is the exploitation of lying on the part of the agents. However, apart from this initial statement, this feature is not explicitly picked up. In combination with the previous point, the necessity/value of the additional communication channel is unclear. Concrete suggestions for improvement: - Providing example communication traces would help the reader appreciate the complexity of the problem addressed by the paper. - Figure 3 is really hard to read/interpret. The same applies to Figure 4 (although less critical in this case). - Input parameters could have been made explicit in order to facilitate a more comprehensive understanding of technicalities (e.g. in an appendix).
- Emergent communication is effectively unidirectional, with one agent as listener. Have you observed other outcomes in your evaluation? In summary, the paper presents an interesting approach to combine unsupervised learning with multiple communication channels to improve learning of preferences in a well-established negotiation game. The problem is addressed systematically and well-presented, but can leave the reader with the impression that the secondary channel, apart from decomposing the model, does not provide conceptual benefit over introducing a richer feature space that can be exploited by the learning mechanisms. Combined with the lack of specific cheap talk features, the use of actual cheap talk is rather abstract. Those aspects warrant justification.
iclr_2018_B1X0mzZCW
Published as a conference paper at ICLR 2018 FIDELITY-WEIGHTED LEARNING Training deep neural networks requires many training samples, but in practice training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label-quality into account when learning the data representation, we could get the best of both worlds. To this end, we propose "fidelity-weighted learning" (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network (trained on the task we care about) on a per-sample basis according to the posterior confidence of its label-quality estimated by a teacher (who has access to the high-quality labels). Both student and teacher are learned from the data. We evaluate FWL on two tasks in information retrieval and natural language processing where we outperform state-of-the-art alternative semi-supervised methods, indicating that our approach makes better use of strong and weak labels, and leads to better task-dependent data representations.
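The following sketch illustrates the student-teacher mechanics described above: a Gaussian-process teacher fit on the strongly-labelled data supplies soft labels plus per-sample uncertainty for the weak set, and that uncertainty modulates a per-sample step size for the student. The kernel choice, the exponential modulation, and the toy data are assumptions for illustration, not the paper's exact configuration.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Teacher: a GP fit on the small strongly-labelled set, in the student's
# learned representation space (here just random 16-d features).
X_strong = np.random.randn(50, 16)
y_strong = np.random.randn(50)
teacher = GaussianProcessRegressor().fit(X_strong, y_strong)

# Weak samples: the teacher supplies soft labels and per-sample uncertainty.
X_weak = np.random.randn(200, 16)
y_soft, std = teacher.predict(X_weak, return_std=True)

# Fidelity weights shrink the step size for low-confidence labels; the
# exponential form below is one plausible modulation, not the paper's.
weights = np.exp(-std / (std.mean() + 1e-8))
base_lr = 1e-3
per_sample_lr = base_lr * weights   # used to scale the student's updates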
The problem of interest is to train deep neural network models with few labelled training samples. The specific assumption is that there is a large pool of unlabelled data, and a heuristic function that can provide label annotations, possibly with varying levels of noise, to those unlabelled data. The adopted learning model is a student/teacher framework as in privileged learning/knowledge distillation/model compression, and also machine teaching. The student (deep neural network) model will learn from both labelled and unlabelled training data with the labels provided by the teacher (Gaussian process) model. The teacher also supplies an uncertainty estimate for each predicted label. How about the heuristic function? This is used for learning the initial feature representation of the student model. Crucially, the teacher model will also rely on these learned features. Labelled data and unlabelled data therefore lie in the same dimensional space. Specific questions to be addressed: 1) Clustering of strongly-labelled data points. Thinking about the statement "each an expert on this specific region of data space", if this is the case, I would expect a clustering of both strongly-labelled data points and weakly-labelled data points. Each teacher model is trained on a portion of strongly-labelled data, and will only predict similar weakly-labelled data. On a related remark, the claimed nice side-effect does not quite hold, as it was emphasized that data points with a high-quality label will be limited. As well, GP models are quite scalable nowadays (experiments with millions to billions of data points are available in recent NIPS/ICML papers, though they all rely on low dimensionality of the feature space for optimizing the inducing point locations). It will be informative to provide results with a single GP model. 2) From modifying learning rates to weighting samples. Rather than using uncertainty in the label annotation as a multiplicative factor in the learning rate, it is more "intuitive" to use it to modify the sampling procedure of mini-batches (akin to baseline #4); sample with higher probability data points with higher certainty. Here, an experimental comparison with, for example, an SVM model that takes instance weighting into account, and with a student model trained on logits (as in knowledge distillation/model compression), would be informative.
iclr_2018_S1Y7OOlRZ
Modern machine learning models are characterized by large hyperparameter search spaces and prohibitively expensive training costs. For such models, we cannot afford to train candidate models sequentially and wait months before finding a suitable hyperparameter configuration. Hence, we introduce the large-scale regime for parallel hyperparameter tuning, where we need to evaluate orders of magnitude more configurations than available parallel workers in a small multiple of the wall-clock time needed to train a single model. We propose a novel hyperparameter tuning algorithm for this setting that exploits both parallelism and aggressive early-stopping techniques, building on the insights of the Hyperband algorithm (Li et al., 2016). Finally, we conduct a thorough empirical study of our algorithm on several benchmarks, including large-scale experiments with up to 500 workers. Our results show that our proposed algorithm finds good hyperparameter settings nearly an order of magnitude faster than random search.
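For concreteness, below is one possible reading of the asynchronous promotion rule implied by the abstract (and questioned in the review that follows): whenever a worker frees up, promote the best completed-but-unpromoted configuration in the highest rung that can support a promotion, otherwise start a fresh configuration in the bottom rung. The data structures, eta=4, and the random search space are assumptions for this sketch; it is not the paper's Algorithm 1.

import random

def get_job(rungs, eta=4):
    # rungs[k] is a list of dicts with keys "config", "loss", "done", "promoted".
    # Scan rungs from the top; promote when fewer than 1/eta of the completed
    # runs in the rung below have already been promoted into rung k.
    for k in reversed(range(1, len(rungs))):
        lower = [r for r in rungs[k - 1] if r["done"]]
        promotable = sorted([r for r in lower if not r["promoted"]],
                            key=lambda r: r["loss"])
        if promotable and len(rungs[k]) < len(lower) // eta:
            best = promotable[0]
            best["promoted"] = True
            return {"config": best["config"], "rung": k}
    # otherwise, start a new configuration at the lowest rung
    return {"config": {"lr": 10 ** random.uniform(-4, -1)}, "rung": 0}

rungs = [[{"config": {"lr": 0.01}, "loss": 0.31, "done": True, "promoted": False}], []]
print(get_job(rungs))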
This paper introduces a simple extension to parallelize Hyperband. Points in favor of the paper: * Addresses an important problem Points against: * Only 5-fold speedup by parallelization with 5 x 25 workers, and worse performance in the same budget than Google Vizier (even though that treats the problem as a black box) * Limited methodological contribution/novelty The paper's methodological contribution is quite limited: it amounts to a straight-forward parallelization of successive halving (SHA). Specifically, whenever a worker frees up, do a new run on it, at the highest rung possible while making sure to not run too many runs for too high rungs. (I am pretty sure that is the idea, even though Algorithm 1, which is supposed to give the details, appears to have a bug in Procedure get_job -- it would always either pick the highest rung or the lowest!) Empirically, the paper strangely does not actually evaluate a parallel version of Hyperband, but only evaluates the 5 parallel variants of SHA that Hyperband would run, each of them with all workers. The experiments in Section 4.2 show that, using 25 workers, the best of these 5 variants obtains a 5-fold speedup over sequential Hyperband on CIFAR and an 8-fold speedup on SVHN. I am confused: the *best* of 5 SHA variants only achieves a 5-fold speedup using 25 workers? I.e., parallel Hyperband, which would run the 5 SHA variants in parallel, would require 125 workers but only yield a 5-fold speedup? If I understand this correctly, I would clearly call this a negative result. Likewise, for the large-scale experiment, a single run of Vizier actually yields as good performance as the best of the 5 SHA variants, and it is unknown beforehand which SHA variant works best -- in this example, actually Bracket 0 (which is often the best) stagnates. Parallel Hyperband would run the 5 SHA variants in parallel, so its performance at a budget of 10R with a total of 500 workers can be evaluated by taking the minimum of the 5 SHA variants at a budget of 2R. This would obtain a perplexity of above 90, which is quite a bit worse than Vizier's result of about 82. In general, the performance of parallel Hyperband can be computed by taking the minimum of the SHA variants and multiplying the time taken by 5; this shows that at any time in the plot (Figure 3, left) Vizier dominates parallel Hyperband. Again, this is apparently a negative result. (For Figure 3, right, no results for Vizier are given yet.) If I understand correctly, the experiment in Section 4.4 does not involve any run of Hyperband, but merely plots predictions of Qi et al.'s Paelo framework of how many models could be evaluated with a growing number of GPUs. Therefore, all empirical results for parallel Hyperband reported in the paper appear to be negative. This confuses me, especially since the authors seem to take them as positive results. Because the original Hyperband paper argued that Bayesian optimization does not parallelize as well as random search / Hyperband, and because Hyperband has been reported to work much better than Bayesian optimization on a single node, I would have expected clear improvements of parallel Hyperband over parallel Bayesian optimization (=Vizier in the authors' setup). However, this is not what I see in the results. Am I mistaken somewhere? If not, based on these negative results the paper does not seem to quite clear the bar for ICLR. 
Details, in order of appearance in the paper: - Vizier: why did the authors only use Vizier's default Bayesian optimization algorithm? The Vizier paper by Golovin et al (2017) states that for large budgets other optimizers often perform better, and the budget in the large scale experiments is as high as 5000 function evaluations. Also, isn't there an automatic choice built into Vizier to pick the optimizer expected to be best? I think using a suboptimal version of Vizier would be a problem for the experimental setup. - Algorithm 1: this needs some improvement; in particular fixing the bug I mentioned above. - Section 3.1: Li et al (2017) do not analyze any algorithm theoretically. They also do not discuss finite vs. infinite horizon. I believe the authors meant Li et al's arXiv paper (2016) in both of these cases. - Section 3.1, point 2: this is unclear to me, even though I know Hyperband very well. Can you please make this clearer? - "A complete theoretical treatment of asynchronous SHA is out of the scope of this paper" -> is some theoretical treatment in scope? - Section 4.1: It seems very useful to already recommend configurations in each rung of Hyperband, and I am surprised that the methods section does not mention this. From the text in this experiments section, it feels a little like that was always part of Hyperband; I didn't think it was, so I checked the original papers and blog posts, and both the ICLR 2017 and the arXiv 2016 paper state "In fact, the first result returned by HYPERBAND after using a budget of 5R is often competitive with results returned by other searchers after using 50R." and Kevin Jamieson's blog post on Hyperband (https://people.eecs.berkeley.edu/~kjamieson/hyperband.html) explicitly states: "While random and the Bayesian Optimization algorithms output their first recommendation after max_iter iterations, Hyperband does not output anything until about max_iter(logeta(max_iter)+1) iterations [...]" Therefore, recommending after each rung seems to be a contribution of this paper, and I think it would be nice to read about this in the methods section. - Experiment 1 (SVM) used dataset size as a budget, which is what Fabolas ("Fast Bayesian optimization on large datasets") is designed for according to Klein et al (2017). On the other hand, Experiments (2) and (3) used the number of epochs as a budget, and Fabolas is not designed for that (one would want to use a different kernel, for epochs, e.g., like Freeze-Thaw Bayesian optimization (FTBO) by Swersky et al (2014), instead of a kernel made for dataset sizes). Therefore, it is not surprising that Fabolas does not work as well in those cases. The case of number of epochs as a budget would be the domain of FTBO. I know that there is no reference implementation of FTBO, so I am not asking for a comparison, but the comparison against Fabolas is misleading for Experiments (2) and (3). This doesn't really change anything for the paper: the authors could still make the case that Fabolas hasn't been designed for this case and that (to the best of my knowledge) there simply isn't an implementation of a BO algorithm that is. Fabolas is arguably the closest thing, so the results could still be reported, just not as an apples-to-apples comparison; probably best as "Fabolas-like, with dataset size kernel" in the figure. The justification to not compare against Fabolas in the parallel regime is clearly valid. - A clarification question: Section 4.4 does not report on any runs of actual neural networks, does it? 
And not on any runs of Hyperband, correct? Do I understand the reasoning correctly as pointing out that standard parallelization across multiple GPUs is not great, and that thus, in combination with parallel Hyperband, runs should be done mostly on one GPU only? How does this relate to the results in the cited paper "Accurate, Large-batch SGD: Training ImageNet in 1 Hour" (https://arxiv.org/abs/1706.02677)? Quoting from its abstract: "Using commodity hardware, our implementation achieves ∼ 90% scaling efficiency when moving from 8 to 256 GPUs." That seems like a very good utilization of parallel computing power? - There is no conclusion / future work. ---------- Edit after author rebuttal: I thank the authors for their rebuttal. This cleared up some points, but some others are still open. (1) and (2) Unfortunately, I still do not agree that the need for 5*25 workers to get a 5-fold to 8-fold speedup is a positive result. Similarly, I would interpret the results in Figure 3 differently than the authors. For the comparison against Vizier the authors argue that they could just take the lowest 2 brackets of Hyperband; but running both of these two would still be 2x slower than Vizier. And we can't only run the best bracket because the information about which one is best is not available ahead of time. In fact, it is the entire point of Hyperband to hedge across multiple brackets including the one that is random search; one *could* just use the smallest bracket, but that is a heuristic and has no theoretical guarantees of being better (or at least not worse by more than a bounded factor) than random search. Orthogonally: the comparison to Vizier (or any other baseline) is still missing for the LSTM acoustic model. (3) Concerning SOTA results, I have to agree with AnonReviewer3: one way to demonstrate success is to show competitive performance on a dataset (e.g., CIFAR) on which other researchers can also evaluate their algorithms. Getting 17% on CIFAR-10 does not fall into that category. Nevertheless, I agree with the authors that another way to demonstrate success is to show competitive performance on a *combination* of a dataset and a design space, but for that to be something that other researchers can compare to, the authors must make publicly available the implementations they have optimized; without that public availability, due to a host of possible confounding factors, it is impossible to judge whether state-of-the-art performance on such a combination of dataset and design space has been achieved. I therefore recommend that the authors make the entire code they used for training CIFAR available; I don't expect this to have anything new in there, but it's a useful benchmark. Likewise, for the LSTM on PTB, DeepMind used Google Vizier (https://arxiv.org/abs/1707.05589) to achieve *perplexities below 60* (compared to the results above 80 reported by the authors). Just as above, I therefore recommend that the authors make their pipeline for the LSTM on PTB available. Likewise for the LSTM acoustic model. (4) I'm confused about how Section 4.4 relates to SHA/Hyperband. Of course, there are some diminishing returns of running an optimizer across multiple GPUs. But similarly, there are diminishing returns of parallelizing SHA (e.g., the 5-fold speedup on 125 workers above). So the natural question that would be nice to answer is which combination of the two will yield the best results.
Relatedly, the paper by Goyal et al seems to show that the weak scaling regime leads to almost linear speedups; why do the authors then analyze the strong scaling regime that does not appear to work as well? Overall, the rebuttal did not change my evaluation and I kept my original score.
iclr_2018_rJwelMbR-
DIVIDE-AND-CONQUER REINFORCEMENT LEARNING Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into "slices", and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at https://sites.google.com/view/dnc-rl/.
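One plausible reading of the coupling between the per-slice policies described above is a pairwise KL penalty evaluated on shared states; the toy numpy sketch below computes such a penalty for tabular categorical policies. The tabular representation and uniform weighting are simplifying assumptions made for the sketch, not the paper's actual continuous-control setup.

import numpy as np

def pairwise_kl_penalty(policies):
    # policies: array of shape (num_contexts, num_states, num_actions),
    # each row a categorical action distribution. Returns the KL between
    # per-context policies, summed over shared states and averaged over
    # ordered context pairs.
    eps = 1e-8
    c = len(policies)
    total = 0.0
    for i in range(c):
        for j in range(c):
            if i != j:
                total += np.sum(policies[i] * (np.log(policies[i] + eps)
                                               - np.log(policies[j] + eps)))
    return total / (c * (c - 1))

p = np.random.rand(3, 10, 4)
p /= p.sum(axis=-1, keepdims=True)
print(pairwise_kl_penalty(p))   # added to the policy objective with weight alpha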
This paper presents a method for learning a global policy over multiple different MDPs (referred to as different "contexts", each MDP having the same dynamics and reward, but different initial state). The basic idea is to learn a separate policy for each context, but regularized in a manner that keeps all of them relatively close to each other, and then learn a single centralized policy that merges the multiple policies via supervised learning. The method is evaluated on several continuous state and action control tasks, and shows improvement over existing and similar approaches, notably the Distral algorithm. I believe there are some interesting ideas presented in this paper, but in its current form I think that the delta over past work (particularly Distral) is ultimately too small to warrant publication at ICLR. The authors should correct me if I'm wrong, but it seems as though the algorithm presented here is virtually identical to Distral except that: 1) The KL divergence term regularizes all policies together in a pairwise manner. 2) The distillation step happens episodically every R steps rather than in a pure SGD manner. 3) The authors possibly use a TRPO type objective for the standard policy gradient term, rather than REINFORCE-like approach as in Distral (this one point wasn't completely clear, as the authors mention that a "centralized DnC" is equivalent to Distral, so they may already be adapting it to the TRPO objective? some clarity on this point would be helpful). Thus, despite better performance of the method over Distral, this doesn't necessarily seem like a substantially new algorithmic development. And given how sensitive RL tasks are to hyperparameter selection, there needs to be some very substantial treatment of how the regularization parameters are chosen here (both for DnC and for the Distral and centralized DnC variants). Otherwise, it honestly seems that the differences between the competing methods could be artifacts of the choice of regularization (the alpha parameter will affect just how tightly coupled the control policies actually are). In addition to this point, the formulation of the problem setting in many cases was also somewhat unclear. In particular, the notion of the contextual MDP is not very clear from the presentation. The authors define a contextual MDP setting where in addition to the initial state there is an observed context to the MDP that can affect the initial state distribution (but not the transitions or reward). It's entirely unclear to me why this additional formulation is needed, and ultimately just seems to confuse the nature of the tasks here which is much more clearly presented just as transfer learning between identical MDPs with different state distributions; and the terminology also conflicts with the (much more complex) setting of contextual decision processes (see: https://arxiv.org/abs/1610.09512). It doesn't seem, for instance, that the final policy is context dependent (rather, it has to "infer" the context from whatever the initial state is, so effectively doesn't take the context into account at all). Part of the reasoning seems to be to make the work seem more distinct from Distral than it really is, but I don't see why "transfer learning" and the presented contextual MDP are really all that different. Finally, the experimental results need to be described in substantially more detail. 
The choice of regularization parameters, the precise nature of the context in each setting, and the precise design of the experiments are all extremely opaque in the current presentation. Since the methodology here is so similar to previous approaches, much more emphasis is required to better understand the (improved) empirical results in this setting. In summary, I do think the core ideas of this paper are interesting: whether it is better to regularize policies toward a single central policy as in Distral or to use joint regularization, whether we need two different timescales for distillation versus policy training, and which policy optimization method works best. As it stands, however, the algorithmic choices in the paper seem rather ad hoc compared to Distral, and need substantially more empirical evidence. Minor comments: • There are several missing words/grammatical errors throughout the manuscript, e.g. on page 2 "gradient information can better estimated".
iclr_2018_rkcya1ZAW
Two fundamental problems in unsupervised learning are efficient inference for latent-variable models and robust density estimation based on large amounts of unlabeled data. For efficient inference, normalizing flows have been recently developed to approximate a target distribution arbitrarily well. In practice, however, normalizing flows only consist of a finite number of deterministic transformations, and thus they possess no guarantee on the approximation accuracy. For density estimation, the generative adversarial network (GAN) has been advanced as an appealing model, due to its often excellent performance in generating samples. In this paper, we propose the concept of continuous-time flows (CTFs), a family of diffusion-based methods that are able to asymptotically approach a target distribution. Distinct from normalizing flows and GANs, CTFs can be adopted to achieve the above two goals in one framework, with theoretical guarantees. Our framework includes distilling knowledge from a CTF for efficient inference, and learning an explicit energy-based distribution with CTFs for density estimation. Experiments on various tasks demonstrate promising performance of the proposed CTF framework, compared to related techniques.
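As a concrete picture of the diffusion-based transitions, the sketch below runs a discretized (Euler-Maruyama) Langevin chain on a toy target. The step size, chain length, and Gaussian example are illustrative assumptions, and the paper's amortization/distillation of the chain into an inference network, as well as the energy-based density estimation, is not shown.

import numpy as np

def langevin_transitions(z0, grad_log_p, step=1e-2, K=20):
    # Discretized Langevin flow: a chain of stochastic transitions whose
    # stationary distribution is the target density.
    # grad_log_p: callable returning the score of the (unnormalized) target.
    z = np.array(z0, dtype=float)
    samples = []
    for _ in range(K):
        noise = np.random.randn(*z.shape)
        z = z + 0.5 * step * grad_log_p(z) + np.sqrt(step) * noise
        samples.append(z.copy())
    return samples

# toy target: standard Gaussian, so grad log p(z) = -z
chain = langevin_transitions(np.zeros(2), lambda z: -z)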
The authors propose the use of first order Langevin dynamics as a way to transition from one latent variable to the next in the VAE setting, as opposed to the deterministic transitions of normalizing flow. The extremely popular Fokker-Planck equation is used to analyze the steady state distributions in this setting. The authors also propose the use of CTF in density estimation, as a generator of samples from the ''true'' distribution, and show competitive performance w.r.t. inception score for some common datasets. The use of Langevin diffusion for latent transitions is a good idea in my opinion; though quite simple, it has the benefit of being straightforward to analyze with existing machinery. Though the discretized Langevin transitions in \S 3.1 are known and widely used, I liked the motivation afforded by Lemma 2. I am not convinced that taking \rho to be the sample distribution with equal probabilities at the z samples is a good choice in \S 3.1; it would be better to incorporate the proximity of the langevin chain to a stationary point in the atom weights instead of setting them to 1/K. However to their credit the authors do provide an estimate of the error in the distribution stemming from their choice. To the best of my knowledge the use of CTF in density estimation as described in \S 4 is new, and should be of interest to the community; though again it is fairly straightforward. Regarding the experiments, the difference in ELBO between the macVAE and the vanilla ones with normalizing flows is only about 2%; I wish the authors included a discussion on how the parameters of the discretized Langevin chain affects this, if at all. Overall I think the theory is properly described and has a couple of interesting formulations, in spite of being not particularly novel. I think CTFs like the one described here will see increased usage in the VAE setting, and thus the paper will be of interest to the community.
iclr_2018_S1CChZ-CZ
Published as a conference paper at ICLR 2018 ASK THE RIGHT QUESTIONS: ACTIVE QUESTION REFORMULATION WITH REINFORCEMENT LEARNING We frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering. We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer. The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a state-of-the-art base model, playing the role of the environment, and other benchmarks. We also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming.
This paper formulates the Jeopardy QA as a query reformulation task that leverages a search engine. In particular, a user will try a sequence of alternative queries based on the original question in order to find the answer. The RL formulation essentially tries to mimic this process. Although this is an interesting formulation, as promoted by some recent work, this paper does not provide compelling reasons why it's a good formulation. The lack of serious comparisons to baseline methods makes it hard to judge the value of this work. Detailed comments/questions: 1. I am actually quite confused on why it's a good RL setting. For a human user, having a series of queries to search for the right answer is a natural process, but it's not natural for a computer program. For instance, each query can be viewed as different formulation of the same question and can be issued concurrently. Although formulated as an RL problem, it is not clear to me whether the search result after each episode has been used as the immediate environment feedback. As a result, the dependency between actions seems rather weak. 2. I also feel that the comparisons to other baselines (not just the variation of the proposed system) are not entirely fair. For instance, the baseline BiDAF model has only one shot, namely using the original question as query. In this case, AQA should be allowed to use the same budget -- only one query. Another more realistic baseline is to follow the existing work on query formulation in the IR community. For example, 20 shorter queries generated by methods like [1] can be used to compare the queries created by AQA. [1] Kumaran & Carvalho. "Reducing Long Queries Using Query Quality Predictors". SIGIR-09 Pros: 1. An interesting RL formulation for query reformulation Cons: 1. The use of RL is not properly justified 2. The empirical result is not convincing that the proposed method is indeed advantageous --------------------------------------- After reading the author response and checking the revised paper, I'm both delighted and surprised that the authors improved the submission substantially and presented stronger results. I believe the updated version has reached the bar and recommend accepting this paper.
iclr_2018_Hk6kPgZA-
Published as a conference paper at ICLR 2018 CERTIFYING SOME DISTRIBUTIONAL ROBUSTNESS WITH PRINCIPLED ADVERSARIAL TRAINING Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. For imperceptible perturbations, our method matches or outperforms heuristic approaches.
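The training procedure described above alternates ordinary gradient steps with an inner maximization over perturbations of the training data; below is a minimal PyTorch sketch of that inner step under the Lagrangian penalty with an L2 transport cost. The ascent step count, step size, and gamma are illustrative choices, not the certified values analyzed in the paper.

import torch

def wrm_perturb(model, loss_fn, x, y, gamma=1.0, steps=15, lr=0.1):
    # Inner maximization: ascend on loss(model(z), y) - gamma * ||z - x||^2,
    # starting from z = x.
    z = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        obj = loss_fn(model(z), y) - gamma * ((z - x) ** 2).sum()
        grad, = torch.autograd.grad(obj, z)
        z = (z + lr * grad).detach().requires_grad_(True)
    return z.detach()

# toy usage; the outer loop then takes an ordinary SGD step on loss(model(z), y)
model = torch.nn.Linear(10, 2)
x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
z_adv = wrm_perturb(model, torch.nn.functional.cross_entropy, x, y)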
This paper proposes a principled methodology to induce distributional robustness in trained neural nets with the purpose of mitigating the impact of adversarial examples. The idea is to train the model to perform well not only with respect to the unknown population distribution, but to perform well on the worst-case distribution in some ball around the population distribution. In particular, the authors adopt the Wasserstein distance to define the ambiguity sets. This allows them to use strong duality results from the literature on distributionally robust optimization and express the empirical minimax problem as a regularized ERM with a different cost. The theoretical results in the paper are supported by experiments. Overall, this is a very well-written paper that creatively combines a number of interesting ideas to address an important problem.
iclr_2018_HJ39YKiTb
In this paper, we propose the Associative Conversation Model that generates visual information from textual information and uses it for generating sentences, in order to utilize visual information in a dialogue system without image input. In research on Neural Machine Translation, there are studies that generate translated sentences using both images and sentences, and these studies show that visual information improves translation performance. However, it is not possible to use sentence generation algorithms that rely on images in such dialogue systems, since many text-based dialogue systems only accept text input. Our approach generates (associates) visual information from input text and generates response text using a context vector that fuses the associated visual information and the textual information of the sentence. A comparative experiment between our proposed model and a model without association showed that our proposed model generates useful sentences by associating visual information related to the sentences. Furthermore, an analysis of the visual association showed that our proposed model generates (associates) visual information that is effective for sentence generation.
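A minimal sketch of the association step implied above: a small network is trained to regress the visual context vector from the textual context vector on paired data, so that at inference time no image input is needed. The dimensions, the two-layer MLP, and the MSE objective are assumptions for illustration rather than the paper's exact architecture.

import torch
import torch.nn as nn

# Regress the visual context vector from the textual one on paired data.
associator = nn.Sequential(nn.Linear(512, 1024), nn.Tanh(), nn.Linear(1024, 512))
opt = torch.optim.Adam(associator.parameters(), lr=1e-3)

text_ctx = torch.randn(32, 512)     # encoder outputs on paired training data
visual_ctx = torch.randn(32, 512)

pred = associator(text_ctx)
loss = nn.functional.mse_loss(pred, visual_ctx)
opt.zero_grad()
loss.backward()
opt.step()
# at test time, associator(text_ctx) replaces the image-derived context vector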
The paper proposes to augment (traditional) text-based sentence generation/dialogue approaches by incorporating visual information. The idea is that associating visual information with input text, and using that associated visual information as additional input will produce better output text than using only the original input text. The basic idea is to collect a bunch of data consisting of both text and associated images or video. Here, this was done using Japanese news programs. The text+image/video is used to train a model that requires both as input and that encodes both as context vectors, which are then combined and decoded into output text. Next, the image inputs are eliminated, with the encoded image context vector being instead associatively predicted directly from the encoded text context vector (why not also use the input text to help predict the visual context?), which is still obtained from the text input, as before. The result is a model that can make use of the text-visual associations without needing visual stimuli. This is a nice idea. Actually, based on the brief discussion in Section 2.2.2, it occurs to me that the model might not really be learning visual context vectors associatively, or, that this doesn't really have meaning in some sense. Does it make sense to say that what it is really doing is just learning to associate other concepts/words with the input text, and that it is using the augmenting visual information in the training data to provide those associations? Is this worth talking about? Unfortunately, while the idea has merit, and I'd like to see it pursued, the paper suffers from a fatal lack of validation/evaluation, which is very curious, given the amount of data that was collected, the fact that the authors have both a training and a test set, and that there are several natural ways such an evaluation might be performed. The two examples of Fig 3 and the additional four examples in the appendix are nice for demonstrating some specific successes or weaknesses of the model, but they are in no way sufficient for evaluation of the system, to demonstrate its accuracy or value in general. Perhaps the most obvious thing that should be done is to report the model's accuracy for reproducing the news dialogue, that is, how accurately is the next sentence predicted by the baseline and ACM models over the training instances and over the test data? How does this compare with other state-of-the-art models for dialogue generation trained on this data (perhaps trained only on the textual part of the data in some cases)? Second, some measure of accuracy for recall of the associative image context vector should be reported; for example, on average, how close (cosine similarity or some other appropriate measure) is the associatively recalled image context vector to the target image context vector? On average? Best case? Worst case? How often is this associative vector closer to a confounding image vector than an appropriate one? A third natural kind of validation would be some form of study employing human subjects to test it's quality as a generator of dialogue. One thing to note, the example of learning to associate the snowy image with the text about university entrance exams demonstrates that the model is memorizing rather than generalizing. In general, this is a false association (that is, in general, there is no reason that snow should be associated with exams on the 14th and 15th—the month is not mentioned, which might justify such an association.) 
Another thought: did you try not retraining the decoder and attention mechanisms for step 3? In theory, if step 2 is successful, the retraining should not be necessary. To the extent that it is necessary, step 2 has failed to accurately predict visual context from text. This seems like an interesting avenue to explore (and is obviously related to the second type of validation suggested above). Also, in addition to the baseline model, it seems like it would be good to compare a model that uses actual visual input and the model of step 1 against the model of step 3 (possibly both retrained and not retrained) to see the effect on the outputs generated: how well does each of these do at predicting the next sentence on both training and test sets? Other concerns: 1. The paper is too long by almost a page in main content. 2. The paper exhibits significant English grammar and usage issues and should be carefully proofed by a native speaker. 3. There are lots of undefined variables in the Eqs. (s, W_s, W_c, b_s, e_t,i, etc.) Given the context and associated discussion, it is almost possible to sort out what all of them mean, but brief careful definitions should be given for clarity. 4. Using news broadcasts as a substitute for true dialogue data seems kind of problematic, though I see why it was done.
iclr_2018_HJPSN3gRW
In this work, we focus on the problem of grounding language by training an agent to follow a set of natural language instructions and navigate to a target object in a 2D grid environment. The agent receives visual information through raw pixels and a natural language instruction telling what task needs to be achieved. Other than these two sources of information, our model does not have any prior information about either the visual or the textual modality and is end-to-end trainable. We develop an attention mechanism for multi-modal fusion of visual and textual modalities that allows the agent to learn to complete the navigation tasks and also achieve language grounding. Our experimental results show that our attention mechanism outperforms the existing multi-modal fusion mechanisms proposed to solve the above-mentioned navigation task. We demonstrate through the visualization of attention weights that our model learns to correlate attributes of the object referred to in the instruction with visual representations, and also show that the learnt textual representations are semantically meaningful as they follow vector arithmetic and are also consistent enough to induce translation between instructions in different natural languages. We also show that our model generalizes effectively to unseen scenarios and exhibits zero-shot generalization capabilities. In order to simulate the above described challenges, we introduce a new 2D environment for an agent to jointly learn visual and textual modalities.
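To illustrate the multimodal fusion described above, the sketch below maps an instruction embedding to a handful of 1x1 convolutional kernels and applies them to the visual feature map, yielding attention maps that feed the policy. All sizes, the sigmoid nonlinearity, and the single-example batch are assumptions made for the sketch, not the paper's exact configuration.

import torch
import torch.nn.functional as F

B, C, H, W, n = 1, 64, 7, 7, 5
visual = torch.randn(B, C, H, W)             # CNN features of the observation
instr = torch.randn(B, 128)                  # GRU encoding of the instruction

to_kernels = torch.nn.Linear(128, n * C)     # instruction -> n 1x1 filters
kernels = to_kernels(instr).view(n, C, 1, 1)
attention = torch.sigmoid(F.conv2d(visual, kernels))   # shape (B, n, H, W)
policy_input = attention.flatten(1)          # passed on to the policy network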
**Paper Summary** The paper studies the problem of navigating to a target object in a 2D grid environment by following a given natural language description as well as receiving visual information as raw pixels. The proposed architecture consists of a convolutional neural network encoding visual input, a gated recurrent unit encoding natural language descriptions, an attention mechanism fusing multimodal input, and a policy learning network. To verify the effectiveness of the proposed framework, a new environment is proposed. The environment is 2-D grid based and it consists of an agent, a list of objects with different attributes, and a list of obstacles. Agents perceive the environment through raw pixels with a limited visible region, and they can perform actions to move in the environment to reach target objects. The problem has been studied for a while and therefore it is not novel. The proposed framework is incremental. The proposed environment is trivial and therefore it is unclear if the proposed framework is able to scale up to a more complicated environment. The experimental results do not support several claims stated in the paper. Overall, I would vote for rejection. - This paper solves the problem of navigating to the target object specified by a language instruction in a 2D grid environment. It requires understanding of language, language grounding for visual features, and navigating to the target object while avoiding non-target objects. An attention mechanism is used to map a language instruction into a set of 1x1 convolutional filters which are intended to distinguish visual features described in the instruction from others. The experimental results show that the proposed method performs better than other methods. - This paper presents an end-to-end trainable model to navigate an agent through visual sources and natural language instructions. The model utilizes a proposed attention mechanism to draw correlations between the objects mentioned in the instructions and deep visual representations, without requiring any prior knowledge about these inputs. The experimental results demonstrate the effectiveness of the learnt textual representation and the zero-shot generalization capabilities to unseen scenarios. **Paper Strengths** - The paper proposes an interesting task which is a navigation task with language instructions. This is important yet relatively unexplored. - The implementation details are included, including optimizers, learning rates with weight decay, numbers of training epochs, the discount factor, etc. - The attention mechanism used in the paper is reasonable and the learned language embedding clearly shows meaningful relationships between instructions. - The learnt textual representation follows vector arithmetic, which enables the agent to perceive unseen instructions as a new combination of the attributes and perform zero-shot generalization. **Paper Weaknesses** - The problem of following natural language descriptions together with visual representations of environments is not completely novel. For example, both the problem and the proposed method are similar to those already introduced in the Gated Attention method (Chaplot et al., 2017). Although the proposed method performs better than the prior work, the approach is incremental. - The proposed environment is simple. The vocabulary size is 40 and the longest instruction only consists of 9 words. Whether the proposed framework is able to deal with more complicated environments is not clear.
The experimental results shown in Figure 5 are not convincing, given that the proposed method only took less than 20k iterations to perform almost perfectly. The proposed environment is small and simple compared to the related work. It would be better to test the proposed method at a similar scale to the existing 3D navigation environments (Chaplot et al., 2017 and Hermann et al., 2017). - The novelty of the proposed framework is unclear. This work is not the first one which proposes a multimodal fusion network incorporating a CNN architecture dealing with visual information and a GRU architecture encoding language instructions. Also, the proposed attention mechanism is an obvious choice. - The shown visualized attention maps are not enough to support the contribution of proposing the attention mechanism. It is difficult to tell whether the model learns to attend to correct objects. Also, the effectiveness of incorporating the attention mechanism is unclear. - The paper claims that the proposed framework is flexible and is able to handle a rich set of natural language descriptions. However, the experimental results are not enough to support the claim. - The presentation of the experiment is not space efficient at all. - References to related papers which fuse multimodal data (vision and language) are missing. - Compared to the suggested page limit of 8 pages, 13 pages is a bit too long. - Placing figure captions above figures is not recommended. - It would be better to show where each 1x1 filter for multimodal fusion attends on the input image. Ideally, one filter should attend on the target object and others should attend on non-target objects. However, I wonder how the RNN can generate filters to detect non-target objects given an instruction. Although Figure 6 and Figure 7 try to show insights about the proposed attention model, they don't tell which kernel is in charge of which visual feature. Blurred attention maps in Figure 6 and 7 make it hard to interpret the behavior of the model. - The graphs shown in Figure 5 are hard to interpret because of their large variance. It would be better to smooth the curves so that the methods can be compared clearly. - For zero-shot generalization evaluation, there is no detail about the training steps and no comparisons to other methods. - A highly related paper (Hermann et al., 2017) is missing in the references. - Since the instructions are simple, the model does not require an attention mechanism on the textual sources. If the framework can take more complex language, it might be worthwhile to try a visual-text co-attention mechanism. Such a demonstration would be more convincing. - The attention maps of different attributes are not as clear as the paper states. Why do we need several "non-target" objects highlighted if one filter can learn to consolidate all of them? - The interpretation of n in the paper is vague; the authors should also show qualitatively why n=5 is better than n=1 or n=10. If the attention maps learnt are really focusing on different attributes, given more and more objects, shouldn't n=10 have more information for the policy learning? - The unseen scenario generalization should also include texture changes on the grid environment and/or new attribute combinations on non-target objects to be more convincing. - The contribution in the visual part is marginal.
** Preliminary Evaluation** - The modality fusion technique which leads to the attention maps is an effective and seem to work well approach, however, the author should present more thorough ablated analysis. The overall architecture is elegant, but the capability of it to be extended to more complex environment is in doubt. The vector arithmetic of the learnt textual embedding is the key component to enable zero-shot generalization, while the effectiveness of this method is not convincing if more complex instructions such that it contains object-object relations or interactions are perceived by the agent.
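For concreteness, here is a minimal numpy sketch of the instruction-to-1x1-filter fusion summarized above: an instruction encoding is linearly mapped to a few 1x1 convolution kernels, which are applied to the CNN feature map to produce attention maps for the policy. All shapes, names, and the sigmoid squashing are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 32, 10, 10        # channels and spatial size of the CNN feature map
D, N = 64, 5                # instruction embedding size, number of 1x1 filters

feat = rng.standard_normal((C, H, W))      # visual features from the CNN
instr = rng.standard_normal(D)             # final GRU state for the instruction

# The instruction embedding is linearly mapped to N filters of shape (C,),
# i.e. N 1x1 convolution kernels over the feature map.
W_gen = rng.standard_normal((N, C, D)) * 0.01
filters = W_gen @ instr                    # (N, C)

# Applying a 1x1 convolution is a channel-wise dot product at every location.
attention_maps = np.einsum('nc,chw->nhw', filters, feat)   # (N, H, W)
attention_maps = 1.0 / (1.0 + np.exp(-attention_maps))     # squash to [0, 1]

# The N maps are stacked with the visual features and fed to the policy network.
policy_input = np.concatenate([feat, attention_maps], axis=0)  # (C + N, H, W)
print(policy_input.shape)
```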
iclr_2018_Sy-tszZRZ
In this paper, we study the representational power of deep neural networks (DNN) that belong to the family of piecewise-linear (PWL) functions, based on PWL activation units such as rectifier or maxout. We investigate the complexity of such networks by studying the number of linear regions of the PWL function. Typically, a PWL function from a DNN can be seen as a large family of linear functions acting on millions of such regions. We directly build upon the work of Montúfar et al. (2014), Montúfar (2017), and Raghu et al. (2017) by refining the upper and lower bounds on the number of linear regions for rectified and maxout networks. In addition to achieving tighter bounds, we also develop a novel method to perform exact enumeration or counting of the number of linear regions with a mixed-integer linear formulation that maps the input space to output. We use this new capability to visualize how the number of linear regions changes while training DNNs.
Paper Summary: This paper looks at providing better bounds for the number of linear regions in the function represented by a deep neural network. It first recaps some of the setting: if a neural network has a piecewise linear activation function (e.g. relu, maxout), the final function computed by the network (before softmax) is also piecewise linear and divides up the input into polyhedral regions which are all different linear functions. These regions also have a correspondence with Activation Patterns, the active/inactive pattern of neurons over the entire network. Previous work [1], [2], has derived lower and upper bounds for the number of linear regions that a particular neural network architecture can have. This paper improves on the upper bound given by [2] and the lower bound given by [1]. They also provide a tight bound for the one dimensional input case. Finally, for small networks, they formulate finding linear regions as solving a linear program, and use this method to compute the number of linear regions on small networks during training on MNIST Main Comments: The paper is very well written and clearly states and explains the contributions. However, the new bounds proposed (Theorem 1, Theorem 6), seem like small improvements over the previously proposed bounds, with no other novel interpretations or insights into deep architectures. (The improvement on Zaslavsky's theorem is interesting.) The idea of counting the number of regions exactly by solving a linear program is interesting, but is not going to scale well, and as a result the experiments are on extremely small networks (width 8), which only achieve 90% accuracy on MNIST. It is therefore hard to be entirely convinced by the empirical conclusions that more linear regions is better. I would like to see the technique of counting linear regions used even approximately for larger networks, where even though the results are an approximation, the takeaways might be more insightful. Overall, while the paper is well written and makes some interesting points, it presently isn't a significant enough contribution to warrant acceptance. [1] On the number of linear regions of Deep Neural Networks, 2014, Montufar, Pascanu, Cho, Bengio [2] On the expressive power of deep neural networks, 2017, Raghu, Poole, Kleinberg, Ganguli, Sohl-Dickstein
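As a rough illustration of the approximate counting suggested in the review, the sketch below samples many inputs to a small random ReLU network and counts the distinct activation patterns they hit; this lower-bounds the number of linear regions intersecting the sampled box. The architecture, box, and sample count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-16-16 ReLU network with random weights (ignoring the final linear
# output layer, which does not change the region structure).
W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((16, 16)), rng.standard_normal(16)

# Sample many inputs from a box and record the on/off pattern of every hidden unit.
X = rng.uniform(-1, 1, size=(200_000, 2))
H1 = X @ W1.T + b1
H2 = np.maximum(H1, 0) @ W2.T + b2
patterns = np.concatenate([H1 > 0, H2 > 0], axis=1)

# Each distinct activation pattern hit by the samples corresponds to a distinct
# linear region of the input space, so this count is a lower bound.
n_regions = len(np.unique(patterns, axis=0))
print("distinct activation patterns found (lower bound on regions):", n_regions)
```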
iclr_2018_B1lMMx1CW
Workshop track - ICLR 2018 THE EFFECTIVENESS OF A TWO-LAYER NEURAL NETWORK FOR RECOMMENDATIONS We present a personalized recommender system using neural network for recommending products, such as eBooks, audio-books, Mobile Apps, Video and Music. It produces recommendations based on customer's implicit feedback history such as purchases, listens or watches. Our key contribution is to formulate recommendation problem as a model that encodes historical behavior to predict the future behavior using soft data split, combining predictor and auto-encoder models. We introduce convolutional layer for learning the importance (time decay) of the purchases depending on their purchase date and demonstrate that the shape of the time decay function can be well approximated by a parametrical function. We present offline experimental results showing that neural networks with two hidden layers can capture seasonality changes, and at the same time outperform other modeling techniques, including our recommender in production. Most importantly, we demonstrate that our model can be scaled to all digital categories, and we observe significant improvements in an online A/B test. We also discuss key enhancements to the neural network model and describe our production pipeline. Finally we open-sourced our deep learning library which supports multi-gpu model parallel training. This is an important feature in building neural network based recommenders with large dimensionality of input and output data.
The paper proposes a new neural network based method for recommendation. The main finding of the paper is that a relatively simple method works for recommendation, compared to other methods based on neural networks that have been recently proposed. This contribution is not bad for an empirical paper. There's certainly not that much here that's groundbreaking methodologically, though it's certainly nice to know that a simple and scalable method works. There's not much detail about the data (it is after all an industrial paper). It would certainly be helpful to know how well the proposed method performs on a few standard recommender systems benchmark datasets (compared to the same baselines), in order to get a sense as to whether the improvement is actually due to having a better model, versus being due to some unique attributes of this particular industrial dataset under consideration. As it is, I am a little concerned that this may be a method that happens to work well for the types of data the authors are considering but may not work elsewhere. Other than that, it's nice to see an evaluation on real production data, and it's nice that the authors have provided enough info that the method should be (more or less) reproducible. There's some slight concern that maybe this paper would be better for the industry track of some conference, given that it's focused on an empirical evaluation rather than really making much of a methodological contribution. Again, this could be somewhat alleviated by evaluating on some standard and reproducible benchmarks.
iclr_2018_r1BRfhiab
We consider neural network training, in applications in which there are many possible classes, but at test-time, the task is to identify only whether the given example belongs to a specific class, which can be different in different applications of the classifier. For instance, this is the case in an image search engine. We consider the Single Logit Classification (SLC) task: training the network so that at test-time, it would be possible to accurately identify if the example belongs to a given class, based only on the output logit for this class. We propose a natural principle, the Principle of Logit Separation, as a guideline for choosing and designing losses suitable for the SLC. We show that the cross-entropy loss function is not aligned with the Principle of Logit Separation. In contrast, there are known loss functions, as well as novel batch loss functions that we propose, which are aligned with this principle. In total, we study seven loss functions. Our experiments show that indeed in almost all cases, losses that are aligned with Principle of Logit Separation obtain a 20%-35% relative performance improvement in the SLC task, compared to losses that are not aligned with it. We therefore conclude that the Principle of Logit Separation sheds light on an important property of the most common loss functions used by neural network classifiers. Tensorflow code for optimizing the new batch losses will be made publicly available upon publication; A URL will be provided in the publication version of this manuscript.
The paper addresses the problem of a mismatch between the training classification loss and the loss at test time. This is motivated by use cases in which multiclass classification problems are learned during training, but where binary or reduced multi-class classification is performed at test time. The question for me is the following: if at test time, we have to solve "some" binary classification task, possibly drawn at random from a set of binary problems (this is not made precise in the paper), then why not optimize the same classification error or a surrogate loss at training time? Instead, the authors start with a multiclass problem, which may introduce a computational burden when the number of classes is large, as one needs to compute a properly normalized softmax. The authors now seem to ask, what if one were to use a multi-classification loss at training time, but then decides at test time that a binary classification of one-vs-all is asked for. If one buys into the relevance of the setting, then of course, one is faced with the problem that the multiclass logits (aka raw scores) may not be calibrated to be used for binary classification by applying a fixed threshold. The authors call this sententiously the "Principle of logit separation". Not too surprisingly, the standard multiclass losses do not have the desired property; however, approaches that reduce multi-class to binary classification at training time do, namely unnormalized models with penalized log Z (self-normalization), the NCE approach, as well as (the natural choice in the proposed setting) the binary classification loss. I find this almost a bit circular in the line of argumentation, but ok. It remains odd that while usually one has tried to reduce multiclass to binary, the authors go the opposite direction. The main technical contribution of the paper is the batch normalization that makes sure that multiclass logits across mini-batches of data are better calibrated. One can almost think of that as an additional regularization. This seems interesting and does not create much overhead, if one applies mini-batched SGD optimization anyway. However, I feel this technique would need to be investigated with regard to general improvements in a multiclass setting and as such also benchmarked relative to other methods that could be applied.
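As a point of reference for the "reduce to binary at training time" alternative mentioned above, here is a minimal numpy sketch contrasting standard softmax cross-entropy (where only relative logit values matter) with a per-logit one-vs-all binary cross-entropy (where a fixed threshold on a single logit is meaningful). This illustrates the known binary loss discussed in the review, not the paper's proposed batch losses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax_ce(logits, y):
    """Multiclass cross-entropy: invariant to adding a constant to all logits,
    so a fixed threshold on one logit is not calibrated at test time."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

def one_vs_all_ce(logits, y):
    """Per-logit binary cross-entropy: each logit is trained as an independent
    'does it belong to this class?' score, so a single logit can be thresholded."""
    targets = np.zeros_like(logits)
    targets[np.arange(len(y)), y] = 1.0
    p = sigmoid(logits)
    return -(targets * np.log(p + 1e-12)
             + (1 - targets) * np.log(1 - p + 1e-12)).mean()

logits = np.array([[2.0, -1.0, 0.5], [0.1, 3.0, -2.0]])
y = np.array([0, 1])
print(softmax_ce(logits, y), one_vs_all_ce(logits, y))
```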
iclr_2018_SJJySbbAZ
TRAINING GANS WITH OPTIMISM We address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Descent (OMD) for training Wasserstein GANs. Recent theoretical results have shown that optimistic mirror descent (OMD) can enjoy faster regret rates in the context of zero-sum games. Training WGANs is exactly a context of solving a zero-sum game with simultaneous no-regret dynamics. Moreover, we show that optimistic mirror descent addresses the limit cycling problem in training WGANs. We formally show that in the case of bi-linear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics which are bound to cycle. We also portray the huge qualitative difference between GD and OMD dynamics with toy examples, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum. We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences. We observe that models trained with OMD achieve consistently smaller KL divergence with respect to the true underlying distribution than models trained with GD variants. Finally, we introduce a new algorithm, Optimistic Adam, which is an optimistic variant of Adam. We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam.
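For concreteness, a minimal numpy sketch of the claimed contrast on the scalar bilinear game min_x max_y xy: simultaneous gradient descent/ascent spirals away from the equilibrium (0, 0), while the optimistic update, which corrects the step using the previous gradient, drives the last iterate toward it. The step size, horizon, and initial point are arbitrary choices.

```python
import numpy as np

eta, T = 0.1, 2000
# f(x, y) = x * y; the minimizer controls x, the maximizer controls y.

# Simultaneous gradient descent/ascent.
x, y = 1.0, 1.0
for _ in range(T):
    x, y = x - eta * y, y + eta * x
print("GD last iterate:        ", (x, y))   # the norm grows every step, spiraling away

# Optimistic gradient descent/ascent: step with 2*(current grad) - (previous grad).
x, y = 1.0, 1.0
gx_prev, gy_prev = y, x                     # gradients at the initial point
for _ in range(T):
    gx, gy = y, x                           # grad_x f = y, grad_y f = x
    x, y = x - eta * (2 * gx - gx_prev), y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy
print("Optimistic last iterate:", (x, y))   # the last iterate approaches (0, 0)
```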
This paper proposes the use of optimistic mirror descent to train Wasserstein Generative Adversarial Networks (WGANs). The authors remark that the current training of GANs, which amounts to solving a zero-sum game between a generator and discriminator, is often unstable, and they argue that one source of instability is due to limit cycles, which can occur for FTRL-based algorithms even in convex-concave zero-sum games. Motivated by recent results that use Optimistic Mirror Descent (OMD) to achieve faster convergence rates (than standard gradient descent) in convex-concave zero-sum games and normal form games, they suggest using these techniques for WGAN training as well. The authors prove that, using OMD, the last iterate converges to an equilibrium and use this as motivation that OMD methods should be more stable for WGAN training. They then compare OMD against GD on both toy simulations and a DNA sequence task before finally introducing an adaptive generalization of OMD, Optimistic Adam, that they test on CIFAR10. This paper is relatively well-written and clear, and the authors do a good job of introducing the problem of GAN training instability as well as the OMD algorithm, in particular highlighting its differences with standard gradient descent as well as discussing existing work that has applied it to zero-sum games. Given the recent work on OMD for zero-sum and normal form games, it is natural to study its effectiveness in training GANs. The issue of last iterate versus average iterate for non convex-concave problems is also presented well. The theoretical result on last-iterate convergence of OMD for bilinear games is interesting, but somewhat wanting as it does not provide an explicit convergence rate as in Rakhlin and Sridharan, 2013. Moreover, the result is only at best a motivation for using OMD in WGAN training since the WGAN optimization problem is not a bilinear game. The experimental results seem to indicate that OMD is at least roughly competitive with GD-based methods, although they seem less compelling than the prior discussion in the paper would suggest. In particular, they are matched by SGD with momentum when evaluated by last epoch performance (albeit while being less sensitive to learning rates). OMD does seem to outperform SGD-based methods when using the lowest discriminator loss, but there doesn't seem to be even an attempt at explaining this in the paper. I found it a bit odd that Adam was not used as a point of comparison in Section 5, that optimistic Adam was only introduced and tested for CIFAR but not for the DNA sequence problem, and that the discriminator was trained for 5 iterations in Section 5 but only once in Section 6, despite the fact that the reasoning provided in Section 6 seems like it would have also applied for Section 5. This gives the impression that the experimental results might have been at least slightly "gamed". For the reasons above, I give the paper high marks on clarity, and slightly above average marks on originality, significance, and quality. Specific comments: Page 1, "no-regret dynamics in zero-sum games can very often lead to limit cycles": I don't think limit cycles are actually ever formally defined in the entire paper. Page 3, "standard results in game theory and no-regret learning": These results should be either proven or cited. Page 3: Don't the parameter spaces need to be bounded for these convergence results to hold?
Page 4, "it is well known that GD is equivalent to the Follow-the-Regularized-Leader algorithm": For completeness, this should probably either be (quickly) proven or a reference should be provided. Page 5, "the unique equilibrium of the above game is...for the discriminator to choose w=0": Why is w=0 necessary here? Page 6, "We remark that the set of equilibrium solutions of this minimax problem are pairs (x,y) such that x is in the null space of A^T and y is in the null space of A": Why is this true? This should either be proven or cited. Page 6, Initialization and Theorem 1: It would be good to discuss the necessity of this particular choice of initialization for the theoretical result. In the Initialization section, it appears simply to be out of convenience. Page 6, Theorem 1: It should be explicitly stated that this result doesn't provide a convergence rate, in contrast to the existing OMD results cited in the paper. Page 7, "we considered momentum, Nesterov momentum and AdaGrad": Why isn't Adam used in this section if it is used in later experiments? Page 7-8, "When evaluated by....the lowest discriminator loss on the validation set, WGAN trained with Stochastic OMD (SOMD) achieved significantly lower KL divergence than the competing SGD variants.": Can you explain why SOMD outperforms the other methods when using the lowest discriminator loss on the validation set? None of the theoretical arguments presented earlier in the paper seem to even hint at this. The only result that one might expect from the earlier discussion and results is that SOMD would outperform the other methods when evaluating by the last epoch. However, this doesn't even really hold, since there exist learning rates in which SGD with momentum matches the performance of SOMD. Page 8, "Evaluated by the last epoch, SOMD is much less sensitive to the choice of learning rate than the SGD variants": Learning rate sensitivity doesn't seem to be touched upon in the earlier discussion. Can these results be explained by theory? Page 8, "we see that optimistic Adam achieves high numbers of inception scores after very few epochs of training": These results don't mean much without error bars. Page 8, "we only trained the discriminator once after one iteration of generator training. The latter is inline with the intuition behind the use of optimism....": Why didn't this logic apply to the previous section on DNA sequences, where the discriminator was trained multiple times? After reading the response of the authors (in particular their clarification of some technical results and the extra experiments they carried out during the rebuttal period), I have decided to upgrade my rating of the paper from a 6 to a 7. Just as a note, Figure 3b is now very difficult to read.
iclr_2018_S1Auv-WRZ
Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation (Krizhevsky et al., 2012) alleviates this by using existing data more effectively. However standard data augmentation produces only limited plausible alternative data. Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation. The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data. We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well. We also show a DAGAN can enhance few-shot learning systems such as Matching Networks. We demonstrate these approaches on Omniglot, on EMNIST having learnt the DAGAN on Omniglot, and VGG-Face data. In our experiments we can see over 13% increase in accuracy in the low-data regime experiments in Omniglot (from 69% to 82%), EMNIST (73.9% to 76%) and VGG-Face (4.5% to 12%); in Matching Networks for Omniglot we observe an increase of 0.5% (from 96.9% to 97.4%) and an increase of 1.8% in EMNIST (from 59.5% to 61.3%).
This paper proposes a conditional Generative Adversarial Network that is used for data augmentation. In order to evaluate the performance of the proposed model, they use the Omniglot, EMNIST, and VGG-Faces datasets and evaluate on the meta-learning task and the standard classification task in the low-data regime. The paper is well-written and consistent. Even though this paper learns to do data augmentation (which is very interesting) rather than just simply applying some standard data augmentation techniques, and shows improvements on some tasks, I am not convinced about the novelty and originality of this paper, especially on the model side. To be more specific, the paper uses the previously proposed conditional GAN as the main component of their model. And for the one-shot learning tasks, it only trains the previously proposed models with these newly augmented data. In addition, there are some other works that used GANs as a method for some version of data augmentation:
- RenderGAN: Generating Realistic Labeled Data https://arxiv.org/abs/1611.01331
- Data Augmentation in Emotion Classification Using Generative Adversarial Networks https://arxiv.org/abs/1711.00648
It is fair to say that their model shows improvements on the above tasks, but this improvement comes with the cost of training a GAN network. In summary, the idea of learning data augmentation is very interesting, but I am not yet convinced that the current paper has enough novelty and contribution, and I see the contribution of the paper as lying more on the application side than on the model and problem side. That said, I'd be happy to hear the authors' arguments about my comments.
iclr_2018_HyI5ro0pW
Artificial neural networks have opened up a world of possibilities in data science and artificial intelligence, but neural networks are cumbersome tools that grow with the complexity of the learning problem. We make contributions to this issue by considering a modified version of the fully connected layer we call a block diagonal inner product layer. These modified layers have weight matrices that are block diagonal, turning a single fully connected layer into a set of densely connected neuron groups. This idea is a natural extension of group, or depthwise separable, convolutional layers applied to the fully connected layers. Block diagonal inner product layers can be achieved by either initializing a purely block diagonal weight matrix or by iteratively pruning off diagonal block entries. This method condenses network storage and speeds up the run time without significant adverse effect on the testing accuracy, thus offering a new approach to improve network computation efficiency.
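To make the layer concrete, here is a minimal numpy sketch of a block diagonal inner product layer under arbitrary sizes: a block diagonal mask on the weight matrix is equivalent to running b independent dense sub-layers, which is where the dense-BLAS advantage over unstructured sparsity comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, b = 12, 8, 4            # both dimensions are divisible by the block count b
W = rng.standard_normal((d_out, d_in))

# Build the block diagonal mask: block i connects input slice i to output slice i.
mask = np.zeros_like(W)
ri, ci = d_out // b, d_in // b
for i in range(b):
    mask[i * ri:(i + 1) * ri, i * ci:(i + 1) * ci] = 1.0
W_bd = W * mask                       # either initialize like this or prune into it

x = rng.standard_normal(d_in)
y_full = W_bd @ x

# The same result computed as b small dense multiplies (each block becomes a
# dense BLAS call, level-3 when x is a minibatch rather than a single vector).
y_blocks = np.concatenate([
    W_bd[i * ri:(i + 1) * ri, i * ci:(i + 1) * ci] @ x[i * ci:(i + 1) * ci]
    for i in range(b)
])
print(np.allclose(y_full, y_blocks))   # True
```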
The paper proposes to make the inner layers in a neural network be block diagonal, mainly as an alternative to pruning. The implementation of this seems straightforward, and can be done either via initialization or via pruning on the off-diagonals. There are a few ideas the paper discusses:
(1) compared to pruning weight matrices and making them sparse, block diagonal matrices are more efficient since they utilize level 3 BLAS rather than sparse operations, which have significant overhead and are not "worth it" until the matrix is extremely sparse. I think this case is well supported via their experiments, and I largely agree.
(2) that therefore, block diagonal layers lead to more efficient networks. This point is murkier, because the paper doesn't discuss possible increases in *training time* (due to an increased number of iterations) in much detail. And if we only care about running the net, then reducing the time from 0.4s to 0.2s doesn't seem to be that useful (maybe it is for real-time predictions? Please cite some work in that case).
(3) to summarize points (1) and (2), block diagonal architectures are a nice alternative to pruned architectures, with similar accuracy, and more benefit to speed (mainly speed at run-time, or speed of a single iteration, not necessarily speed to train) [as I am not primarily a neural net researcher, I had always thought pruning was done to decrease over-fitting, not to increase computation speed, so this was a surprise to me; also note that the sparse matrix format can increase runtime if implemented as a sparse object, as demonstrated in this paper, but one could always pretend it is sparse, so you never ought to be slower with a sparse matrix].
(4) there is some vague connection to random matrices, with some limited experiments that are consistent with this observation but far from establishing it, and without any theoretical analysis (Martingale or Markov chain theory).
This is an experimental/methods paper that proposes a new algorithm, explained only in general detail, and backs it up with two reasonable experiments (that do a good job of convincing me of point (1) above). The authors seem to restrict themselves to convolutional networks in the first paragraph (and experiments) but don't discuss the implications or reasons of this assumption. The authors seem to understand the literature well, and not being an expert myself, I have the impression they are doing a fair job. The paper could have gone farther experimentally (or theoretically) in my opinion. For example, with sparse and block diagonal matrices, reducing the size of the matrix to fit into the cache on the GPU must obviously make a difference, but this did not seem to be investigated. I was also wondering: when 2 or more layers are block sparse, do these blocks overlap? i.e., are they randomly permuted between layers so that the blocks mix? And even with a single block, does it matter what permutation you use? (Or perhaps does it not matter due to the convolutional structure?) The section on the variance of the weights is rather unclear mathematically, starting with the abstract and even continuing into the paper. Are we talking about sample variance? What does DeltaVar mean in eq (2)? The Marchenko-Pastur theorem seemed to even be imprecise, since if y>1, then a < 0, implying that there is a nonzero chance that the positive semi-definite matrix XX' has a negative eigenvalue. I agree this relationship with random matrices could be interesting, but it seems too vague right now.
Is there some central limit theorem explanation? Are you sure that you've run enough iterations to fully converge? (Fig 4 was still trending up for b1=64.) Was it due to the convolutional net structure (you could test this)? Or, perhaps train a network on two datasets, one which is not learnable (iid random labels), and one which is very easily learnable (e.g., linearly separable). Would this affect the distributions? Furthermore, I think I misunderstood parts, because the scaling in MNIST and CIFAR was different and I didn't see why (for MNIST, it was proportional to block size, and for CIFAR it was almost independent of block size). Minor comment: the last paragraph of 4.1, comparing with Sindhwani et al., was confusing to me. Why was this mentioned? And it doesn't seem to be comparable. I have no idea what "Toeplitz (3)" is.
iclr_2018_rylejExC-
Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes nodes' representation recursively from their neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts on reducing the receptive field size by subsampling neighbors do not have any convergence guarantee, and their receptive field size per node is still in the order of hundreds. In this paper, we develop a preprocessing strategy and two control variate based algorithms to further reduce the receptive field size. Our algorithms are guaranteed to converge to GCN's local optimum regardless of the neighbor sampling size. Empirical results show that our algorithms have a similar convergence speed per epoch with the exact algorithm even using only two neighbors per node. The time consumption of our algorithm on the Reddit dataset is only one fifth of previous neighbor sampling algorithms.
The paper proposes a method to speed up the training of graph convolutional networks, which are quite slow for large graphs. The key insight is to improve the estimates of the average neighbor activations (via neighbor sampling) so that we can either sample fewer neighbors or have higher accuracy for the same number of sampled neighbors. The idea is quite simple: estimate the current average neighbor activations as a delta over the minibatch running average. I was hoping the method would also include importance sampling, but it doesn't. The assumption that activations in a graph convolution are independent Gaussians is quite odd (and unproven).
Quality: Statistically, the paper seems sound. There are some odd assumptions (independent Gaussian activations in a graph convolution embedding?!?) but otherwise the proposed methodology is rather straightforward.
Clarity: It is well written and the reader is able to follow most of the details. I wish the authors had spent more time discussing the independent Gaussian assumption, rather than just arguing that a graph convolution (where units are not interacting through a simple grid like in a CNN) is equivalent to the setting of Wang and Manning (I don't see the equivalence). Wang and Manning are looking at MLPs, not even CNNs, which clearly have more independent activations than a CNN or a graph convolution.
Significance: Not very significant. The problem of computing better averages for a specific problem (the neighbor embedding average) seems a bit too narrow. The solution is straightforward, while some of the approximations make some odd simplifying assumptions (independent activations in a convolution, infinitesimal learning rates). Theorem 2 is not too useful, unfortunately: showing that the estimated gradient is asymptotically unbiased with learning rates approaching zero over Lipschitz functions does not seem like a useful statement. Learning rates will never be close enough to zero (especially for large batch sizes). And if the running activation average converges to the true value, the training is probably over. The method should show it helps when the values are oscillating in the early stages of the training, not when the training is done near the local optimum.
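For intuition, here is a rough numpy sketch of the estimator described above: the neighbor average is computed exactly from cheap, stale historical activations, plus a Monte Carlo correction estimated from only a few sampled neighbors. The random graph, the rescaling, and the variable names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 100, 16
A = rng.random((n, n)) < 0.05                         # random adjacency (row u = u's neighbors)
A_hat = A / np.maximum(A.sum(1, keepdims=True), 1)    # row-normalized adjacency

h = rng.standard_normal((n, d))                       # current activations (expensive)
h_hist = h + 0.1 * rng.standard_normal((n, d))        # stale activations kept from earlier steps

def cv_aggregate(u, num_samples=2):
    nbrs = np.flatnonzero(A[u])
    if len(nbrs) == 0:
        return np.zeros(d)
    # Exact term over all neighbors using the cheap historical activations ...
    base = A_hat[u, nbrs] @ h_hist[nbrs]
    # ... plus an unbiased Monte Carlo estimate of the (small) correction term,
    # using only a couple of sampled neighbors.
    s = rng.choice(nbrs, size=min(num_samples, len(nbrs)), replace=False)
    scale = len(nbrs) / len(s)
    correction = scale * (A_hat[u, s] @ (h[s] - h_hist[s]))
    return base + correction

exact = A_hat[3] @ h
print(np.linalg.norm(cv_aggregate(3) - exact))   # small, since h stays close to h_hist
```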
iclr_2018_HkuGJ3kCb
ALL-BUT-THE-TOP: SIMPLE AND EFFECTIVE POSTPROCESSING FOR WORD REPRESENTATIONS Real-valued word representations have transformed NLP applications; popular examples are word2vec and GloVe, recognized for their ability to capture linguistic regularities. In this paper, we demonstrate a very simple, and yet counter-intuitive, postprocessing technique - eliminate the common mean vector and a few top dominating directions from the word vectors - that renders off-the-shelf representations even stronger. The postprocessing is empirically validated on a variety of lexical-level intrinsic tasks (word similarity, concept categorization, word analogy) and sentence-level tasks (semantic textual similarity and text classification) on multiple datasets and with a variety of representation methods and hyperparameter choices in multiple languages; in each case, the processed representations are consistently better than the original ones.
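The postprocessing itself fits in a few lines; here is a numpy sketch assuming a matrix of word vectors (one row per word) and an arbitrary choice of D.

```python
import numpy as np

def all_but_the_top(V, D=2):
    """Remove the common mean vector, then project out the top-D principal
    directions of the centered embedding matrix V (rows are word vectors)."""
    V = V - V.mean(axis=0, keepdims=True)               # subtract the common mean
    U = np.linalg.svd(V, full_matrices=False)[2][:D]    # top-D right singular vectors
    return V - (V @ U.T) @ U                            # project them out

rng = np.random.default_rng(0)
vectors = rng.standard_normal((10_000, 300))   # stand-in for word2vec/GloVe vectors
processed = all_but_the_top(vectors, D=3)
print(np.abs(processed.mean(axis=0)).max())    # the common mean is gone (~0); the top-D
                                               # directions are nulled by construction
```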
This paper proposes a simple post-processing technique for word representations designed to improve representational quality and performance on downstream tasks. The procedure involves mean subtraction followed by projecting out the first D principle directions and is motivated by improving isotropy of the partition function. Extensive empirical analysis supports the efficacy of the approach. The idea of post-processing word embeddings to improve their performance is not new, but I believe the specific procedure and its connection to the concept of isotropy has not been investigated previously. Relative to other post-processing techniques, this method has a fair amount of theoretical justification, particularly as described in Appendix A. I think the experiments are reasonably comprehensive. All told, I think this is a good paper, but I do have some comments and questions that I think should be addressed before publication. 1) I think it is useful to analyze the distribution of singular values of the matrix of word vectors. However, I did not find the heuristic analysis based on the visual appearance of these distributions to be convincing. For example, in Fig. 1, it is not clear to me that there exists a separation between regimes of exponential decay and rough constancy. It would be ideal if a more quantitative metric is established that captures the main qualitative behavior alluded to here. Furthermore, the vocabulary size is likely to have a strong effect on the shape of the distributions. Are the plots in Fig. 4 for the same vocabulary size? Related to this, the dimensionality of the representation will have a strong effect on the shape, and this should be controlled for in Fig. 8. One way to do this would be to instead plot the density of singular values. Finally, for the Gaussian matrix simulations, in the asymptotic limit, the density of singular values depends only on the ratio of dimensions, i.e. the vector dimension to the vocabulary size. Fig. 4/8 might be more revealing if this ratio were controlled for. 2) It would be useful to describe why isotropy of the partition function is the goal, as opposed to isotropy of the vectors themselves. This may be argued in Arora et al. (2016), but summarizing that argument in this paper would be helpful. In fact, an additional experiment that would be very valuable would be to investigate empirically which form of isotropy is more effective in governing performance. One way to do this would be to enforce approximate isotropy of the partition function without also enforcing isotropy of the vectors themselves. Practically speaking, one might imagine doing this by requiring I = 1 to second order without also requiring that the mean vanish. I think this would allow for \sigma_max > \sigma_min while still satisfying I = 1 to second order. (But this is just off the top of my head -- there may be better ways to conduct this experiment). It is not clear to me why the experiment leading to Table 2 is a good proxy for the exact computation of I. It would be great if there were some mathematical justification for this approximation. Why does Fig. 3 use D=10, 20 when much smaller D are considered elsewhere? Also I think a log scale on the x-axis might be more informative. 3) It would be good to mention other forms of post-processing, especially in the context of word similarity. 
For example, in the original paper, GloVe advocates averaging the target and context vector representations, and normalizing across the feature dimension before computing cosine similarity. 4) I think it's likely that there is a strong connection between the optimal value of D and the frequency distribution of words in the evaluation dataset. While the paper does mention that D may depend on specifics of the dataset, etc., I would expect frequency-dependence to be the main factor, and it might be worth exploring this effect explicitly.
iclr_2018_S1pWFzbAW
The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding which complements conventional compression techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496× with the same model accuracy. This results in up to a 1.51× improvement over the state-of-the-art.
This paper proposes an interesting approach to compress the weights of a network for storage or transmission purposes. My understanding is that, at inference, the network is 'recovered'; therefore there is no difference in processing time (only slight differences in accuracy due to the approximation in recovering the weights).
- The idea is nice, although its applicability is limited, as it is only for distribution and storage of the model (is storage really a problem?).
Method:
- The idea of using the Bloomier filter is new to me. However, the paper is misleading, as the filtering is a minor part of the complete process. The paper introduces a complete pipeline including quantization and pruning to maximize the benefits of the filter, and an additional (optional) step to achieve further compression.
- The method / idea seems simple and easy to reproduce (except the subsequent steps, which are not clearly detailed).
Clarity:
- The paper could improve its clarity. At the moment, the Bloomier filter is the core but needs many other components to make it effective. Those components are not detailed to the level of being reproducible.
- One interesting point is the self-implementation of the Deep Compression algorithm. The paper claims this is a competitive representation as it achieves better compression than the original one. However, those numbers are not clear in the tables (only in Table 3 do the numbers seem to be equivalent to the ones in the text). This needs clarification: CSR achieves 81.8% according to Table 2 and 119 according to the text.
Results:
- Current results are interesting. However, I have several concerns:
1) It is not clear to me why similar performance is assumed. While Bloomier is weightless, the complete process involves many retraining steps involving performance loss. Analysis on this would be nice to see (I doubt it ends exactly at the same number). Section 3 explicitly suggests there is a need for retraining to mitigate the effect of false positives, which is then increased with pruning and quantization. Therefore, it would be nice to see the impact on accuracy (even if it is not the main focus of the work).
2) Results are focused on fully connected layers, which carry (for the given models) the largest number of weights (and therefore it is easy to get large compression numbers). What would happen in newer models where the fully connected layer is minimal compared to conv. layers? What about the accuracy impact there? Let's say in a ResNet-34.
3) I would like to see further analysis on why Bloomier filter encoding improves accuracy (or is that a typo and meant to be error?) by 2%. This is a large improvement without training from scratch.
4) It is interesting to me how the retraining process is 'hidden' all over the paper. At the beginning it is claimed that it takes about one hour for VGG-16 to compute the Bloomier filters. However, that is only a minimal portion of the entire pipeline. Later, in the experimental section, it is mentioned that 'tens of epochs' are needed for retraining (presumably to compensate for errors) after retraining to compensate for l1 pruning... Tens of epochs is a significant portion of the entire training process, assuming VGG is trained for 90 epochs max.
5) Interestingly, as mentioned in the paper, this is 'static compression'. That is, the model needs to be completely 'restored' before inference. This is misleading, as an embedded device will need the same requirements as any other at inference time (or maybe I am missing something). That is, the benefit is mainly for storage and transmission.
6) I would like to see the sensitivity analysis with respect to t and the number of clusters.
7) As mentioned before, LeNet is great, but it would be nice to see more complicated models (even a ResNet on CIFAR). These models are not only large in terms of parameters but also quite sensitive to modifications in the weight structure.
8) Results are focused on a single layer. What happens if all the layers are considered at the same time? Here I am also concerned about the retraining process (fixing one layer and retraining the deeper ones). How is this done using only fully connected layers? What is the impact of doing it over the whole network (let's say VGG-16, from the first convolutional layer to the very last)?
Summary: All in all, the idea has potential, but there are many missing details. I would like to see clearer and more comprehensive results in terms of modern models and on the complete model, not only the FC layer, including the accuracy impact.
iclr_2018_SkAK2jg0b
Transfer learning for feature extraction can be used to exploit deep representations in contexts where there is very few training data, where there are limited computational resources, or when tuning the hyper-parameters needed for training is not an option. While previous contributions to feature extraction propose embeddings based on a single layer of the network, in this paper we propose a full-network embedding which successfully integrates convolutional and fully connected features, coming from all layers of a deep convolutional neural network. To do so, the embedding normalizes features in the context of the problem, and discretizes their values to reduce noise and regularize the embedding space. Significantly, this also reduces the computational cost of processing the resultant representations. The proposed method is shown to outperform single layer embeddings on several image classification tasks, while also being more robust to the choice of the pre-trained model used for obtaining the initial features. The performance gap in classification accuracy between thoroughly tuned solutions and the full-network embedding is also reduced, which makes of the proposed approach a competitive solution for a large set of applications.
The paper addresses the scenario of using a pretrained deep network as a learnt feature representation for another (small) task where retraining is not an option or not desired. In this situation it proposes to use all layers of the network to extract features from, instead of only one layer. Then it proposes to standardize the different dimensions of the features based on their response on the original task. Finally, it discretizes each dimension into {-1, 0, 1} to compress the final concatenated feature representation. Doing this, it shows improvements over using a single layer for 9 target image classification datasets covering objects, scenes, textures, materials, and animals. The reviewer does not find the paper suitable for publication at ICLR due to the following reasons:
- The paper is incremental with limited novelty.
- The results are not encouraging.
- The pipeline of standardization and discretization is relatively costly, and the final feature vector is still large.
- Combining different layers, as the only contribution of the paper, has been done in the literature before, for instance: "The Treasure beneath Convolutional Layers: Cross-convolutional-layer Pooling for Image Classification" CVPR 2016
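The standardize-then-discretize step summarized above is only a few numpy lines; the ±0.25 thresholds below are an arbitrary assumption for illustration, not necessarily the paper's values.

```python
import numpy as np

def full_network_embedding(features, low=-0.25, high=0.25):
    """Standardize each feature dimension over the task's training set, then
    discretize the standardized values into {-1, 0, 1} (thresholds assumed)."""
    mu = features.mean(axis=0, keepdims=True)
    sd = features.std(axis=0, keepdims=True) + 1e-8
    z = (features - mu) / sd
    out = np.zeros_like(z, dtype=np.int8)
    out[z > high] = 1
    out[z < low] = -1
    return out

rng = np.random.default_rng(0)
# Stand-in for activations gathered from all layers of a pre-trained CNN
# (e.g. spatially averaged conv features concatenated with fc features).
feats = rng.standard_normal((500, 4096)).astype(np.float32)
emb = full_network_embedding(feats)
print(emb.dtype, np.unique(emb))        # int8, values in {-1, 0, 1}
```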
iclr_2018_SyKoKWbC-
In most current formulations of adversarial training, the discriminators can be expressed as single-input operators, that is, the mapping they define is separable over observations. In this work, we argue that this property might help explain the infamous mode collapse phenomenon in adversarially-trained generative models. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose distributional adversaries that operate on samples, i.e., on sets of multiple points drawn from a distribution, rather than on single observations. We show how they can be easily implemented on top of existing models. Various experimental results show that generators trained in combination with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with observation-wise prediction discriminators. In addition, the application of our framework to domain adaptation results in strong improvement over recent state-of-the-art.
The paper proposes to replace single-sample discriminators in adversarial training with discriminators that explicitly operate on distributions of examples, so as to incentivize the generator to cover the full distribution of the training data and not collapse to isolated modes. The idea of avoiding mode collapse by providing multiple samples to the discriminator is not new; the paper acknowledges prior work on minibatch discrimination but does not really describe the differences with previous work in any technical detail. Not being highly familiar with this literature, my reading is that the scheme in this paper grounds out into a somewhat different architecture than previous minibatch discriminators, with a nice interpretation in terms of a sample-based approximation to a neural mean embedding. However the paper does not provide any empirical evidence that their approach actually works better than previous approaches to minibatch discrimination. By comparing only to one-sample discriminators it leaves open the (a priori quite plausible) possibility that minibatch discrimination is generally a good idea but that other architectures might work equally well or better, i.e., the experiments do not demonstrate that the MMD machinery that forms the core of the paper has any real purchase. The paper also proposes a two-sample objective DAN-2S, in which the discriminator is asked to classify two sets of samples as coming from the same or different distributions. This is an interesting approach, although empirically it does not appear to have any advantage over the simpler DAN-S -- do the authors agree with this interpretation? If so it is still a worthwhile negative result, but the paper should make this conclusion explicit. Alternately if there are cases when the two-sample test is actually recommended, that should be made explicit as well. Overall this paper seems borderline -- a nice theoretical story, grounding out into a simple architecture that does seem to work in practice (the domain adaptation results are promising), but with somewhat sloppy writing and experimentation that doesn't clearly demonstrate the value of the proposed approach. I hope the authors continue to improve the paper by comparing to other minibatch discrimination techniques. It would also be helpful to see value on a real-world task where mode collapse is explicitly seen as a problem (and/or to provide some intuition for why this would be the case in the Amazon reviews dataset). Specific comments: - Eqn (2.2) is described as representing the limit of a converged discriminator, but it looks like this is just the general gradient of the objective --- where does D* enter into the picture? - Fig 1: the label R is never explained; why not just use P_x? - Section 5.1 "we use the pure distributional objective for DAN (i.e., setting λ != 0 in (3.5))" should this be λ = 0? - "Results" in the domain adaptation experiments are not clearly explained -- what do the reported numbers represent? (presumably accuracy(stddev) but the figure caption should say this). It is also silly to report accuracy to 2 decimal places when they are clearly not significant at that level.
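To make the "sample-based approximation to a neural mean embedding" reading concrete, here is a minimal numpy sketch in which a per-observation encoder is averaged over the minibatch before the decision layer, so the score depends on the whole sample rather than on individual points. All layer sizes and the encoder form are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, h = 8, 32
W1 = rng.standard_normal((h, d)) * 0.1        # per-observation encoder phi
w2 = rng.standard_normal(h) * 0.1             # decision layer on the mean embedding

def phi(X):
    return np.tanh(X @ W1.T)                  # (batch, h)

def distributional_discriminator(X):
    """Score a whole sample: encode each observation, average, then classify.
    A per-observation discriminator would instead apply w2 to every row of phi(X)."""
    mean_embedding = phi(X).mean(axis=0)      # empirical neural mean embedding
    return float(w2 @ mean_embedding)         # single score for the whole sample

real_sample = rng.standard_normal((64, d)) + 2.0
fake_sample = rng.standard_normal((64, d))
print(distributional_discriminator(real_sample),
      distributional_discriminator(fake_sample))
```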
iclr_2018_Hk0wHx-RW
Published as a conference paper at ICLR 2018 LEARNING SPARSE LATENT REPRESENTATIONS WITH THE DEEP COPULA INFORMATION BOTTLENECK Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates to sparsity of the latent space in the new model. We evaluate our method on artificial and real data.
This paper identifies and proposes a fix for a shortcoming of the Deep Information Bottleneck approach, namely that the induced representation is not invariant to monotonic transform of the marginal distributions (as opposed to the mutual information on which it is based). The authors address this shortcoming by applying the DIB to a transformation of the data, obtained by a copula transform. This explicit approach is shown on synthetic experiments to preserve more information about the target, yield better reconstruction and converge faster than the baseline. The authors further develop a sparse extension to this Deep Copula Information Bottleneck (DCIB), which yields improved representations (in terms of disentangling and sparsity) on a UCI dataset. (significance) This is a promising idea. This paper builds on the information theoretic perspective of representation learning, and makes progress towards characterizing what makes for a good representation. Invariance to transforms of the marginal distributions is clearly a useful property, and the proposed method seems effective in this regard. Unfortunately, I do not believe the paper is ready for publication as it stands, as it suffers from lack of clarity and the experimentation is limited in scope. (clarity) While Section 3.3 clearly defines the explicit form of the algorithm (where data and labels are essentially pre-processed via a copula transform), details regarding the “implicit form” are very scarce. From Section 3.4, it seems as though the authors are optimizing the form of the gaussian information bottleneck I(x,t), in the hopes of recovering an encoder $f_\beta(x)$ which gaussianizes the input (thus emulating the explicit transform) ? Could the authors clarify whether this interpretation is correct, or alternatively provide additional clarifying details ? There are also many missing details in the experimental section: how were the number of “active” components selected ? Which versions of the algorithm (explicit/implicit) were used for which experiments ? I believe explicit was used for Section 4.1, and implicit for 4.2 but again this needs to be spelled out more clearly. I would also like to see a discussion (and perhaps experimental comparison) to standard preprocessing techniques, such as PCA-whitening. (quality) The experiments are interesting and seem well executed. Unfortunately, I do not think their scope (single synthetic, plus a single UCI dataset) is sufficient. While the gap in performance is significant on the synthetic task, this gap appears to shrink significantly when moving to the UCI dataset. How does this method perform for more realistic data, even e.g. MNIST ? I think it is crucial to highlight that the deficiencies of DIB matter in practice, and are not simply a theoretical consideration. Similarly, the representation analyzed in Figure 7 is promising, but again the authors could have targeted other common datasets for disentangling, e.g. the simple sprites dataset used in the beta-VAE paper. I would have also liked to see a more direct and systemic validation of the claims made in the paper. For example, the shortcomings of DIB identified in Section 3.1, 3.2 could have been verified more directly by plotting I(y,t) for various monotonic transformations of x. A direct comparison of the explicit and implicit forms of the algorithms would also also make for a stronger paper in my opinion. 
Pros:
* Theoretically well motivated
* Promising results on synthetic task
* Potential for impact
Cons:
* Paper suffers from lack of clarity (method and experimental section)
* Lack of ablative / introspective experiments
* Weak empirical results (small or toy datasets only).
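As an aside on the explicit form discussed above: the copula pre-processing reduces to a per-dimension normal-scores transform (empirical CDF followed by the Gaussian quantile function). A numpy/scipy sketch under that reading:

```python
import numpy as np
from scipy.stats import norm

def copula_transform(X):
    """Map each column to standard-normal marginals: empirical CDF (ranks)
    followed by the Gaussian quantile function (normal scores)."""
    n = X.shape[0]
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1   # ranks 1..n per column
    return norm.ppf(ranks / (n + 1))

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))
# Strictly monotone transforms of each marginal leave the result unchanged,
# which is exactly the invariance the copula transformation is meant to restore.
X_mono = np.column_stack([np.exp(X[:, 0]), X[:, 1] ** 3, 5 * X[:, 2] + 1])
print(np.allclose(copula_transform(X), copula_transform(X_mono)))   # True
```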
iclr_2018_BkDB51WR-
We propose to tackle a time series regression problem by computing the temporal evolution of a probability density function to provide a probabilistic forecast. A Recurrent Neural Network (RNN) based model is employed to learn a nonlinear operator for the temporal evolution of a probability density function. We use a softmax layer for a numerical discretization of a smooth probability density function, which transforms a function approximation problem into a classification task. Explicit and implicit regularization strategies are introduced to impose a smoothness condition on the estimated probability distribution. A Monte Carlo procedure to compute the temporal evolution of the distribution for a multiple-step forecast is presented. The evaluation of the proposed algorithm on three synthetic and two real data sets shows an advantage over the compared baselines.
Interesting ideas that extend LSTMs to produce probabilistic forecasts for univariate time series; the experiments are okay. It is unclear if this would work at all on higher-dimensional time series. It is also unclear to me what the sources of the captured uncertainties are. The authors proposed to incorporate 2 different discretisation techniques into an LSTM, in order to produce probabilistic forecasts of univariate time series. The proposed approach deviates from the Bayesian framework where there are well-defined priors on the model, and the parameter uncertainties are subsequently updated to incorporate information from the observed data, and propagated to the forecasts. Instead, the conditional density p(y_t|y_{1:t-1}, \theta) was discretised by 1 of the 2 proposed schemes and parameterised by an LSTM. The LSTM was trained using discretised data and a cross-entropy loss with regularisations to account for the ordering of the discretised labels. Therefore, the uncertainties produced by the model appear to be a black box. It is probably unlikely that the discretisation method can be generalised to the high-dimensional setting?
Quality: The experiments with synthetic data sufficiently showed that the model can produce good forecasts and predictive standard deviations that agree with the ground truth. In the experiments with real data, it's unclear how good the uncertainties produced by the model are. It may be useful to compare to the uncertainty produced by a GP with suitable kernels. In Fig 6c, the 95pct CI looks more or less constant over time. Is there an explanation for that?
Clarity: The paper is well-written. The presentation of the ideas is pretty clear.
Originality: Above average. I think the regularisation techniques proposed to preserve the ordering of the discretised class labels are quite clever.
Significance: Average. It would be excellent if the authors could extend this to higher dimensional time series. I'm unsure about the correctness of Algorithm 1 as I don't have knowledge of SMC.
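As an illustration of the discretisation idea above, a small numpy sketch: the continuous target is binned, the model outputs a softmax over bins (turning regression into classification), and an explicit penalty on differences between adjacent bin probabilities encourages a smooth estimated density. The bin range, penalty form, and weight are my assumptions, not the paper's exact regularizers.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

n_bins = 50
edges = np.linspace(-3.0, 3.0, n_bins + 1)          # discretization of the target range

def loss(logits, y, lam=0.1):
    """Cross-entropy on the bin containing y, plus a smoothness regularizer
    penalizing jumps between adjacent bin probabilities."""
    p = softmax(logits)
    k = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)   # index of y's bin
    ce = -np.log(p[k] + 1e-12)
    smooth = np.sum((p[1:] - p[:-1]) ** 2)
    return ce + lam * smooth

rng = np.random.default_rng(0)
logits = rng.standard_normal(n_bins)                # would come from the LSTM at step t
print(loss(logits, y=0.7))
```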
iclr_2018_ry8dvM-R-
Published as a conference paper at ICLR 2018 ROUTING NETWORKS: ADAPTIVE SELECTION OF NON-LINEAR FUNCTIONS FOR MULTI-TASK LEARNING Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network - for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we obtain cross-stitch performance levels with an 85% reduction in training time.
Summary: The paper suggests to use a modular network with a controller which makes decisions, at each time step, regarding the next nodule to apply. This network is suggested a tool for solving multi-task scenarios, where certain modules may be shared and others may be trained independently for each task. It is proposed to learn the modules with standard back propagation and the controller with reinforcement learning techniques, mostly tabular. - page 4: In algorithm 2, line 6, I do not understand the reward computation. It seems that either a _{k+1} subscript index is missing for the right hand side R, or an exponent of n-k is missing on \gamma. In the current formula, the final reward affects all decisions without a decay based on the distance between action and reward gain. This issue should be corrected or explicitly stated. The ‘collaboration reward’ is not clearly justified: If I understand correctly, It is stated that actions which were chosen often in the past get higher reward when chosen again. This may create a ‘winner takes all’ effect, but it is not clear why this is beneficial for good routing. Specifically, this term is optimized when a single action is always chosen with high probability – but such a single winner does not seem to be the behavior we want to encourage. - Page 5: It is not described clearly (and better: defined formally) what exactly is the state representation. It is said to include the current network output (which is a vector in R^d), the task label and the depth, but it is not stated how this information is condensed into a single integer index for the tabular methods. If I understand correctly, the state representation used in the tabular algorithms includes only the current depth. If this is true, this constitutes a highly restricted controller, making decisions only based on depth without considering the current output. - The functional approximation versions are even less clear: Again it is not clear what information is contained in the state and how it is represented. In addition it is not clear in this case what network architecture is used for computation of the policy (PG) or valkue (Q-learning), and how exactly they are optimized. - The WPL algorithm is not clear to me o In algorithm box 3, what is R_k? I do not see it defined anywhere. Is it related to \hat{R}? how? o Is it assumed that the actions are binary? o I do not understand why positive gradients are multiplied with the action probability and negative gradients with 1 minus this probability. What is the source of a-symmetry between positive and negative gradients? - Page 6: o It is not clear why MNist is tested over 200 examples, where there is a much larger test set available o In MIN-MTL I do not understand the motivation from creating superclasses composed of 5 random classes each: why do we need such arbitrary and un-natural class definitions? - Page 7: The results on Cifar-100 are compared to several baselines, but not to the standard non-MTL solution: Solve the multi-class classification problem using a softmax loss and a unified, non routing architecture in which all the layers are shared by all classes, with the only distinction in the last classification layer. If the routing solution does not beat this standard baseline, there is no justification for its more complex structure and optimization. - Page 8: The author report that when training the controller with single agent methods the policy collapses into choosing a single module for most tasks. 
However, this is not surprising, given that the action-based reward (whose strength is unclear) seems to promote such winner-takes-all behavior.
Overall:
- The paper is highly unclear in its presentation of the method.
o There is no unified clear notation. The essential symbols (states, actions, rewards) are not formally defined, and often it is not clear even if they are integers, scalars, or vectors. In the notation that does exist, there are occasional errors.
o The reward is a) not clear, b) not well motivated when it is explained, and c) not explicitly stated anywhere: it is said that the action-specific reward may be up to 10 times larger than the final reward, but the actual tradeoff parameter between them is not stated. Note that this parameter is important, as using a 10-times larger action-related reward means that the classification-related reward becomes insignificant.
o The state representation used is not clear, and if I understand correctly, it includes only the current depth. This is a severely limited state representation, which does not enable learning actions based on intermediate results.
o The continuous versions of the RL algorithms are not explained at all: neither the state representation nor the optimization is described.
o The presentation suffers from severe over-generalization and lack of clarity, which prevented me from understanding the network and algorithms for a specific case. Instead, I would recommend that in future versions of this document a single network, with a specific router and set of decisions, and with a single algorithm, be explained with clear notation end-to-end.
Beyond the clarity issues, I also suspect that the novelty is minor (if the state does not include any information about the current output) and that the empirical baseline is lacking. However, it is hard to judge these due to the lack of clarity.
After revision:
- Most of the clarity issues were handled well, and the paper now reads nicely.
- It is now clear that routing is not done based on the current input (an example is not dynamically routed based on its current representation). Instead, routing depends on the task and depth only. This is still interesting, but is far from reaching context-dependent routing.
- The results presented are nice and show that task-dependent routing may be better than the plain baseline or the stitching alternative. However, since this is a task-transfer issue, I believe several data-size points should be tested. For example, as data size rises, the task-specific-all-fc alternative is expected to get stronger (as with more data, related tasks are less required for good performance).
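To make the page-4 reward concern concrete, here is a minimal sketch (in my own notation, not necessarily the authors') of the discounted return I would have expected in Algorithm 2, line 6, for a trajectory of n routing decisions with per-step collaboration rewards r_i and a final classification reward R_final:

\hat{R}_k = \sum_{i=k}^{n} \gamma^{\,i-k}\, r_i \;+\; \gamma^{\,n-k}\, R_{\mathrm{final}}

so that the decision taken at depth k is discounted by \gamma^{n-k} with respect to the final reward. As printed, the formula appears to apply the same undiscounted R to every decision, regardless of how far that decision is from the reward.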
iclr_2018_rJoXrxZAZ
This paper introduces HybridNet, a hybrid neural network to speed up autoregressive models for raw audio waveform generation. As an example, we propose a hybrid model that combines an autoregressive network named WaveNet and a conventional LSTM model to address speech synthesis. Instead of generating one sample per time-step, the proposed HybridNet generates multiple samples per time-step by exploiting the long-term memory utilization property of LSTMs. In the evaluation, when applied to text-to-speech, HybridNet yields state-of-the-art performance. HybridNet achieves a 3.83 subjective 5-scale mean opinion score on US English, largely outperforming a same-size WaveNet in terms of naturalness and providing a 2x speed-up at inference.
This paper presents HybridNet, a neural speech (and other audio) synthesis system (vocoder) that combines the popular and effective WaveNet model with an LSTM, with the goal of offering a model with faster inference-time audio generation.
Summary: The proposed model, HybridNet, is a fairly straightforward variation of WaveNet and thus the paper offers relatively low novelty. There is also a lack of detail regarding the human judgement experiments that makes the significance of the results difficult to interpret.
Low novelty of approach / impact assessment: The proposed model is based closely on WaveNet, an existing state-of-the-art vocoder model. The proposal here is to extend WaveNet to include an LSTM that will generate samples between WaveNet samples -- thus allowing WaveNet to sample at a lower sample frequency (sketched after this review). WaveNet is known for being relatively slow at test-time generation, so allowing it to run at a lower sample frequency should decrease generation time. The introduction of a local LSTM is perhaps not a sufficiently significant innovation. Another issue that lowers the assessment of the likely impact of this paper is that there are already a number of alternative mechanisms to deal with the sampling speed of WaveNet. In particular, the cited method of Ramachandran et al. (2017) uses caching and other tricks to achieve a speed-up of 21 times over WaveNet (compared to the 2-4 times speed-up of the proposed method). The authors suggest that these are orthogonal strategies that can be combined, but the combination is not attempted in this paper. There are also other methods such as SampleRNN (Mehri et al. 2017) that are faster than WaveNet at inference time. The authors do not compare to this model.
Inappropriate evaluation: While the model is motivated by the need to reduce the generation time of WaveNet sampling, the evaluation is largely based on the quality of the sampling rather than the speed of sampling. The results are roughly calibrated to demonstrate that HybridNet produces higher-quality samples when (roughly) adjusted for sampling time. The more appropriate basis of comparison is to compare sample time as a function of sample quality.
Experiments: Few details are provided regarding the human judgment experiments with Mechanical Turkers. As a result it is difficult to assess the appropriateness of the evaluation and therefore the significance of the findings. I would also be much more comfortable with this quality assessment if I were able to hear the samples for myself and compare the quality of the WaveNet samples with the HybridNet samples. I would also like to compare the WaveNet samples generated by the authors' implementation with the WaveNet samples posted by van den Oord et al. (2017).
Minor comments / questions: How, specifically, is validation error defined in the experiments? There are a few language glitches distributed throughout the paper.
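For concreteness, here is a minimal pseudocode sketch of the interleaved generation scheme as I understand it from the paper (WaveNet anchors every k-th sample, an LSTM fills the samples in between); the function names and the single-stream interface are my own illustrative assumptions, not the authors' API:

def generate(wavenet_sample, lstm_fill, n_samples, k=2):
    # Hypothetical sketch: WaveNet emits every k-th sample, the LSTM fills the gaps.
    audio, lstm_state = [], None
    while len(audio) < n_samples:
        x = wavenet_sample(audio)                 # slow autoregressive "anchor" sample
        audio.append(x)
        for _ in range(k - 1):                    # cheap LSTM steps in between
            y, lstm_state = lstm_fill(audio, lstm_state)
            audio.append(y)
    return audio[:n_samples]

Under this reading the reported 2x speed-up for k = 2 comes from halving the number of WaveNet forward passes, which is why comparing sampling time at matched quality (rather than quality at matched model size) seems like the right evaluation.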
iclr_2018_rJSr0GZR-
Most deep latent factor models choose simple priors for simplicity, tractability or not knowing what prior to use. Recent studies show that the choice of the prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators to transform manually selected simple priors into ones that can better characterize the data distribution. Experimental results show that the proposed model can generate better image quality and learn better disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we present its ability to do cross-domain translation in a text-to-image synthesis task.
This paper proposes a simple extension of adversarial auto-encoders for (conditional) image generation. The general idea is that instead of using a Gaussian prior, the proposed algorithm uses a "code generator" network to warp the Gaussian distribution, such that the internal prior of the latent encoding space is more expressive and complicated.
Pros:
- The proposed idea is simple and easy to implement.
- The results show improvement in terms of visual quality.
Cons:
- I agree that the proposed prior should better capture the data distribution. However, incorporating a generic prior over the latent space plays a vital role as regularisation, which helps avoid model collapse. Adding a complicated code-generation network brings too much flexibility to the prior part. This makes both the prior and the posterior learnable, which makes it easier to fool the regularisation discriminator (think about the latent code and prior code collapsing to two different points; see the sketch after this review). As a result, this weakens the regularisation over the latent encoder space.
- The above could be verified through qualitative results, as shown in Fig. 5. I believe this is a result of the fact that the adversarial loss in the regularisation phase does not have a significant influence there.
- I have some doubts over why AAE works so poorly when the latent dimension is 2000. How do you make sure it is not an implementation problem, or that the model was not trapped in bad local optima / saddle points? Could you justify this?
- Contributions: this paper proposes an improvement over an existing model. However, neither can the ideas/insights it brings be applied to other generative models, nor does the improvement bring a significant gain over the state of the art. I am wondering what the community will learn from this paper, or what the authors would like to claim as significant contributions.
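To make the collapse concern in the first point above concrete, here is a minimal sketch of the regularisation phase with a learned code-generator prior, as I understand it; the module names, latent size, and optimizer wiring are my own placeholders rather than the authors' implementation:

import torch

def regularisation_step(encoder, code_gen, disc, x, opt_disc, opt_enc):
    eps = torch.randn(x.size(0), 64)                  # Gaussian seed (latent size is an assumption)
    z_prior = code_gen(eps)                           # learned, "warped" prior sample
    z_post = encoder(x)                               # posterior code for a data batch
    # Discriminator: distinguish warped-prior codes from posterior codes.
    d_loss = -(torch.log(disc(z_prior.detach()) + 1e-8)
               + torch.log(1 - disc(z_post.detach()) + 1e-8)).mean()
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
    # Encoder: fool the discriminator. Because the prior side (code_gen) is itself
    # trainable, the two distributions can meet on a degenerate solution, which is
    # exactly the weakening of the regularisation discussed above.
    e_loss = -torch.log(disc(encoder(x)) + 1e-8).mean()
    opt_enc.zero_grad(); e_loss.backward(); opt_enc.step()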
iclr_2018_Syx6bz-Ab
Relational databases store a significant amount of the world's data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in-the-loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. By applying policy-based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.
This paper presents a new approach to support the conversion from natural language to database queries. One of the major contributions of the work is the introduction of a new real-world benchmark dataset based on questions over Wikipedia. The scale of the dataset is significantly larger than any existing ones. However, from the technical perspective, the reviewer feels this work has limited novelty and does not advance the research frontier by much. The detailed comments are listed below.
1) Limitation of the dataset: While the authors claim this is a general approach to support seq2sql, their dataset only covers simple queries in the form of an aggregate-where-select structure (an illustrative example is sketched after this review). Therefore, their proposed approach is actually an advanced version of template filling, which considers the expression/predicate for one of the three operators at a time, e.g., (Giordani and Moschitti, 2012).
2) Limitation of generalization: Since the design of the algorithms is purely based on their own WikiSQL dataset, the reviewer doubts whether their approach could be generalized to handle more complicated SQL queries, e.g., (Li and Jagadish, 2014). The high complexity of real-world SQL stems from the challenges of appropriately connecting tables via primary/foreign keys and of recursive/nested queries.
3) Comparisons to existing approaches: Since it is a template-based approach in nature, the authors should shrink the problem scope in their abstract/introduction and compare against existing template approaches. While there are tons of semantic parsing works, which have grown exponentially fast in the last two years, these works are actually handling more general problems than this submission does. It thus makes sense when the performance of semantic parsing approaches on a constrained domain, such as WikiSQL, is not comparable to the proposal in this submission. However, that only proves their method is fully optimized for their own template.
In conclusion, the reviewer believes the problem scope they solve is much smaller than their claim, which makes the submission slightly below the bar of ICLR. The authors must carefully consider how their proposed approach could be generalized to handle wider workloads beyond their own WikiSQL dataset.
PS: After reading the comments on OpenReview, the reviewer feels recent studies, e.g., (Guu et al., ACL 2017), (Mou et al., ICML 2017) and (Yin et al., IJCAI 2016), deserve more discussion in the submission because they are strongly relevant and published at peer-reviewed conferences.
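To illustrate what the aggregate-where-select template in point 1 looks like, here is a small hypothetical example in the WikiSQL style; the question, column names, and values are invented for illustration and are not taken from the dataset itself:

# Hypothetical example of the aggregate-where-select template (invented, not from WikiSQL).
question = "How many players scored more than 20 points?"
query = {
    "agg": "COUNT",                      # one aggregation operator (or none)
    "select": "player",                  # one selected column
    "where": [("points", ">", "20")],    # conjunction of simple column-op-value conditions
}
col, op, val = query["where"][0]
sql = f"SELECT {query['agg']}({query['select']}) FROM table WHERE {col} {op} {val}"
# -> SELECT COUNT(player) FROM table WHERE points > 20

Because each query is fully determined by filling these three slots, with no joins, nesting, or ordering clauses, the task is much closer to template filling than to general text-to-SQL translation, which is the scope concern raised above.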
iclr_2018_Bk8ZcAxR-
EIGENOPTION DISCOVERY THROUGH THE DEEP SUCCESSOR REPRESENTATION Options in reinforcement learning allow agents to hierarchically decompose a task into subtasks, having the potential to speed up learning and planning. However, autonomously learning effective sets of options is still a major challenge in the field. In this paper we focus on the recently introduced idea of using representation learning methods to guide the option discovery process. Specifically, we look at eigenoptions, options obtained from representations that encode diffusive information flow in the environment. We extend the existing algorithms for eigenoption discovery to settings with stochastic transitions and in which handcrafted features are not available. We propose an algorithm that discovers eigenoptions while learning non-linear state representations from raw pixels. It exploits recent successes in the deep reinforcement learning literature and the equivalence between proto-value functions and the successor representation. We use traditional tabular domains to provide intuition about our approach and Atari 2600 games to demonstrate its potential.
Eigenoption Discovery Through the Deep Successor Representation
The paper is a follow-up on previous work by Machado et al. (2017) showing how proto-value functions (PVFs) can be used to define options called "eigenoptions". In essence, Machado et al. (2017) showed that, in the tabular case, if you interpret the difference between PVFs as pseudo-rewards you end up with useful options. They also showed how to extend this idea to the linear case: one replaces the Laplacian normally used to build PVFs with a matrix formed by sampling differences phi(s') - phi(s), where phi are features.
The authors of the current submission extend the approach above in two ways: they show how to deal with stochastic dynamics and how to replace a linear model with a nonlinear one. Interestingly, the way they do so is through the successor representation (SR). Stachenfeld et al. (2014) showed that PVFs can be obtained as a linear transformation of the eigenvectors of the matrix formed by stacking all SRs of an MDP. Thus, if we have the SR matrix we can replace the Laplacian mentioned above (a sketch of my understanding of this construction is given after this review). This provides benefits already in the tabular case, since SRs naturally extend to domains with stochastic dynamics. On top of that, one can apply a trick similar to the one used in the linear case -- that is, construct the matrix representing the diffusion model by simply stacking samples of the SRs. Thus, if we can learn the SRs, we can extend the proposed approach to the nonlinear case. The authors propose to do so by having a deep neural network similar to Kulkarni et al. (2016)'s Deep Successor Representation. The main difference is that, instead of using an auto-encoder, they learn features phi(s) such that the next state s' can be recovered from it (they argue that this way psi(s) will retain information about aspects of the environment the agent has control over).
This is a well-written paper with interesting (and potentially useful) insights. I only have a few comments regarding some aspects of the paper that could perhaps be improved, such as the way eigenoptions are evaluated.
One question left open by the paper is the strategy used to collect data in order to compute the diffusion model (and thus the options). In order to populate the matrix that will eventually give rise to the PVFs, the agent must collect transitions. The way the authors propose to do it is to have the agent follow a random policy. So, in order to have options that lead to more direct, "purposeful" behaviour, the agent must first wander around in a random, purposeless way, and hope that this will lead to a reasonable exploration of the state space. This problem is not specific to the proposed approach, though: in fact, any method to build options will have to resolve the same issue.
One related point that is perhaps more specific to this particular work is the strategy used to evaluate the options built: the diffusion time, or the expected number of steps between any two states of an MDP when following a random walk. First, although this metric makes intuitive sense, it is unclear to me how much it reflects control performance, which is what we ultimately care about. Perhaps more important, measuring performance using the same policy used to build the options (the random policy) seems somewhat unsatisfactory to me. To see why, suppose that the options were constructed based on data collected by a non-random policy that only visits a subspace of the state space.
In this case it seems likely that the decrease in the diffusion time would not be as apparent as in the experiments of the paper. Conversely, if the diffusion time were measured under another policy, it also seems likely that options built with a random policy would not perform so well (assuming that the state space is reasonably large to make an exhaustive exploration infeasible). More generally, we want options built under a given policy to reduce the diffusion time of other policies (preferably ones that lead to good control performance). Another point associated with the evaluation of the proposed approach is the method used to qualitatively assess options in the Atari experiments described in Section 4.2. In the last paragraph of page 7 the authors mention that eigenoptions are more effective in reducing the diffusion time than “random options” built based on randomly selected sub-goals. However, looking at Figure 4, the terminal states of the eigenoptions look a bit like randomly-selected sub-goals. This is especially true when we note that only a subset of the options are shown: given enough random options, it should be possible to select a subset of them that are reasonably spread across the state space as well. Interestingly, one aspect of the proposed approach that seems to indeed be an improvement over random options is made visible by a strategy used by the authors to circumvent computational constraints. As explained in the second paragraph of page 8, instead of learning policies to maximize the pseudo-rewards associated with eigenoptions the authors used a myopic policy that only looks one step ahead (which is the same as having a policy learned with a discount factor of zero). The fact that these myopic policies are able to navigate to specific locations and stay there suggests that the proposed approach gives rise to dense pseudo-rewards that are very informative. As a comparison, when we define a random sub-goal the resulting reward is a very sparse signal that would almost certainly not give rise to useful myopic policies. Therefore, one could argue that the proposed approach not only generate useful options, it also gives rise to dense pseudo-rewards that make it easier to build the policies associated with them.
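For concreteness, here is a minimal tabular sketch of the eigenoption construction as I understand it from the paper: the eigenvectors of the successor representation matrix play the role of PVFs, and each eigenvector defines an intrinsic pseudo-reward over transitions. The tabular simplification and the variable names are mine.

import numpy as np

def eigenoption_rewards(sr_matrix, num_options):
    # Eigendecomposition of the SR matrix; its eigenvectors are (up to a linear map)
    # the proto-value functions used to define eigenoptions.
    eigvals, eigvecs = np.linalg.eig(sr_matrix)
    order = np.argsort(-np.abs(eigvals))
    rewards = []
    for i in order[:num_options]:
        e = np.real(eigvecs[:, i])
        # Intrinsic reward for a transition s -> s': e^T (phi(s') - phi(s));
        # with one-hot (tabular) features this is simply e[s'] - e[s].
        rewards.append(lambda s, s_next, e=e: e[s_next] - e[s])
    return rewards

An option is then obtained by learning a policy that maximizes this pseudo-reward (the Atari experiments replace the learned policy with the myopic, one-step version discussed above).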
iclr_2018_ryj0790hb
Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added task, typically as many as the original network. We propose a method called Deep Adaptation Networks (DAN) that constrains newly learned filters to be linear combinations of existing ones. DANs preserve performance on the original task, require a fraction (typically 13%) of the number of parameters compared to standard fine-tuning procedures, and converge in fewer cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.
This paper proposes to adapt convnet representations to new tasks while avoiding catastrophic forgetting by learning a per-task "controller" specifying weightings of the convolutional filters throughout the network while keeping the filters themselves fixed.
Pros
The proposed approach is novel and broadly applicable. By definition it maintains the exact performance on the original task, and enables the network to transfer to new tasks using a controller with a small number of parameters (asymptotically smaller than that of the base network). The method is tested on a number of datasets (each used as source and target) and shows good transfer learning performance on each one. A number of different fine-tuning regimes are explored. The paper is mostly clear and well-written (though with a few typos that should be fixed).
Cons/Questions/Suggestions
The distinction between the convolutional and fully-connected layers (called "classifiers") in the approach description (sec 3) is somewhat arbitrary -- after all, convolutional layers are a generalization of fully-connected layers. (This is hinted at by the mention of fully convolutional networks.) The method could just as easily be applied to learn a task-specific rotation of the fully-connected layer weights. A more systematic set of experiments could compare learning the proposed weightings on the first K layers of the network (for K={0, 1, …, N}) and learning independent weights for the latter N-K layers, but I understand this would be a rather large experimental burden.
When discussing the controller initialization (sec 4.3), it's stated that the diagonal init works the best, and that this means one only needs to learn the diagonals to get the best results. Is this implying that the gradients w.r.t. off-diagonal entries of the controller weight matrix are 0 under the diagonal initialization, hence the off-diagonal entries remain zero after learning? It's not immediately clear to me whether this is the case -- it could help to clarify this in the text. If the off-diag gradients are indeed 0 under the diag init, it could also make sense to experiment with an "identity+noise" initialization of the controller matrix, which might give the best of both worlds in terms of flexibility and inductive bias to maintain the original representation. (Equivalently, one could treat the controller-weighted filters as a "residual" term on the original filters F with the controller weights W initialized to noise, with the final filters being F+(W\crossF) rather than just W\crossF; a sketch of this alternative is given after this review.)
The dataset classifier (sec 4.3.4) could be learnt end-to-end by using a softmax output of the dataset classifier as the alpha weighting. It would be interesting to see how this compares with the hard thresholding method used here. (As an intermediate step, the performance could also be measured with the dataset classifier trained in the same way but used as a soft weighting, rather than the hard version rounding alpha to 0 or 1.)
Overall, the paper is clear and the proposed method is sensible, novel, and evaluated reasonably thoroughly.
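To spell out the "residual" alternative suggested in the parenthetical above, here is a minimal numerical sketch in my own notation (the filter shapes are illustrative only):

import numpy as np

F = np.random.randn(64, 3 * 3 * 3)            # frozen base filters, flattened

# DAN as I understand it: new filters are linear recombinations of the old ones,
# with the controller W initialized at (or near) the identity.
W_dan = np.eye(64)
filters_dan = W_dan @ F                       # equals F at initialization

# Suggested variant: keep F explicit and let a noise-initialized W model only the
# task-specific correction, i.e. F + (W x F) instead of (W x F).
W_res = 0.01 * np.random.randn(64, 64)
filters_res = F + W_res @ F

The two parameterizations are algebraically equivalent (W_dan = I + W_res), but the residual form makes the "identity + noise" initialization and the inductive bias towards the original representation explicit.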
iclr_2018_SkfNU2e0Z
Building deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network's architecture. The central question of this work is how the temporal nature of reality should be reflected in the execution of a deep neural network and its components. Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers, and the layers themselves consist of elemental building blocks, such as single units. For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner. In contrast, all elements of a biological neural network are processed in parallel. In this paper, we define a class of networks between these two extreme cases. These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing. Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections. We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity. We lay out basic properties and discuss major challenges for layerwise-parallel networks. Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks.
This paper introduces a new toolbox for deep neural network learning and evaluation. The central idea is to include time in the processing of all the units in the network. For this, the authors propose a paradigm switch: from layerwise-sequential networks, where at every time frame the network is evaluated by updating each layer – from bottom to top – sequentially; to layerwise-parallel networks, where all the neurons are updated in parallel. The new paradigm implies that the layer update is achieved by using the stored previous state and the corresponding previous state of the previous layer (a sketch of the two update schemes is given after this review). This has three consequences. First, every layer now uses memory, a condition that already applies for RNNs in layerwise-sequential networks. Second, in order to have a consistent output, the information has to flow in the network for a number of time frames equal to the number of layers. In neuroscience, this concept is known as reaction time. Third, since the network is not synchronized in terms of the information that is processed in a specific time frame, there are discrepancies w.r.t. the layerwise-sequential network computation: all the techniques used to train deep NNs have to be reconsidered.
Overall, the concept is interesting and timely, especially for the rising field of spiking neural networks or for large and distributed architectures. The paper, however, should probably provide more examples and results in terms of architectures that can be implemented with the toolbox, in comparison with other toolboxes. The paper presents a single example in which neither the accuracy nor the training time is reported. While I understand that the main result of this work is the toolbox itself, more examples and results would improve the clarity and the implications of such a paradigm switch. Another concern comes from the choice to use Theano as back-end, since it is known that it is going to be discontinued. Finally, I suggest improving the clarity and description of Figure 2, which is messy and confusing, especially if printed in B&W.
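Here is a minimal sketch of the two execution schemes as I read them; the list-of-states representation and the layer functions f are my own illustrative choices:

# Layerwise-sequential: within one time frame, layer l sees the CURRENT output of layer l-1.
def step_sequential(f, states, x):
    states = list(states)
    states[0] = x
    for l in range(1, len(f) + 1):
        states[l] = f[l - 1](states[l - 1])
    return states

# Layerwise-parallel: all layers update at once from the PREVIOUS frame's stored states,
# so a new input needs as many frames as there are layers to reach the output
# (the "reaction time" mentioned above).
def step_parallel(f, states, x):
    new_states = list(states)
    new_states[0] = x
    for l in range(1, len(f) + 1):
        new_states[l] = f[l - 1](states[l - 1])
    return new_states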
iclr_2018_Sy0GnUxCb
EMERGENT COMPLEXITY VIA MULTI-AGENT COMPETITION
Reinforcement learning algorithms can train agents that solve problems in complex, interesting environments. Normally, the complexity of the trained agent is closely related to the complexity of the environment. This suggests that a highly capable agent requires a complex environment for training. In this paper, we point out that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself. We also point out that such environments come with a natural curriculum, because for any skill level, an environment full of agents of this level will have the right level of difficulty. This work introduces several competitive multi-agent environments where agents compete in a 3D world with simulated physics. The trained agents learn a wide variety of complex and interesting skills, even though the environments themselves are relatively simple. The skills include behaviors such as running, blocking, ducking, tackling, fooling opponents, kicking, and defending using both arms and legs. A highlight of the learned behaviors can be found here: https://goo.gl/eR7fbX.
In this paper, the authors produced quite cool videos showing the acquisition of highly complex skills, and they are happy about it. If you read the conclusion, this is the only message they put forward, and to me this is not a scientific message. A more classical summary is that the authors use PPO, a state-of-the-art deep RL method, in a context where two agents are trained to perform competitive games against each other. They reuse a very recent "dense reward" technique to bootstrap the agent skills, and then anneal it to zero so that the competitive rewards obtained from defeating the opponent take the lead. They study the effect of this annealing process (considered as a curriculum) and of various strategies for sampling the opponents. The main outcome is the acquisition of a large variety of useful skills, just observed from videos of the competitions. The main issue with this paper is the lack of scientific analysis of the results, together with many local issues in the presentation of these results. Below, I talk directly to the authors.
---------------------------------
The related work subsection is just a list of works; it should explain how the proposed work positions itself with respect to these works.
In Section 5.2, you are just describing "cool" behaviors observed from your videos. Science is about producing quantitative results, analyzing them and discussing them. I would be glad to read more science about these cool behaviors. Can you define a repertoire of such behaviors? Determine how often they are discovered? Study how they are represented in the networks? Anything beyond "look, that's great!" would make the paper better...
By the end of Section 5.2, you allude to transfer learning phenomena. It would be nice to study these transfer effects in your results with a quantitative methodology.
Section 5.3 is more scientific, but it has serious issues. In all subfigures in Figure 3, the performance of opponents should be symmetric around 50%. This is not the case for subfigures (b) and (c-1). Why? Do they correspond to a non-zero-sum game? The x-label is "version". Don't you mean "number of epochs", or something like this? Why do the last 2 images share the same caption? I had a hard time understanding the message from Table 1. It really needs a line before the last row and a more explicative caption. Still in 5.3, "These results echo"...: can you characterize this echo? What is the relationship to this other work? Again, "These results shed further light": further with respect to what? Can you be more explicit about what we learn? Also, I find that annealing one kind of reward with respect to another is a weak form of curriculum learning (my reading of this annealing scheme is sketched just below). This should be further discussed.
In Section 5.4, the idea of using many opponents from many stages of learning is not new. If I'm correct, the same was done in evolutionary methods to escape the "arms race" dead-end in prey-predator races quite a while ago (see e.g. "Coevolving predator and prey robots: Do "arms races" arise in artificial evolution?", Nolfi and Floreano, 1998).
Section 5.5.1 would deserve a more quantitative presentation of the effect of randomization. Actually, in Fig. 5 the axes are not labelled. I don't believe it shows a win-rate. So probably the caption (or the image) is wrong.
In Section 5.5.2, you "suspect this is because...". The role of a scientific paper is to clearly establish results and explanations from solid quantitative analysis.
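For reference, my reading of the annealed reward scheme is the following; the notation and especially the linear schedule are my own assumptions, since the actual value of \alpha is never stated. Within an episode of length T at training iteration u:

r_t = \alpha_u \, r^{\mathrm{dense}}_t \quad (t < T), \qquad r_T = \alpha_u \, r^{\mathrm{dense}}_T + (1 - \alpha_u)\, R^{\mathrm{compete}}, \qquad \alpha_u = \max\!\big(0,\, 1 - u/U_{\mathrm{anneal}}\big),

where R^{compete} is the sparse win/loss reward and U_anneal corresponds to the reported 10-15% of training epochs. Under this reading, the schedule of \alpha and the relative magnitude of the two terms are exactly the "curriculum" that should be characterized quantitatively.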
-------------------------------------------
More local comments:
Abstract: "Normally, the complexity of the trained agent is closely related to the complexity of the environment." Here you could cite Herbert Simon (1962).
"In this paper, we point out that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself." Well, for an agent, the other agent(s) are part of its environment, aren't they? So I don't like this perspective that the environment itself is "simple".
Intro: "RL is exciting because good RL exists." I don't believe this is a strong argument. There are many good things that exist which are not exciting.
"In general, training an agent to perform a highly complex task requires a highly complex environment, and these can be difficult to create." Well, the standard perspective is the other way round: in general, you face a complex problem, then you need to design a complex agent to solve it, and this is difficult.
"This happens because no matter how weak or strong an agent is, an environment populated with other agents of comparable strength provides the right challenge to the agent, facilitating maximally rapid learning and avoiding getting stuck." This is not always true. The literature is full of examples where two-player competitions end up with oscillations between two solutions rather than ever-increasing skill performance. See the prey-predator literature pointed to above.
"in the domain of continuous control, where balance, dexterity, and manipulation are the key skills." In robotics, dexterity and manipulation usually refer to using the robot's hand(s), a capability which is not shown here.
In the preliminaries and notation, what you describe corresponds to the framework of Dec-POMDPs; you should position yourself with respect to this framework (see e.g. Memory-Bounded Dynamic Programming for DEC-POMDPs, S. Seuken and S. Zilberstein).
In the PPO description: Let l_t(\theta) ... denote the likelihood ratio: of what?
p5: would train on the dense reward for about 10-15% of the training epochs. So how much is \alpha_t? How did you tune it? Was it hard?
p6: you give the agent the mass: does the mass change over time???
In observations: Are both agents given different observations? Could you specify which is given what?
In algorithm parameters: why do you have to anneal longer for kick-and-defend? What is the underlying phenomenon?
In Section 5, the text mentions Fig. 5 before Fig. 4.
-------------------------------------------------
Typos:
p4: research(Andrychowicz => missing space
straight forward => straightforward
p5: agent like humanoid(s) from exi(s)ting work
p6: eq. 1 => Eq. (1) (you should use \eqref{})
In section 4.1 => In Section 4.1 (same p7 for Section 4.2)
"One question that arises is the extent to which the outcome of learning is affected by this exploration reward and to explore the benefit of this exploration reward. As already argued, we found the exploration reward to be crucial for learning as otherwise the agents are unable to explore the sparse competition reward." => One question that arises is the extent to which the outcome of learning is affected by this exploration reward and to explore its benefit. As already argued, we found it to be crucial for learning as otherwise the agents are unable to explore the sparse competition reward.
p8: in a local minima => minimum
p9: in references, you have Jakob Foerster and Jakob N Foerster => try to be more consistent.
p10: In Laetitia Matignon et al. ...
markov => Markov
p11: I would rename C_{alive} as C_{standing}
iclr_2018_HJXOfZ-AZ
According to parallel distributed processing (PDP) theory in psychology, neural networks (NN) learn distributed rather than interpretable localist representations. This view has been held so strongly that few researchers have analysed single units to determine if this assumption is correct. However, recent results from psychology, neuroscience and computer science have shown the occasional existence of local codes emerging in artificial and biological neural networks. In this paper, we undertake the first systematic survey of when local codes emerge in a feed-forward neural network, using generated input and output data with known qualities. We find that the number of local codes that emerge from a NN follows a well-defined distribution across the number of hidden layer neurons, with a peak determined by the size of input data, number of examples presented and the sparsity of input data. Using a 1-hot output code drastically decreases the number of local codes on the hidden layer. The number of emergent local codes increases with the percentage of dropout applied to the hidden layer, suggesting that the localist encoding may offer a resilience to noisy networks. This data suggests that localist coding can emerge from feed-forward PDP networks and suggests some of the conditions that may lead to interpretable localist representations in the cortex. The findings highlight how local codes should not be dismissed out of hand.
This paper studies the development of localist representations in the hidden layers of feed-forward neural networks. The idea is interesting and the findings are intriguing. Local codes increase understandability and could be important for better understanding natural neural networks. Understanding how local codes form and the factors that increase their likelihood is critically important. This is a good start in that direction, but still leaves open many questions. The issues raised in the Conclusions section are also very interesting -- do the local codes increase with networks that generalize better, or with overtrained networks?
A weakness in this paper (admitted by the authors in the Conclusions section) is the dependence of the results on the form of input representation. If we consider the Jennifer Aniston cells, they do not receive inputs as well separated as those modeled in this paper. In fact the input representation used in this study is already a fairly localist representation, as each '1' unit is fairly selectively on for its own class and mostly off for the other classes. It will be very interesting to see the results for hidden layers in deep networks operating on natural images.
Please give your equation for selectivity. On Page 2 it is stated "We use the word 'selectivity' as a quantitative measure of the difference between activations for the two categories, A and not-A, where A is the class a neuron is selective for (and not-A being all other classes)." However you state that neurons were counted as a local code if the selectivity was above .05. A difference between activations for the two categories of .05 does not seem very selective, so I'm thinking you used something other than the mathematical difference (one possibility is sketched after this review).
What is the selectivity of units in the input codewords? With no perturbation, and S_x=.2, w_R=50, w_P=50, the units in the prototype blocks have a high selectivity, responding with 1 for all patterns in their class and with 0 for 8/9 of the patterns in the other classes. Could this explain the much higher selectivity for this case in the hidden units? I would like to see the selectivity of the input units for each of the plots/curves. This would be especially interesting for Figure 5.
It is stated that LCs emerge with longer training and that ReLU neurons may produce more LCs because they train quicker and all experiments were stopped at 45,000 epochs. Why not investigate this by changing learning rates for either ReLU or sigmoidal units to more closely match their training speed? It would be interesting to see if the difference is simply due to learning rate, or something deeper about the activation functions.
You found that very few local codes in the HLNs were found when a 1-hot output encoding was used and suggest that this means that emergent local codes are highly unlikely to be found in the penultimate layer of deep networks. If your inputs are a local code (e.g. for low w_R), you found local codes above the layer of local codes but in this result not below it, which might also imply (as you say in the Conclusions) that more local coding neurons may be found in the higher layers (though not the penultimate one as you argue). Could you analyze how the selectivity of a hidden layer changes as a function of the selectivity in the lower and higher layers?
Minor Note -- The Neural Network Design section looks like it still has draft notes in it.
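To make the question about the selectivity equation concrete: the kind of normalized measure I would have expected (purely my assumption, not a formula taken from the paper) is along the lines of

\mathrm{sel}(u) = \frac{\mu_A(u) - \mu_{\neg A}(u)}{\mu_A(u) + \mu_{\neg A}(u)},

where \mu_A(u) and \mu_{\neg A}(u) are the mean activations of unit u over class A and over all other classes. Under such a normalized measure a threshold of 0.05 is easier to interpret than under a raw difference of activations; stating which of these (or some other variant, e.g. the minimum activation over A minus the maximum over not-A) was actually used would resolve the ambiguity.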
iclr_2018_B1CNpYg0-
Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the "long tail" of this distribution requires enormous amounts of data. Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained end-to-end for the downstream task. We show that this improves results against baselines where embeddings are trained on the end task for reading comprehension, recognizing textual entailment and language modeling.
This paper examines ways of producing word embeddings for rare words on demand. The key real-world use case is for domain-specific terms, but here the techniques are demonstrated on rarer words in standard data sets. The strength of this paper is that it both gives a more systematic framework for and builds on existing ideas (character-based models, using dictionary definitions) to implement them as part of a model trained on the end task.
The contribution is clear but not huge. In general, for the scope of the paper, it seems like what is here could fairly easily have been made into a short paper for other conferences that have that category. The basic method easily fits within 3 pages, and while the presentation of the experiments would need to be much briefer, this seems quite possible. More things could have been considered. Some appear in the paper, and there are some fairly natural other ones such as mining some use contexts of a word (such as just from Google snippets) rather than only using textual definitions from WordNet.
The contributions are showing that existing work using character-level models and definitions can be improved by optimizing representation learning in the context of the final task, and the idea of adding a learned linear transformation matrix inside the mean pooling model (p.3; my reading of this model is sketched after these comments). However, it is not made very clear why this matrix is needed or what the qualitative effect of its addition is. The paper is clearly written.
A paper that should be referred to is the (short) paper of Dhingra et al. (2017): A Comparative Study of Word Embeddings for Reading Comprehension https://arxiv.org/pdf/1703.00993.pdf . While it in no way covers the same ground as this paper, it is relevant as follows: this paper assumes a baseline, also described in that paper, of using a fixed vocab and mapping other words to UNK. However, they point out that at least for matching tasks like QA and NLI one can do better by assigning random vectors on the fly to unknown words. That method could also be considered as a possible approach to compare against here.
Other comments:
- The paper suggests a couple of times, including at the end of the 2nd Intro paragraph, that you can't really expect spelling models to perform well in representing the semantics of arbitrary words (which are not morphological derivations, etc.). While this argument has intuitive appeal, it seems to fly in the face of the fact that actually spelling models, including in this paper, seem to do surprisingly well at learning such arbitrary semantics.
- p.2: You use pretrained GloVe vectors that you do not update. My impression is that people have had mixed results, sometimes better, sometimes worse, with updating pretrained vectors or not. Did you try it both ways?
- fn. 1: Perhaps slightly exaggerates the point being made, since people usually also get good results with the GloVe or word2vec model trained on "only" 6 billion words – 2 orders of magnitude less data.
- p.4: When no definition is available, is making e_d(w) a zero vector worse than or about the same as using a trained UNK vector?
- Table 1: The baseline seems reasonable (near enough to the quality of the original Salesforce model from 2016 (66 F1), but well below current best single models of around 76-78 F1). The difference between D1 and D3 does well illustrate that better definition learning is done with backprop from the end objective.
This model shows the rather strong performance of spelling models – at least on this task – which again benefit from training in the context of the end objective.
- Fig 2: It's weird that only the +dict (left) model learns to connect "In" and "where". The point made in the text between "Where" and "overseas" is perfectly reasonable, but it is a mystery why the base model on the right doesn't learn to associate the common words "where" and "in", both commonly expressing a location.
- Table 2: These results are interestingly different. Dict is much more useful than spelling here. I guess that is because of the nature of NLI, but it isn't 100% clear why NLI benefits so much more than QA from definitional knowledge.
- p.7: I was slightly surprised by how small the vocabs (3k and 5k words) said to be optimal for NLI are (and similar remarks hold for SQuAD). My impression is that most papers on NLI use much larger vocabs, no?
- Fig 3: This could really be drawn considerably better: make the dots bigger and their colors more distinct.
- Table 3: The differences here are quite small and perhaps the least compelling, but the same trends hold.
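For reference, my reading of the definition-based embedding with the learned linear transformation (p.3) is roughly the following; the variable names and the use of a fixed base embedding table are my own illustrative choices:

import numpy as np

def definition_embedding(word, definitions, base_emb, W):
    # Mean-pool the base embeddings of the words in the dictionary definition...
    tokens = [t for t in definitions.get(word, []) if t in base_emb]
    if not tokens:
        return np.zeros(W.shape[0])      # no definition available (cf. the p.4 question)
    pooled = np.mean([base_emb[t] for t in tokens], axis=0)
    # ...then apply the learned linear map W, trained by backprop from the end task.
    # The open question raised above is what this W adds qualitatively over the plain mean.
    return W @ pooled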
iclr_2018_HJC2SzZCW
SENSITIVITY AND GENERALIZATION IN NEURAL NETWORKS: AN EMPIRICAL STUDY
In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and generalization through an extensive empirical exploration of two natural metrics of complexity related to sensitivity to input perturbations. Our experiments survey thousands of models with various fully-connected architectures, optimizers, and other hyper-parameters, as well as four different image classification datasets. We find that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the norm of the input-output Jacobian of the network, and that it correlates well with generalization. We further establish that factors associated with poor generalization - such as full-batch training or using random labels - correspond to lower robustness, while factors associated with good generalization - such as data augmentation and ReLU non-linearities - give rise to more robust functions. Finally, we demonstrate how the input-output Jacobian norm can be predictive of generalization at the level of individual test points.
This paper proposes an analysis of the robustness of deep neural networks with respect to data perturbations.
*Quality* The quality of exposition is not satisfactory. Actually, the paper is pretty difficult to evaluate at the present stage and it needs a drastic change in the writing style.
*Clarity* The paper is not clear and highly unstructured.
*Originality* The originality is limited as regards Section 3: the proposed metrics are quite standard tools from differential geometry (the Jacobian-norm metric, as I read it from the abstract, is sketched after this review). Also, the idea of taking into account the data manifold is not brand new, since it was already proposed in "Universal Adversarial Perturbations" at CVPR 2017.
*Significance* Due to some flaws in the experimental settings, the relevance of the presented results is very limited. First, the authors essentially exploit a customized architecture, which has been broadly fine-tuned regarding hyper-parameters, gating functions and optimizers. Why not use well-established architectures (such as DenseNets, ResNets, VGG, AlexNet)? Moreover, although having a complete portrait of the fine-tuning process is appreciable, it compromises the clarity of the figures, which are pretty hard to interpret and absolutely not self-explanatory: it is probably better to only consider the best configuration as opposed to all the possible ones. Second, the authors assume that circular interpolation is a viable way to traverse the data manifold. The reviewer believes that this is an over-simplistic assumption. In fact, it is not guaranteed a priori that such trajectories are geodesic curves, so, a priori, it is not clear why this could be a sound technique to explore the data manifold.
CONS:
The paper is difficult to read and needs to be thoroughly re-organized. The problem is not stated in a clear manner, and the paper's contribution is not outlined.
The proposed architectures should be explained in detail. The results of the sensitivity analysis should be discussed in detail.
The authors should explain the approach of traversing the data manifold with ellipses (although the reviewer believes that such an approach needs to be replaced with something more principled). Figures and results are not clear.
The authors are kindly asked to shape their paper to match the suggested format of 8 pages + 1 of references (or similar). The work is definitely too long considering its quality. Additional plots and discussion can be moved to an appendix.
Despite the additional explanation in Footnote 6, the graphs are not clear. The authors should probably avoid presenting the result for each possible configuration of the hyper-parameters, gatings and optimizers, and just choose the best setting.
Apart from the customized architecture, the authors should have considered established deep nets, such as DenseNets, ResNets, VGG, AlexNet.
The idea of considering the data manifold within the measurement of complexity is a nice claim, which unfortunately is paired with an unconvincing experimental analysis. Why should ellipses be a proper way to explore the data manifold? In general, circular interpolation is not guaranteed to yield geodesic curves which lie on the data manifold.
Minor Comments:
Sentence to rephrase: "We study common in the machine learning community ways to ..."
Please put each footnote on the page on which it is referenced.
The reference to ReLU is trivially wrong and needs to be changed to [Nair & Hinton, ICML 2010].
**UPDATED EVALUATION AFTER AUTHORS' REBUTTAL**
We appreciated the effort in providing specific responses and we also inspected the updated version of the paper. Unfortunately, despite the authors' effort, the reviewer deems that the conceptual issues that have been highlighted are still present in the paper, which, therefore, is not ready for acceptance yet.
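For reference, my reading of the first sensitivity metric from the abstract (this is my formalization, not a formula quoted from the paper) is the Frobenius norm of the input-output Jacobian,

S(x) = \big\lVert J_f(x) \big\rVert_F = \Big( \sum_{i,j} \big( \partial f_i(x) / \partial x_j \big)^2 \Big)^{1/2},

evaluated at points x on or near the training data. The manifold concern raised above matters precisely here: whether S(x) is probed along genuine on-manifold directions or along arbitrary circular/elliptic trajectories changes what the metric actually tells us about robustness.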
iclr_2018_BkeC_J-R-
Reinforcement learning methods have recently achieved impressive results on a wide range of control problems. However, especially with complex inputs, they still require an extensive amount of training data in order to converge to a meaningful solution. This limitation largely prohibits their usage for complex input spaces such as video signals, and it is still impossible to use them for a number of complex problems in real-world environments, including many of those for video-based control. Supervised learning, on the contrary, is capable of learning on a relatively small number of samples; however, it does not take into account reward-based control policies and is not capable of providing independent control policies. In this article we propose a model-free control method, which uses a combination of reinforcement and supervised learning for autonomous control and paves the way towards policy-based control in real-world environments. We use the SpeedDreams/TORCS video game to demonstrate that our approach requires far fewer samples (hundreds of thousands against millions or tens of millions) compared to the state-of-the-art reinforcement learning techniques on similar data, and at the same time outperforms both supervised and reinforcement learning approaches in terms of quality. Additionally, we demonstrate the applicability of the method to MuJoCo control problems.
This paper proposes leveraging labelled controlled data to accelerate reinforcement-based learning of a control policy. It provides two main contributions: pre-training the policy network of a DDPG agent in a supervised manner so that it begins in a reasonable state-action distribution, and regularizing the Q-updates of the Q-network to be biased towards existing actions. The authors use the TORCS environment to demonstrate the performance of their method both in final cumulative return of the policy and speed of learning. This paper is easy to understand but has a couple of shortcomings and some fatal (but reparable) flaws.
1) When using RL please try to standardize your notation to that used by the community; it makes things much easier to read. I would strongly suggest avoiding your notation a(x|\Theta) and using \pi(x) (subscripting theta or making it conditional is somewhat less important). Your a(.) function seems to be the policy here, which is invariably denoted \pi in the RL literature. There has been a recent effort to clean up RL notation which is presented here: https://sites.ualberta.ca/~szepesva/papers/RLAlgsInMDPs.pdf. You have no obligation to use this notation but it does make reading of your paper much easier on others in the community. This is more of a shortcoming than a fundamental issue.
2) More fatally, you have failed to compare your algorithm's performance against baseline implementations of similar algorithms. It is almost trivial to run DDPG on TORCS using the OpenAI baselines package [https://github.com/openai/baselines]. I would have loved, for example, to see the effects of simply pre-training the DDPG actor on supervised data, vs. adding your mixture loss on the critic (a sketch of these two ingredients, as I understand them, is given at the end of this review). Using the baselines would have (maybe) made a very compelling graph showing DDPG, DDPG + actor pre-training, and then your complete method.
3) And finally, perhaps complementary to point 2), you really need to provide examples on more than one environment. Each of these simulated environments has its own pathologies linked to determinism, reward structure, and other environment particularities. Almost every algorithm I've seen published will often beat baselines on one environment and then fail to improve or even be worse on others, so it is important to at least run on a series of these. MuJoCo + AI Gym should make this really easy to do (for reference, I have no relationship with OpenAI). I would run at least cartpole (which is a very well understood control task), and then perhaps reacher, swimmer, half-cheetah, etc., using a known controller as your behavior policy (behavior policy is a good term for your data-generating policy).
4) In terms of state of the art you are very close to Todd Hester et al.'s paper on imitation learning, and although you cite it, you should contrast your approach more clearly with the one in that paper. Please also have a look at some more recent work by Matej Vecerik, Todd Hester & Jon Scholz: 'Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards' for an approach that is pretty similar to yours.
Overall, I think your intuitions and ideas are good, but the paper does not do a good enough job justifying empirically that your approach provides any advantages over existing methods.
The idea of pre-training the policy net has been tried before (although I can't find a published reference) and in my experience will help on certain problems, and hinder on others, primarily because the policy network is already 'overfit' somewhat to the expert, and may have a hard time moving to a more optimal space. Because of this experience I would need more supporting evidence that your method actually generalizes to more than one RL environment.
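For concreteness, here is a minimal sketch of the two ingredients as I understand them (and of the ablation suggested in point 2); the loss forms, the weighting lam, and the function signatures are my own assumptions, not the authors' implementation:

# Ingredient 1 (pre-training): behaviour-clone the DDPG actor on the labelled data,
#   L_pre(theta) = E_(s,a)~D [ || pi_theta(s) - a ||^2 ].
# Ingredient 2 (RL phase): standard DDPG critic update plus a term biasing Q
# towards the demonstrated ("existing") actions.
def critic_loss(q, q_target, pi_target, batch, gamma=0.99, lam=0.1):
    s, a, r, s_next, a_demo = batch          # a_demo: action taken by the behavior policy in s
    td_target = r + gamma * q_target(s_next, pi_target(s_next))
    td_loss = ((q(s, a) - td_target) ** 2).mean()
    # One possible reading of "regularizing Q-updates towards existing actions":
    demo_loss = ((q(s, a_demo) - td_target) ** 2).mean()
    return td_loss + lam * demo_loss

The suggested comparison would then be: plain DDPG, DDPG with only the pre-trained actor, and the full method with the mixture term, ideally on several environments.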
iclr_2018_SyMvJrdaW
DECOUPLING THE LAYERS IN RESIDUAL NETWORKS We propose a Warped Residual Network (WarpNet) using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. We apply a perturbation theory on residual networks and decouple the interactions between residual units. The resulting warp operator is a first order approximation of the output over multiple layers. The first order perturbation theory exhibits properties such as binomial path lengths and exponential gradient scaling found experimentally by Veit et al. (2016). We demonstrate through an extensive performance study that the proposed network achieves comparable predictive performance to the original residual network with the same number of parameters, while achieving a significant speed-up on the total training time. As WarpNet performs model parallelism in residual network training in which weights are distributed over different GPUs, it offers speed-up and capability to train larger networks compared to original residual networks.
The main contribution of this paper is a particular Taylor expansion of the outputs of a ResNet which is shown to be exact at almost all points in the input space. This expression is used to develop a new layer called a “warp layer” which essentially tries to compute several layers of the residual network using the Taylor expansion expression — however in this expression, things can be done in parallel, and interestingly, the authors show that the gradients also decouple when the (ResNet) model is close to a local minimum in a certain sense, which may motivate the decoupling of layers to begin with. Finally the authors stack these warp layers to create a “warped resnet” which they show does about as well as an ordinary ResNet but has better parallelization properties. To me the analytical parts of the paper are the most interesting, particularly in showing how the gradients approximately decouple. However there are several weaknesses to the paper (or maybe just things I didn’t understand). First, a major part of the paper tries to make the case that there is a symmetry breaking property of the proposed model, which I am afraid I simply was not able to follow. Some of the notation is confusing here — for example, presumably the rotations refer to image level rotations rather than literally multiplying the inputs by an orthogonal matrix, which the notation suggests to be the case. It is also never precisely spelled out what the final theoretical guarantee is (preferably the authors would do this in the form of a proposition or theorem). Throughout, the authors write out equations as if the weights in all layers are equal, but this is confusing even if the authors say that this is what they are doing, since their explanation is not very clear. The confusion is particularly acute in places where derivatives are taken, because the derivatives continue to be taken as if the weights were untied, but then written as if they happened to be the same. Finally the experimental results are okay but perhaps a bit preliminary. I have a few recommendations here: * It would be stronger to evaluate results on a larger dataset like ILSVRC. * The relative speed-up of WarpNet compared to ResNet needs to be better explained — the authors break the computation of the WarpNet onto two GPUs, but it’s not clear if they do this for the (vanilla) ResNet as well. In batch mode, the easiest way to parallelize is to have each GPU evaluate half the batch. Even in a streaming mode where images need to be evaluated one by one, there are ways to pipeline execution of the residual blocks, and I do not see any discussion of these alternatives in the paper. * In the experimental results, K is set to be 2, and the authors only mention in passing that they have tried larger K in the conclusion. It would be good to have a more thorough experimental evaluation of the trade-offs of setting K to be higher values. A few remaining questions for the authors: * There is a parallel submission (presumably by different authors called “Residual Connections Encourage Iterative Inference”) which contains some related insights. I wonder what are the differences between the two Taylor expansions, and whether the insights of this paper could be used to help the other paper and vice versa? * On implementation - the authors mention using Tensorflow’s auto-differentiation. My question here is — are gradients being re-used intelligently as suggested in Section 3.1? 
* I notice that the analysis about the vanishing Hessian could be applied to most of the popular neural network architectures available now. How much of the ideas offered in this paper would then generalize to non-resnet settings?
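For readers unfamiliar with the expansion being discussed, the toy check below shows the kind of first-order decoupling a warp operator across two residual units could rely on. This is my own reconstruction of the idea using arbitrary tanh residual branches, not the authors' operator or code.

```python
# First-order decoupling of two stacked residual units (my reconstruction):
#   h1 = x + F1(x),  h2 = h1 + F2(h1),
#   F2(x + F1(x)) ~= F2(x) + J_F2(x) @ F1(x)   (first-order Taylor),
# so h2 ~= x + F1(x) + F2(x) + J_F2(x) @ F1(x), where the first three terms
# depend on x alone and can be evaluated in parallel.
import numpy as np

rng = np.random.default_rng(0)
d = 5
W1 = 0.1 * rng.standard_normal((d, d))
W2 = 0.1 * rng.standard_normal((d, d))
x = rng.standard_normal(d)

F1 = lambda v: np.tanh(W1 @ v)
F2 = lambda v: np.tanh(W2 @ v)
def J_F2(v):                          # Jacobian of tanh(W2 v): diag(1 - tanh^2) W2
    return np.diag(1.0 - np.tanh(W2 @ v) ** 2) @ W2

h2_exact = (x + F1(x)) + F2(x + F1(x))
h2_warp = x + F1(x) + F2(x) + J_F2(x) @ F1(x)
print(np.max(np.abs(h2_exact - h2_warp)))   # small whenever F1(x) is small
```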
iclr_2018_ByCPHrgCW
When deep learning is applied to sensitive data sets, many privacy-related implementation issues arise. These issues are especially evident in the healthcare, finance, law and government industries. Homomorphic encryption could allow a server to make inferences on inputs encrypted by a client, but to our best knowledge, there has been no complete implementation of common deep learning operations, for arbitrary model depths, using homomorphic encryption. This paper demonstrates a novel approach, efficiently implementing many deep learning functions with bootstrapped homomorphic encryption. As part of our implementation, we demonstrate Single and Multi-Layer Neural Networks, for the Wisconsin Breast Cancer dataset, as well as a Convolutional Neural Network for MNIST. Our results give promising directions for privacy-preserving representation learning, and the return of data control to users.
The paper presents a means of evaluating a neural network securely using homomorphic encryption. A neural network is already trained, and its weights are public. The network is to be evaluated over a private input, so that only the final outcome of the computation, and nothing but that, is finally learned. The authors take a binary-circuit approach: they represent numbers via a fixed-point binary representation, and construct circuits of secure adders and multipliers, based on homomorphic encryption as a building block for secure gates. This allows them to perform the vector products needed per layer; two's complement representation also allows for an "easy" implementation of the ReLU activation function, by "checking" (multiplying by) the complement of the sign bit. The fact that multiplication often involves public weights is used to speed up computations, wherever appropriate. A rudimentary experimental evaluation with small networks is provided. All of this is somewhat straightforward; a penalty is paid by representing numbers via fixed-point arithmetic, which is used to deal with ReLU mostly. This is somewhat odd: it is not clear why, e.g., garbled circuits were not used for something like this, as they would have been considerably faster than FHE. There is also work in this area that the authors do not cite or contrast with, bringing the novelty into question; please see the following papers and references therein:
Gilad-Bachrach, R., Dowlin, N., Laine, K., Lauter, K., Naehrig, M., and Wernsing, J. CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy. In Proceedings of The 33rd International Conference on Machine Learning (2016), pp. 201-210.
Mohassel, P., and Zhang, Y. SecureML: A system for scalable privacy-preserving machine learning.
Shokri, R., and Shmatikov, V. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (2015), ACM, pp. 1310-1321.
The first paper is the most related, also using homomorphic encryption, and seems to cover a superset of the functionalities presented here (more activation functions, a more extensive analysis, and faster decryption times). The second paper uses arithmetic circuits rather than HE, but actually implements training an entire neural network securely. Minor details: The problem scenario states that the model/weights is private, but later on it ceases to be so (the weights are not encrypted). "Both deep learning and FHE are relatively recent paradigms": deep learning is certainly not recent, while Gentry's paper is now 7 years old. "In theory, this system alone could be used to compute anything securely." This is informal and incorrect. Can it solve the halting problem? "However in practice the operations were incredibly slow, taking up to 30 minutes in some cases." It is unclear what operations are referred to here.
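For concreteness, the sign-bit ReLU trick described above can be illustrated on plaintext fixed-point values; the sketch below only shows the arithmetic the circuit would compute, not the homomorphic evaluation, and the word length is an arbitrary choice.

```python
# Plaintext illustration of ReLU via the complement of the sign bit in an
# n-bit two's-complement encoding (word length chosen arbitrarily). The
# homomorphic part is not reproduced here; this only shows the arithmetic.
N = 16

def encode(v):                        # integer value -> n-bit two's complement
    return v & ((1 << N) - 1)

def decode(bits):                     # n-bit two's complement -> integer value
    return bits - (1 << N) if bits & (1 << (N - 1)) else bits

def relu_bits(bits):
    sign = (bits >> (N - 1)) & 1      # MSB is 1 for negative numbers
    return bits * (1 - sign)          # multiply by the complement of the sign bit

for v in (-300, -1, 0, 7, 1234):
    assert decode(relu_bits(encode(v))) == max(v, 0)
```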
iclr_2018_S1EzRgb0W
Neural networks make mistakes. The reason why a mistake is made often remains a mystery. It would be useful to have a method that can give an explanation that is intuitive to a user as to why an image is misclassified. In this paper we develop a method for explaining the mistakes of an image classification model by visually showing what must be added to an image such that it is correctly classified. Our work combines the fields of adversarial examples, generative modeling and a correction technique based on difference target propagation to create a technique that generates explanations of why an image is misclassified. In this paper we explain our method and demonstrate it on MNIST and CelebA. This approach could aid in demystifying neural networks for a user.
In this paper, the authors aim to better understand the classifications made by neural networks. The authors explore the latent space of a variational autoencoder and consider perturbations of the latent space in order to obtain the correct classification. They evaluate their method on the CelebA and MNIST datasets.
Pros:
1) The paper explores an alternative methodology that uses perturbations of the latent space to better understand neural networks.
2) It takes inspiration from adversarial examples and uses the explicit classifier loss to better perturb the $z$ in the latent space.
3) The method is quite simple and captures the essence of the problem well.
Cons:
The main drawback of the paper is that it claims to explain the workings of neural networks; however, what the authors actually end up doing is perturbing the encoded latent space. This would evidently not explain why a deep network generates misclassifications; for instance, understanding the failure modes of ResNet or DenseNet cannot be obtained through this method. Other drawbacks include:
1) They do not show how their method would perform against standard adversarial attack techniques, since by explaining a neural network they should be able to guard against attacks, or at least explain why the attacks work well.
2) The paper reports results on 2 datasets, on one of which it does not perform well and gets stuck in a local minimum, thereby implying that it is not able to capture the diversity in the data well.
3) The authors provide a limited evaluation on a few attributes of CelebA. A more extensive evaluation at larger scale, with more attributes, is not performed.
4) The authors also claim that the added parts should be interpretable and visible. However, the perturbations of the latent space would yield a small $\epsilon$ variation in the image, and it need not actually explain why the modification yields a correct classification, in the same way that an imperceptible adversarial attack yields a misclassification. Therefore there is no guarantee that the added parts would be interpretable. A more reasonable claim would be that the latent transformations that yield correct classifications are projected into the original image space; some of these yield interpretations that are semantically meaningful and some do not.
5) Correcting misclassifications does not seem to equate with explaining the neural network, but rather only suggests where it makes mistakes. That is not equal to an explanation of how it is making a classification decision. That would rather be obtained by using the same input and perturbing the weights of the classifier network.
In conclusion, the paper in its current form provides a direction in terms of using latent-space exploration to understand classification errors and to correct them via perturbations of the latent space. However, these results are not conclusive yet, and actually verifying this would need a more thorough evaluation.
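As I understand the procedure, it amounts to adversarial-style gradient steps on the latent code against the classifier loss for the correct label, followed by decoding. The toy sketch below uses untrained stand-in networks and arbitrary sizes, so it only illustrates the mechanics, not the authors' models.

```python
# Latent-space perturbation sketch (toy, untrained stand-in networks): nudge
# the encoding z of a misclassified input so that the decoded image is
# classified as the correct label, then inspect what was "added".
import torch
import torch.nn as nn

latent_dim, img_dim, n_classes = 8, 784, 10
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, img_dim), nn.Sigmoid())
classifier = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(),
                           nn.Linear(128, n_classes))

z0 = torch.randn(1, latent_dim)          # encoding of the misclassified input
target = torch.tensor([3])               # the correct class
z = z0.clone().requires_grad_(True)
opt = torch.optim.SGD([z], lr=0.1)

for _ in range(200):                     # adversarial-style steps in latent space
    loss = nn.functional.cross_entropy(classifier(decoder(z)), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    delta = decoder(z) - decoder(z0)     # the "added part" shown to the user
```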
iclr_2018_ByJDAIe0b
Episodic memory is a psychology term which refers to the ability to recall specific events from the past. We suggest one advantage of this particular type of memory is the ability to easily assign credit to a specific state when remembered information is found to be useful. Inspired by this idea, and the increasing popularity of external memory mechanisms to handle long-term dependencies in deep learning systems, we propose a novel algorithm which uses a reservoir sampling procedure to maintain an external memory consisting of a fixed number of past states. The algorithm allows a deep reinforcement learning agent to learn online to preferentially remember those states which are found to be useful to recall later on. Critically, this method allows for efficient online computation of gradient estimates with respect to the write process of the external memory. Thus, unlike most prior mechanisms for external memory, it is feasible to use in an online reinforcement learning setting.

Much of reinforcement learning (RL) theory is based on the assumption that the environment has the Markov property, meaning that future states are independent of past states given the present state. This implies the agent has all the information it needs to make an optimal decision at each time and therefore has no need to remember the past. This is, however, not realistic in general; realistic problems often require significant information from the past to make an informed decision in the present, and there is often no obvious way to incorporate the relevant information into an expanded present state. It is thus desirable to establish techniques for learning a representation of the relevant details of the past (e.g. a memory, or learned state) to facilitate decision making in the present.

A popular approach to integrating information from the past into present decision making is to use some variant of a recurrent neural network, possibly coupled to some form of external memory, trained with backpropagation through time. This can work well for many tasks, but generally requires backpropagating many steps into the past, which is not practical in an online RL setting. In purely recurrent architectures one way to make online training practical is to simply truncate gradients after a fixed number of steps. In architectures which include some form of external memory, however, it is not clear that this is a viable option, as the intent of the external memory is generally to capture long-term dependencies which would be difficult for a recurrent architecture alone to handle, especially when trained with truncated gradients. Truncating gradients to the external memory would likely greatly hinder this capability.

In this work we explore a method for adding external memory to a reinforcement learning architecture which can be efficiently trained online. We liken our method to the idea of episodic memory from psychology. In this approach the information stored in memory is constrained to consist of a finite set of past states experienced by the agent. In this work, by states we mean observations explicitly provided by the environment. In general, states could be more abstract, such as the internal state of an RNN or predictions generated by something like the Horde architecture of Sutton et al. (2011). By storing states explicitly we enforce that the information recorded also provides the context in which it was recorded.
We can therefore assign credit to the recorded state without explicitly backpropagating through time between when the information proves useful and when it was recorded. If a recorded state is found to be useful we train the agent to preferentially remember similar states in the future.
The paper proposes a modified approach to RL, where an additional "episodic memory" is kept by the agent. What this means is that the agent has a reservoir of n "states" in which states encountered in the past can be stored. There are then of course two main questions to address: (i) which states should be stored and how, and (ii) how to make use of the episodic memory when deciding what action to take. For the latter question, the authors propose using a "query network" that, based on the current state, pulls out one state from the memory according to a certain probability distribution. This network has many tunable parameters, but the main point is that the policy can then condition on this state drawn from the memory. Intuitively, one can see why this may be advantageous as one gets some information from the past. (As an aside, the authors of course acknowledge that recurrent neural networks have been used for this purpose with varying degrees of success.) The first question had quite an interesting and cute answer. There is a (non-negative) importance weight associated with each state, and a collection of states has weight that is simply the product of the weights. The authors claim (with some degree of mathematical backing) that sampling a memory of n states where the distribution over the subsets of past states of size n is proportional to the product of the weights is desired. And they give a cute online algorithm for this purpose. However, the weights themselves are given by a network and so weights may change (even for states that have been observed in the past). There is no easy way to fix this, and for the purpose of sampling the paper simply treats the weights as immutable. There is also a toy example created to show that this approach works well compared to the RNN-based approaches.
Positives:
- An interesting new idea that has potential to be useful in RL
- An elegant algorithm to solve at least part of the problem properly (the rest of course relies on standard SGD methods to train the various networks)
Negatives:
- The math is fudged around quite a bit with approximations that are not always justified
- While overall the writing is clear, in some places I feel it could be improved. I had a very hard time understanding the set-up of the problem in Figure 2. [In general, I also recommend against using figure captions to describe the setup.]
- The experiments only demonstrate the superiority of this method on an example chosen artificially to work well with this approach.
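For readers who want a concrete reference point for online weighted sampling into a fixed-size memory, below is the standard Efraimidis-Spirakis weighted reservoir sampler. Note that this is not necessarily the paper's algorithm: the paper targets subset probabilities proportional to the product of the kept weights, which is a different (though related) scheme.

```python
# Standard Efraimidis-Spirakis weighted reservoir sampling (A-Res), given only
# as a reference point; the paper's own product-weighted scheme is different.
import heapq
import random

def weighted_reservoir(stream, k):
    """stream yields (item, weight > 0); keeps k items online."""
    heap = []                                   # min-heap of (key, item)
    for item, w in stream:
        key = random.random() ** (1.0 / w)      # larger weight -> larger key
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

# Example: states arrive online with (learned) importance weights.
stream = ((f"s{t}", 0.1 + t % 5) for t in range(10000))
memory = weighted_reservoir(stream, k=50)
```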
iclr_2018_SyrGJYlRZ
Hyperparameter tuning is one of the most time-consuming workloads in deep learning. State-of-the-art optimizers, such as AdaGrad, RMSProp and Adam, reduce this labor by adaptively tuning an individual learning rate for each variable. Recently researchers have shown renewed interest in simpler methods like momentum SGD as they may yield better results. Motivated by this trend, we ask: can simple adaptive methods, based on SGD perform as well or better? We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam. We then analyze its robustness to learning rate misspecification and objective curvature variation. Based on these insights, we design YELLOWFIN, an automatic tuner for momentum and learning rate in SGD. YELLOWFIN optionally uses a negative-feedback loop to compensate for the momentum dynamics in asynchronous settings on the fly. We empirically show YELLOWFIN can converge in fewer iterations than Adam on ResNets and LSTMs for image recognition, language modeling and constituency parsing, with a speedup of up to 3.28x in synchronous and up to 2.69x in asynchronous settings.
This paper proposes a method to automatically tune the momentum parameter in momentum SGD, which achieves better results and faster convergence than the state-of-the-art Adam algorithm. Although the results are promising, I found the presentation of this paper almost inaccessible. First, though a minor point: where does the name *YellowFin* come from? As for the presentation, the motivation in the introduction is fine, but the following section about the momentum operator is hard to follow. There is a lot of undefined notation. For example, what does the *convergence rate* mean (what is the measurement for convergence)? And is the *optimal accelerated rate* the same as the *convergence rate* mentioned above? Also, what do you mean by *all directions* in the sentence below eq.2? Then the paper talks about robustness properties of the momentum operator. But: first, I am not sure why the derivative of f(x) is defined as in eq.3; how is that related to the original definition of the derivative? In the following paragraph, what is *contraction*? Does it have anything to do with the paper, as I didn't see it in the remaining text? Lemma 2 seems to use the spectral radius of the momentum operator as the *robustness*. But how can it describe the robustness? More details are needed to understand this. When it comes to Section 3, it seems to me that the authors try to use a local quadratic approximation of the original function f(x), and use the results in the last section to find the optimal momentum parameter. I got confused in this section because eq.9 defines f(x) as a quadratic function. Is this f(x) the original function (non-quadratic) or just the local quadratic approximation? If it is the local quadratic approximation, how is it related to the original function? It seems to me that the authors try to say that if h and C are calculated from the original function, then this f(x) is a local quadratic approximation? If what I think is correct, I think it would be important to show this. Also, the objective function in the SingleStep algorithm seems to come from eq.13, but I failed to get the exact reasoning. Overall, I think this is an interesting paper, but the presentation is too fuzzy for it to be properly evaluated.
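For what it is worth, my best guess for what "convergence rate" and "optimal accelerated rate" refer to is the classical heavy-ball analysis on a quadratic, where the textbook (Polyak) choice of learning rate and momentum yields an asymptotic rate of sqrt(momentum). The sketch below only illustrates that textbook setting with an arbitrary quadratic; it is not the paper's tuner.

```python
# Heavy-ball momentum SGD on a toy quadratic, with the textbook Polyak-optimal
# learning rate and momentum for curvatures in [h_min, h_max]. Illustrative
# only; the function and constants are arbitrary and this is not YellowFin.
import numpy as np

h = np.array([1.0, 100.0])                 # curvatures (condition number 100)
h_min, h_max = h.min(), h.max()
lr = (2.0 / (np.sqrt(h_max) + np.sqrt(h_min))) ** 2
mom = ((np.sqrt(h_max) - np.sqrt(h_min)) /
       (np.sqrt(h_max) + np.sqrt(h_min))) ** 2     # asymptotic rate ~ sqrt(mom)

x = np.array([1.0, 1.0])
x_prev = x.copy()
for t in range(200):
    grad = h * x                           # gradient of 0.5 * sum(h_i * x_i^2)
    x, x_prev = x - lr * grad + mom * (x - x_prev), x
print(np.linalg.norm(x))                   # shrinks roughly like sqrt(mom)**t
```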
iclr_2018_ByxLBMZCb
With the increasing interest in deeper understanding of the loss surface of many non-convex deep models, this paper presents a unifying framework to study the local/global optima equivalence of the optimization problems arising from training of such non-convex models. Using the local openness property of the underlying training models, we provide simple sufficient conditions under which any local optimum of the resulting optimization problem is globally optimal. We first completely characterize the local openness of matrix multiplication mapping in its range. Then we use our characterization to: 1) show that every local optimum of two layer linear networks is globally optimal. Unlike many existing results in the literature, our result requires no assumption on the target data matrix Y , and input data matrix X. 2) develop almost complete characterization of the local/global optima equivalence of multi-layer linear neural networks. We provide various counterexamples to show the necessity of each of our assumptions. 3) show global/local optima equivalence of non-linear deep models having certain pyramidal structure. Unlike some existing works, our result requires no assumption on the differentiability of the activation functions and can go beyond "full-rank" cases.
Summary: The paper focuses on the characterization of the landscape of deep neural networks; i.e., when and why local minima are global, what are the conditions for saddle critical points, etc. The paper covers a somewhat wide range of deep nets (from shallow with linear activation to deeper with non-linear activation); it focuses only on feed-forward neural networks. As the authors state, this paper provides a unifying perspective on the subject (it justifies the results of others through this unifying theory, but also provides new results; e.g., there are results that do not depend on assumptions on the target data matrix Y).
Originality: The paper provides similar results to previous work, while removing some of the assumptions made in previous work. In that sense, the originality of the results is weak, but there is definitely some novelty in the methodology used to get to these results. Thus, I would say original.
Importance: The paper deals with the important problem of when and why training algorithms might get to global/local/saddle critical points. While there are no direct connections with generalization properties, characterizing the landscape of neural networks is an important topic for making further steps toward a better understanding of deep learning. It will attract some attention at the conference.
Clarity: The paper is well-written - some parts need improvement, but overall I'm satisfied with the current version.
Comments:
1. If problem (4) is not considered at all in this paper (in its full generality that considers matrix completion and matrix sensing as special cases), then the authors could just start with the model in (5).
2. Remark 1 has a nice example - could this example be shown with Y not being the all-zeros vector?
3. In section 5, the authors make a connection with the work of Ge et al. 2016. They state that the problems in (10)-(11) constitute generalizations of the symmetric matrix completion case considered in Ge et al. 2016. However, in that work, the main difficulty of proving global optimality comes from the randomness of the sampling mask operator (which introduces the notion of incoherence and requires results in expectation). It is not clear, and maybe it is an overstatement, that the results in section 5 generalize that work. If that is the case, could the authors describe this a bit further?
iclr_2018_HyZoi-WRb
DEBIASING EVIDENCE APPROXIMATIONS: ON IMPORTANCE-WEIGHTED AUTOENCODERS AND JACKKNIFE VARIATIONAL INFERENCE The importance-weighted autoencoder (IWAE) approach of Burda et al. (2015) defines a sequence of increasingly tighter bounds on the marginal likelihood of latent variable models. Recently, Cremer et al. (2017) reinterpreted the IWAE bounds as ordinary variational evidence lower bounds (ELBO) applied to increasingly accurate variational distributions. In this work, we provide yet another perspective on the IWAE bounds. We interpret each IWAE bound as a biased estimator of the true marginal likelihood where for the bound defined on K samples we show the bias to be of order O(K^{-1}). In our theoretical analysis of the IWAE objective we derive asymptotic bias and variance expressions. Based on this analysis we develop jackknife variational inference (JVI), a family of bias-reduced estimators reducing the bias to O(K^{-(m+1)}) for any given m < K while retaining computational efficiency. Finally, we demonstrate that JVI leads to improved evidence estimates in variational autoencoders. We also report first results on applying JVI to learning variational autoencoders.
[After author feedback] I think this is an interesting paper and recommend acceptance. My remaining main comments are described in the response to author feedback below.
[Original review] The authors introduce jackknife variational inference (JVI), a method for debiasing Monte Carlo objectives such as the importance-weighted autoencoder. Starting by studying the bias of the IWAE bound for approximating the log-marginal likelihood, the authors propose to make use of debiasing techniques to improve the approximation. For binarized MNIST the authors show improved approximations given the same number of samples from the auxiliary distribution q(z|x). JVI seems to be an interesting extension of, and perspective on, the IWAE bound (and other Monte Carlo objectives). Some questions and comments:
* The Cremer et al. (2017) paper contains some errors when interpreting the IWAE bound as a standard ELBO with a more flexible variational approximation distribution. For example, eq. (1) in their paper does not correspond to an actual distribution; it is not properly normalized. This makes the connection in their section 2.1 unclear. I would suggest citing the following paper instead for this connection and the relation to importance sampling (IS): Naesseth, Linderman, Ranganath, Blei, "Variational Sequential Monte Carlo", 2017.
* Regarding the analysis of the IWAE bound, the paper by Rainforth et al. (2017) mentioned in the comments seems very relevant. Also, because of the strong connection between IWAE and IS detailed in the Naesseth et al. (2017) paper, it is possible to make use of a standard Taylor approximation / the delta method to derive Prop. 1 and Prop. 2, see e.g. Robert & Casella, "Monte Carlo Statistical Methods" or Liu's "Monte Carlo Strategies for Scientific Computing".
* It could be worth mentioning that the JVI objective function is now no longer (I think?) a lower bound on the log-evidence.
* Could the surprising issue (IWAE-learned, JVI-evaluated being better than JVI-learned, JVI-evaluated) in Table 1 be because of different local optima?
* Also, we can easily get unbiased estimates of the evidence p(x) using IS and optimize this objective with respect to the model parameters. The proposal parameters can be optimized to minimize variance; how do you think this compares to the proposed method?
Minor comments:
* p(x) -> p_\theta(x)
* In the last paragraph of section 1 it seems like you claim that the expressiveness of p_\theta(x|z) is a limitation of VAEs. It was a bit unclear to me what was actually a general limitation of maximum likelihood versus the approximation based on VAEs.
* Last paragraph of section 1, "strong bound" -> "tight bound"
* Last paragraph of section 2, citation missing for DVI
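To make the bias-reduction idea concrete, here is a small numerical check on a toy model where log p(x) is known in closed form. It uses the generic first-order jackknife formula, which is my reading of the construction rather than the authors' exact estimator; the model, proposal and sample sizes are arbitrary.

```python
# IWAE evidence estimate vs. a first-order jackknife correction on a toy model
# where the evidence is known exactly: z ~ N(0,1), x|z ~ N(z,1) => p(x) = N(x;0,2).
# The proposal is the prior, so the importance weight is just p(x|z).
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(0)
x, K, n_rep = 1.5, 8, 20000
true_log_px = norm.logpdf(x, loc=0.0, scale=np.sqrt(2.0))

def iwae(logw):                               # log of the average weight
    return logsumexp(logw) - np.log(len(logw))

est_iwae, est_jk = [], []
for _ in range(n_rep):
    z = rng.standard_normal(K)
    logw = norm.logpdf(x, loc=z, scale=1.0)
    l_k = iwae(logw)
    loo = [iwae(np.delete(logw, j)) for j in range(K)]   # leave-one-out estimates
    est_iwae.append(l_k)
    est_jk.append(K * l_k - (K - 1) * np.mean(loo))      # first-order jackknife

print("true log p(x) :", true_log_px)
print("IWAE bias     :", np.mean(est_iwae) - true_log_px)  # negative, order 1/K
print("jackknife bias:", np.mean(est_jk) - true_log_px)    # much closer to zero
```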
iclr_2018_SkBHr1WRW
While existing graph embedding models can generate useful embedding vectors for graph-related tasks, what valuable information can be jointly learned from a graph embedding model is less discussed. In this paper, we consider the possibility of detecting critical structures by a graph embedding model. We propose Ego-CNN to embed graphs, which works in a local-to-global manner to take advantage of CNNs, gradually expanding the detectable local regions on the graph as the network depth increases. Critical structures can be detected if Ego-CNN is combined with a supervised task model. We show that Ego-CNN (1) is competitive with state-of-the-art graph embedding models, (2) can work nicely with CNN visualization techniques to show the detected structures, and (3) is efficient and can incorporate scale-free priors, which commonly occur in social network datasets, to further improve training efficiency.
Dear authors, Thank you for your contribution to ICLR. The problem you are addressing with your work is important. Your paper is well-motivated. Detecting and exploiting "critical structures" in graphs for graph classification is indeed something that is missing in previous work. After the introduction you discuss some related work. While I really appreciate the effort you put into this section (including the figures etc.) there are several inaccuracies in the portrayal of existing methods. Especially the comparison to Patchy-san is somewhat vague. Please make sure that you clearly state the differences between patchy-san and Ego-CNNs. What exactly is it that Patchy cannot achieve that you can. I believe I understood what the advantages of the proposed method are but it took a while to get there. Just one example to show you what I mean; you write: "The reason why the idea of Patchy-San fails to generalize into multiple layers is that its definition of neighborhood, which is based on adjacency matrix, is not static and may not corresponding to local regions in the graph. " It is very difficult to understand what it is that you want to express with the above sentence. Its definition of neighborhood is based on adjacency matrix - what does that mean? A neighborhood is a set of nodes, no? Why is it that their definition of neighborhood might not correspond to local regions? In general, you should try to be more precise and concise when discussing related work. Section 3, the most important section in the paper that describes the proposed Ego-CNN approach, should also be written more clearly. For instance, it would be good if you could define the notion of an "Ego-Convolution layer." You use that term without properly defining it and it is difficult to make sense of the approach without understanding it. Also, you contrast your approach with patchy and write that "Our main idea is to use the egocentric design, i.e. the neighborhood at next layer is defined on the same node." Unfortunately, I find it difficult to understand what this means. In general, this section is very verbose and needs a lot more work. This is at the moment also the crucial shortcoming of the paper. You should spent more time on section 3 and formally and more didactically introduce your approach. In my opinion, without a substantial improvement of this section, the paper should not be accepted. The experiments are standard and compare to numerous existing state of the art methods. The data sets are also rather standard. The one thing I would add to the results are the standard deviations. It is common to report those. Also, in the learning for graph structured data, the variance can be quite high and providing the stddev would at least indicate how significant the improvements are. I also like the visualizations and the discussion of the critical structures found in some of the graphs. Overall, I think this is an interesting paper that has a lot of potential. The problem, however, is that the presentation of the proposed approach is verbose and partially incomprehensible. What exactly is different to existing approaches? What exactly is the formal definition of the method? All of this is not well presented and, in my opinion, requires another round of editing and reviews.
iclr_2018_rkQkBnJAb
IMPROVING GANS USING OPTIMAL TRANSPORT We present Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution. This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients. Experimentally we show OT-GAN to be highly stable when trained with large mini-batches, and we present state-of-the-art results on several popular benchmark problems for image generation.
There have recently been a set of interesting papers on adapting optimal transport to GANs. This makes a lot of sense. The paper makes some very good connections to the state of the art and to those competing approaches. The proposal makes sense from the generative standpoint, and it is clear from the paper that the key contribution is the design of the transport cost. I have two main remarks and questions.
* Regarding the transport cost, the authors say that the Euclidean distance does not work well. Did they try to use normalised vectors with the squared Euclidean distance? I am asking this question because solving the OT problem with the cost defined as in c_eta is equivalent to using a *normalized squared* Euclidean distance in the feature space defined by v_eta (see the check below). If the answer is yes and it did not work, then there is indeed a real contribution to using the DNN. Otherwise, the contribution has to be qualified. In either case, I would have been happy to see numbers for comparison.
* The squared mini-batch energy distance looks very much like a maximum mean discrepancy criterion (see the work of A. Gretton), up to the sign, and also like regularised approaches to MMD optimisation (see the paper of Kim, NIPS'16 and references therein). The MMD is the solution of an optimisation problem which, I suppose, has lots of connections with the dual Wasserstein GAN. The authors should elaborate on the relationships, and perhaps discuss regularisation in this context.
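To spell out the identity behind the first remark: a cosine-similarity cost in a feature space equals half the squared Euclidean distance between the L2-normalised feature vectors. A short check with arbitrary random vectors (the learned map v_eta is not modelled here):

```python
# Check that 1 - cos(a, b) = 0.5 * || a/||a|| - b/||b|| ||^2 for arbitrary vectors.
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(128), rng.standard_normal(128)
an, bn = a / np.linalg.norm(a), b / np.linalg.norm(b)
lhs = 1.0 - an @ bn                         # cosine-style transport cost
rhs = 0.5 * np.linalg.norm(an - bn) ** 2    # normalised squared Euclidean distance
assert np.isclose(lhs, rhs)
```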
iclr_2018_SJlhPMWAW
Deep learning on graphs has become a popular research topic with many applications. However, past work has concentrated on learning graph embedding tasks only, which is in contrast with advances in generative models for images and text. Is it possible to transfer this progress to the domain of graphs? We propose to sidestep hurdles associated with linearization of such discrete structures by having a decoder output a probabilistic fully-connected graph of a predefined maximum size directly at once. Our method is formulated as a variational autoencoder. We evaluate on the challenging task of conditional molecule generation.
This paper studies the problem of learning to generate graphs using deep learning methods. The main challenges of generating graphs as opposed to text or images are said to be the following: (a) Graphs are discrete structures, and incrementally constructing them would lead to non-differentiability (I don't agree with this; see below) (b) It's not clear how to linearize the construction of graphs due to their symmetries. Based on this motivation, the paper decides to generate a graph in "one shot", directly outputting node and edge existence probabilities, and node attribute vectors. A graph is represented by a soft adjacency matrix A (entries are probability of existence of an edge), an edge attribute tensor E (entries are probability of each edge being one of d_e discrete types), and a node attribute matrix F, which has a node vector for each potential node. A cross entropy loss is developed to measure the loss between generated A, E, and F and corresponding targets. The main issue with training models in this formulation is the alignment of the generated graph to the ground truth graph. To handle this, the paper proposes to use a simple graph matching algorithm (Max Pooling Matching) to align nodes and edges. A downside to the algorithm is that it has complexity O(k^4) for graphs with k nodes, but the authors argue that this is not a problem when generating small graphs. Once the best correspondence is found, it is treated as constant and gradients are propagated appropriately. Experimentally, generative models of chemical graphs are trained on two datasets. Qualitative results and ELBO values are reported as the dimensionality of the embeddings is varied. No baseline results are presented. A further small set of experiments evaluates the quality of the matching algorithm on a synthetic setup. Strengths: - Generating graphs is an interesting problem, and the proposed approach seems like an easy-to-implement, mostly reasonable way of approaching the problem. - The exposition is clear (although a bit more detail on MPM matching would be appreciated) However, there are some significant weaknesses. First, the motivation for one-shot graph construction is not very strong: - I don't understand why the non-differentiability argued in (a) above is an issue. If training uses a maximum likelihood objective, then we should be able to decompose the generation of a graph into a sequence of decisions and maximize the sum of the logprobs of the conditionals. People do this all the time with sequence data and non-differentiability is not an issue. - I also don't agree that the one shot graph construction sidesteps the issue of how to linearize the construction of a graph. Even after doing so, the authors need to solve a matching problem to resolve the alignment issue. I see this as equivalent to choosing an order in which to linearize the order of nodes and edges in the graph. Second, the experiments are quite weak. No baselines are presented to back up the claims motivating the formulation. I don't know how to interpret whether the results are good or bad. I would have at least liked to see a comparison to a method that generated SMILES format in an autoregressive manner (similar to previous work on chemical graph generation), and would ideally have liked to see an attempt at solving the alignment problem within an autoregressive formulation (e.g., by greedily constructing the alignment as the graph was generated). 
If one is willing to spend O(k^4) computation to solve the alignment problem, then there seem to be many possibilities that could be easily applied to the autoregressive formulation. The authors might also be interested in a concurrent ICLR submission that approaches the problem from an autoregressive angle (https://openreview.net/pdf?id=Hy1d-ebAb). Finally, I would have expected to see a discussion and comparison to "Learning Graphical State Transitions" (Johnson, 2017). Please also don't make statements like "To the best of our knowledge, we are the first to address graph generation using deep learning." This is very clearly not true. Even disregarding Johnson (2017), which the authors claim to be unaware of, I would consider approaches that generate SMILES format (like Gomez-Bombarelli et al) to be doing graph generation using deep learning. Overall, the paper is about an interesting subject, but in my opinion the execution isn't strong enough to warrant publication at this point.
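To illustrate the alignment issue discussed above: the decoder emits edge probabilities for a fixed maximum number of nodes, and the reconstruction loss only makes sense once a node correspondence has been chosen. The toy sketch below keeps only the adjacency part (the edge-attribute tensor E and node features F are omitted) and uses a plain edge-existence cross-entropy, which is a simplification of the actual objective.

```python
# Toy illustration of why one-shot graph decoding needs node alignment: the
# same predicted soft adjacency matrix scores very differently under different
# permutations of the predicted nodes. (E and F are omitted for brevity.)
import numpy as np

k = 4
A_true = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 0]], dtype=float)   # a path graph on 4 nodes

rng = np.random.default_rng(0)
A_hat = np.clip(A_true + 0.1 * rng.random((k, k)), 1e-6, 1 - 1e-6)  # good prediction

def bce(A_pred, P):                  # cross-entropy after permuting predicted nodes
    A_perm = P @ A_pred @ P.T
    return -np.mean(A_true * np.log(A_perm) + (1 - A_true) * np.log(1 - A_perm))

identity = np.eye(k)
shuffled = identity[[2, 0, 3, 1]]    # some other node correspondence
print(bce(A_hat, identity))          # small: prediction and target are aligned
print(bce(A_hat, shuffled))          # much larger for the very same prediction
```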
iclr_2018_HknbyQbC-
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rate under state-of-the-art defenses compared to other attacks. Our attack has placed the first with 92.76% accuracy on a public MNIST black-box attack challenge (Mądry et al., 2017b).
I thank the authors for the thoughtful response and rebuttal. The authors have substantially updated their manuscript and improved the presentation. Re: Speed. I brought up this point because this was a bulleted item in the Introduction in the earlier version of the manuscript. In the revised manuscript, this bullet point is now removed. I will take this point to be moot. Re: High resolution. The authors point to recent GAN literature that provides some first results with high-resolution GANs, but I do not see quantitative evidence in the high-resolution setting for this paper. (Figure 4 provides qualitative examples from ImageNet but no quantitative assessment.) Because the authors improved the manuscript, I upwardly revised my score to 'Ok but not good enough - rejection'. I am not able to accept this paper because of the latter point. ========================== The authors present an interesting new method for generating adversarial examples. Namely, the authors train a generative adversarial network (GAN) to generate adversarial examples for a target network. The authors demonstrate that the network works well in the semi-white-box and black-box settings. The authors wrote a clear paper with great references and clear descriptions. My primary concern is that this work has limited practical benefit in a realistic setting. Addressing each and every concern is quite important:
1) Speed. The authors suggest that training a GAN provides a speed benefit with respect to other attack techniques. The FGSM method (Goodfellow et al., 2015) is basically one inference operation and one backward operation (a minimal FGSM sketch is given at the end of this review). The GAN is one forward operation. Granted, this results in a small difference in timing, 0.06s versus 0.01s; however, it would seem that avoiding a backward pass is a somewhat small speed gain. Furthermore, I would want to question the practical usage of having an 'even faster' method for generating adversarial examples. What is the reason that we need to run adversarial attacks 'even faster'? I am not aware of any use-cases, but if there are some, the authors should describe the rationales at length in their paper.
2) High spatial resolution images. Previous methods, e.g. FGSM, may work on arbitrarily sized images. At best, GANs generate reasonable images at lower resolutions (e.g. < 128x128). Building GANs that operate beyond moderate spatial resolutions is an open research topic. The best GAN models for generating high-resolution images are difficult to train, and it is not clear if they would work in this setting. Furthermore, images with even higher resolutions, e.g. 512x512, which are quite common in ImageNet, are difficult to synthesize using current techniques.
3) Controlling the amount of distortion. A feature of previous optimization-based methods is that a user may specify the amount of perturbation (epsilon). This is a key feature, if not a requirement, of an adversarial perturbation because a user might want to examine the performance of a given model as a function of epsilon. Performing such an analysis with this model is challenging (i.e. it requires retraining a GAN), and it is not clear if a given image generated by a GAN will always achieve a given epsilon perturbation.
On a more minor note, the authors suggest that generating a *diversity* of adversarial images is of practical import. I do not see the utility of being able to generate a diversity of adversarial images. The authors need to provide more justification for this motivation.
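As promised in point 1, a minimal FGSM sketch, to make the one-forward-plus-one-backward-pass comparison concrete; the classifier is an untrained stand-in and epsilon is arbitrary.

```python
# Minimal FGSM: one forward pass, one backward pass, one signed gradient step.
# The model is an untrained stand-in; epsilon and the input are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28, requires_grad=True)   # input image in [0, 1]
y = torch.tensor([7])                              # true label
eps = 0.1

loss = nn.functional.cross_entropy(model(x), y)    # forward pass
loss.backward()                                    # the single backward pass
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
```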
iclr_2018_r1SnX5xCb
DEEP SENSING: ACTIVE SENSING USING MULTI-DIRECTIONAL RECURRENT NEURAL NETWORKS For every prediction we might wish to make, we must decide what to observe (what source of information) and when to observe it. Because making observations is costly, this decision must trade off the value of information against the cost of observation. Making observations (sensing) should be an active choice. To solve the problem of active sensing we develop a novel deep learning architecture: Deep Sensing. At training time, Deep Sensing learns how to issue predictions at various cost-performance points. To do this, it creates a different presentation at each of a variety of different performance levels, each associated with a particular set of measurement rates (costs). This requires learning how to estimate the value of real measurements vs. inferred measurements, which in turn requires learning how to infer missing (unobserved) measurements. To infer missing measurements, we develop a Multi-directional Recurrent Neural Network (M-RNN). An M-RNN differs from a bi-directional RNN in that it sequentially operates across streams in addition to within streams, and because the timing of inputs into the hidden layers is both lagged and advanced. At runtime, the operator prescribes a performance level or a cost constraint, and Deep Sensing determines what measurements to take and what to infer from those measurements, and then issues predictions. To demonstrate the power of our method, we apply it to two real-world medical datasets with significantly improved performance.
This is a very interesting submission that takes an interesting angle on clinical time series modeling, namely, actively choosing when to measure while simultaneously attempting to impute missing measurements and predict outcomes of interest. The proposed solution formulates everything as a giant learning problem that involves learning (a) an interpolation function that predicts a missing measurement from its past and present, (b) an imputation function that predicts a missing measurement from other variables at the same time step, (c) a prediction function that predicts outcomes of interest, including forecasting future measurements, and (d) an error estimation function that estimates the error of the forecasts in (c). These four pieces are then used in combination with a heuristic to decide when certain variables should be measured. This framework is used with a GRU-RNN architecture and, in experiments with two datasets, outperforms a number of strong baselines. I am inclined toward accepting this paper due to the significance of the problem, the ingenuity of the proposed approach, and the strength of the empirical results. However, I think that there is a lot of room for improvement in the current manuscript, which is difficult to read and fully grasp. This will lessen its impact in the long run, so I encourage the authors to strive to make it clearer. If they succeed in improving it during the review period, I will gladly raise my score. NOTE: please do a thorough editorial pass for the next version -- I found at least one typo in the references (Yu, et al. "Active sensin.")
QUALITY
This is solid research, and I have few complaints about the work itself (most of my feedback will focus on clarity). I will list some strengths (+) and weaknesses (-) below and try to provide actionable feedback:
+ Very important problem that receives limited attention from the community
+ I like the formulation of active sensing as a prediction loss optimization problem
+ The learning problem is pretty intuitive and is well-suited to deep learning architectures since it yields a differentiable (albeit complex) loss function
+ The results speak for themselves -- for adverse event prediction in the MIMIC-III task, DS improves upon the nearest baseline by almost 9 points in AUC! More interestingly, using Deep Sensing to create a "resampled" version of the data set improves the performance of the baselines. It also achieves much more accurate imputation than standard approaches.
- The proposed approach is pretty complex, and it's unclear what the relative contribution of each component is. I think it is incumbent on the authors to do an ablation study where different components are removed to see how performance degrades, if at all. For example, how would the model perform with interpolation but not imputation? Is bidirectional interpolation necessary, or would forward interpolation work sufficiently well (the obvious disadvantage of the bidirectional approach is the need to rerun inference at each new time step)? Is it necessary to use both the actual AND predicted measurements as inputs (what if we instead used actual measurements when available and predicted otherwise)? (A toy illustration of the forward-only vs. bidirectional imputation baselines is given at the end of this review.)
- The experiments are thorough with a nice selection of baselines, but I wonder if perhaps Futoma et al. [1] would be a stronger baseline than Choi, Che, or Lipton. They showed improvements of similar magnitude over baselines for predicting sepsis, and their approach (a differentiable GP-approximating layer) is conceptually simpler and has other benefits.
I think it could be combined with the active sensing framework in this paper.
- The one question this framework appears incapable of answering in a straightforward manner is WHEN the next set of measurements should be made. One could imagine a heuristic in which predictive loss/gain are assessed at different points in the future, but the search space will be huge, particularly if one wants to optimize over measurements at different points, e.g., maybe the optimal strategy is to take roughly hourly vitals but no labs until 12 hours from now. Indeed, it might be impossible to train such a model properly since the sampling times in the available training data are highly biased.
- One thing potentially missing from this paper is a theoretical analysis to understand its behavior and performance. My very superficial analysis is that the prediction loss/gain framework is related to minimizing entropy and that the heuristic for choosing which variables to measure is a greedy search. A theoretical treatment to understand whether and how this approach might be sub-optimal would be very desirable.
- Are the measurement and prediction "confidence intervals" proper confidence intervals (in the formal statistical sense)? I don't think so -- I wonder if there are alternatives for measuring uncertainty (formal CIs or maybe a Bayesian approach?).
CLARITY
My main complaint about this paper is clarity -- it is not difficult to read per se, but it is difficult to fully grok the details of the approach and the experimental setup. From the current manuscript, I do not feel confident that I could re-implement Deep Sensing or reproduce the experiments. This is especially important in healthcare research, where there is a minor reproducibility crisis, even for research using MIMIC (see [2]). Of course, this can be alleviated by publishing the code and using a public benchmark [3], but it can't hurt to clarify these details in the paper itself (and to add an appendix if length is an issue). Here are some potential areas for improvement:
- The structure of the paper is a bit weird. In particular section 2 (pages 2-4) seems to be a grab bag of miscellaneous topics, at least by the headers. I think the content is fine -- perhaps section 2 can be renamed as "Background," subsection 2.1 renamed as "Notation," and subsection 2.2 renamed as "Problem Formulation" (or similar). I'd just combine subsection 2.3 with the previous one and explain that Figure 1 illustrates the problem formulation.
- The active sensing procedure (subsection 2.2, page 3, equation 1 and the equations just above) is unclear. How are the minimization and maximization performed (gradient descent, line search, etc.)? How is the search for the subset of measurement variables performed (greedy search)? The latter is a discrete search, and I doubt it's, e.g., submodular, so it must be a nontrivial optimization.
- Related, I'm a little confused about equation 1: C_T is the set of variables that should be measured, but C_T is being used to index prediction targets -- is this a typo?
- The related work section is pretty extensive, but I wonder if it should also include work on active learning (Bayesian active learning, in particular, has been applied to sensing), submodular optimization (for sensor placement, which can be thought of as a spatial version of active sensing), and reinforcement learning.
- I don't understand how the training data for the interpolation and imputation functions are constructed.
I *think* that is what is described in the Adaptive Sampling subsection on page 8, but that is unclear. The word "representations" is used here, but that's an overloaded term in machine learning, and its meaning here is unclear from context. It appears that maybe there's an iterative procedure in which we alternate between training a model and then resampling the data using the model -- starting with the full data set.
- The distinction between training and inference is not clear to me, at least with respect to the active sensing component. Is selective sampling performed during training? If so, what happens if the model elects to sample a variable at time t that is not actually measured in the data?
- I don't follow subsection 4.2 (pages 8-9) at all -- what is it describing? If by "runtime" the authors refer to the computational complexity of the algorithm, then I would expect a Big-O analysis (none is provided -- it's just a rather vague discussion of what happens). I'd recommend removing this entire subsection and replacing it with, e.g., an Algorithm figure with pseudocode, as a more succinct description.
- For the experiments, the authors provide insufficient detail about the data and task setup. Since MIMIC is publicly available, readers ought (hypothetically) to be able to reproduce the experiments, but that is not currently possible. As an example, what adverse events are being predicted? How are they defined?
- Figure 4 is nice, but it's not immediately obvious what the connection between observation rate and sampling cost is. The authors should explain how a given observation rate is encoded as a cost in the loss function.
ORIGINALITY
While active sensing is not a new research topic per se, there has been very limited research into the specific question of choosing what clinical variables to measure, and when, in the context of a given prediction problem. This is a topic that (in my experience) is frequently discussed but rarely studied in clinical informatics circles. Hence, this is a very original line of inquiry, and the prediction loss/gain framing is a unique angle.
SIGNIFICANCE
I anticipate this paper will generate significant interest and follow-up work, at least among clinical informaticists and machine learning + health researchers. The main blockers to a significant impact are the clarity-of-writing issues listed above -- and whether the authors publish their code.
REFERENCES
[1] Futoma et al. An Improved Multi-Output Gaussian Process RNN with Real-Time Validation for Early Sepsis Detection. MLHC 2017.
[2] Johnson et al. Reproducibility in critical care: a mortality prediction case study. MLHC 2017.
[3] Harutyunyan et al. Multitask Learning and Benchmarking with Clinical Time Series Data. arXiv.
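Regarding the forward-only versus bidirectional interpolation ablation suggested above, the two naive imputation baselines below pin down what the comparison would be; they are of course far simpler than the learned M-RNN interpolation and are only meant to show the two directions of information flow.

```python
# Two naive imputation baselines for an irregularly sampled signal, to make
# the forward-only vs. bidirectional ablation concrete (much simpler than the
# learned M-RNN interpolation; purely illustrative).
import numpy as np

def forward_fill(x):
    """Causal: each missing value is replaced by the last observed one."""
    x = x.copy()
    last = np.nan
    for i, v in enumerate(x):
        if np.isnan(v):
            x[i] = last
        else:
            last = v
    return x

def bidirectional_interp(x):
    """Non-causal: linear interpolation using observations on both sides."""
    x = x.copy()
    idx = np.arange(len(x))
    obs = ~np.isnan(x)
    x[~obs] = np.interp(idx[~obs], idx[obs], x[obs])
    return x

stream = np.array([1.0, np.nan, np.nan, 4.0, np.nan, 2.0])
print(forward_fill(stream))          # [1. 1. 1. 4. 4. 2.]
print(bidirectional_interp(stream))  # [1. 2. 3. 4. 3. 2.]
```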
iclr_2018_SyJ7ClWCb
COUNTERING ADVERSARIAL IMAGES USING INPUT TRANSFORMATIONS This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier. Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, in particular, when the network is trained on transformed images. The strength of those defenses lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent the defenses. Our best defense eliminates 60% of strong gray-box and 90% of strong black-box attacks by a variety of major attack methods.
To increase robustness to adversarial attacks, the paper fundamentally proposes to transform an input image before feeding it to a convolutional network classifier. The purpose of the transformation is to erase the high-frequency signals potentially embedded by an adversarial attack.
Strong points:
* To my knowledge, the proposed defense strategy is novel (even if the idea of transformation has been introduced at https://arxiv.org/abs/1612.01401).
* The writing is reasonably clear (up to the terminology issues discussed among the weak points), and properly introduces the adversarial attacks considered in the work.
* The proposed approach really helps in a black-box scenario (Figure 4). As explained below, however, the presented investigation is insufficient to assess whether the proposed defense helps in a true white-box scenario.
Weak points:
* The black-box versus white-box terminology is not appropriate, and confusing. In general, black-box means that the adversary knows nothing about the decision process. Hence, in this case, the adversary does not know about the classification model, nor about the defensive method, when one is used. This corresponds to Figure 3. On the contrary, white-box means that the adversary knows everything about the classification method, including the transformation implemented to make it more robust to attacks. Assimilating the parameters of the transform to a secret key is not correct because those parameters could be inferred by presenting many image samples to the transform and looking at the outcome of the transformation (which is supposed to be available in a 'white-box' paradigm) for those samples.
* Using block diagrams would definitely help in presenting the training/testing and attack/defense schemes investigated in Figures 3, 4, and 5.
* The paper does not discuss the impact of the defense strategy on the classification performance in the absence of adversity.
* The paper lacks positioning with respect to recent related works, e.g. 'Adversary Resistant Deep Neural Networks with an Application to Malware Detection' in KDD 2017, or 'Building Adversary-Resistant Deep Neural Networks without Security through Obscurity' at https://arxiv.org/abs/1612.01401.
* In a white-box scenario, the adversary knows about the transformation and the classification model. Hence, an effective and realistic attack should exploit this knowledge. Designing an attack in the case of a non-differentiable transformation is obviously not trivial since back-propagation cannot be used. However, since the proposed transformations primarily aim at removing the high-frequency pattern induced by the attack, one could for example design an attack that accounts for a (linear and differentiable) low-pass filter transformation. Another example of an attack that accounts for knowledge of the transformation (and would hopefully be more robust than the attacks considered in the manuscript) could be one that alternates between a conventional attack and the transformation.
* If I understand correctly, the classification model considered in Figure 3 has been trained on original images, while the one in Figure 4 has been trained on transformed images. However, in the absence of an attack, they both achieve 76% accuracy. Is that correct? Does it mean that the transformation does not affect the classification accuracy at all?
Overall, the work investigates an interesting idea, but lacks the maturity to be accepted. Therefore, I would only recommend acceptance if there is room.
Minor issues:
* Typo on p7: to change*s*
* Clarify poor formulations:
  * p1: 'enforce model-specific strategies that enforce model properties such as invariance and smoothness via the learning algorithm or regularization schemes'.
  * p1: 'too simple to remove adversarial perturbations from input images sufficiently'
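For concreteness, two of the simpler input transformations discussed above (bit-depth reduction and JPEG re-encoding) can be sketched as a preprocessing step applied before classification. This is only a minimal illustration assuming NumPy/Pillow and a generic `classifier` callable; the paper's stronger defenses (total variance minimization and image quilting) are not reproduced here.

```python
import io
import numpy as np
from PIL import Image

def bit_depth_reduce(x, bits=3):
    """Quantize pixel intensities in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def jpeg_round_trip(x, quality=75):
    """Round-trip an RGB image in [0, 1] through lossy JPEG encoding."""
    img = Image.fromarray((x * 255).astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf)).astype(np.float32) / 255.0

def defended_predict(classifier, x):
    """Apply the transformations, then classify the transformed image."""
    return classifier(jpeg_round_trip(bit_depth_reduce(x)))
```

Because the JPEG round trip is non-differentiable, a white-box adversary cannot back-propagate through it directly, which is exactly the property the review argues should be stress-tested (e.g., with a differentiable low-pass surrogate).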
iclr_2018_HJXyS7bRb
Building chatbots that can accomplish goals such as booking a flight ticket is an unsolved problem in natural language understanding. Much progress has been made to build conversation models using techniques such as sequence2sequence modeling. One challenge in applying such techniques to building goal-oriented conversation models is that maximum likelihood-based models are not optimized toward accomplishing goals. Recently, many methods have been proposed to address this issue by optimizing a reward that contains task status or outcome. However, adding the reward optimization on the fly usually provides little guidance for language construction and the conversation model soon becomes decoupled from the language model. In this paper, we propose a new setting in goal-oriented dialogue system to tighten the gap between these two aspects by enforcing model level information isolation on individual models between two agents. Language construction now becomes an important part in reward optimization since it is the only way information can be exchanged. We experimented our models using self-play and results showed that our method not only beat the baseline sequence2sequence model in rewards but can also generate human-readable meaningful conversations of comparable quality.
I like the idea of coupling the language and the conversation model. This is in line with the latest trends of constructing end-to-end NN models that deal with the conversation in a holistic manner. The idea of enforcing information isolation is brilliant. Creating hidden information and allowing the two-party model to learn through self-play is a very interesting approach and the results seem promising. Having said that, I feel important references are missing, and specific statements in the paper, such as "Their success is however limited to conversations with very few turns and without goals", can be disputed. There are papers that are goal-oriented and have many turns. I will just provide one example, to avoid being overwhelming, although more can be found in the literature. That would be the paper of T.-H. Wen, D. Vandyke, N. Mrksic, M. Gasic, L. Rojas-Barahona, P.-H. Su, S. Ultes and S. Young (2017). "A Network-based End-to-End Trainable Task-oriented Dialogue System." EACL 2017, Valencia, Spain. In fact, in this paper even more dialogue modules are coupled. So, the "fresh challenge" claimed by the paper can be disputed. It is not clear to me how you did the supervised part of the training. In my experience, although supervised learning can be used, reinforcement learning seems to be the most popular choice. Also, I had to read most of the paper to understand that the system is based on a simulator. Additionally, it is not clear how you got the ground truth for the training. How are the action and the dialogue generated by the simulator guaranteed to follow the optimal policy? I also disagree with the statement that "based on those... to estimate rewards". If rule-based systems were sufficient, there would not be a need for statistical dialogue managers. However, the latter is a very active research area. Figure 1 is missing information (for my liking), such as undefined symbols. In addition, it's not self-contained. Also, I would prefer a longer, descriptive and informative caption to make the figure as self-explanatory as possible. I believe it would add to the clarity of the paper. Also, fundamental information is, in my opinion, missing. For example, what are the restrictions R and how is the database K formed? What is the size of the database? How many actions do you define? Some of them are defined in the action state decoder, but it is not clear if that is all of them. GRU -> abbreviation not defined. I would really appreciate a figure to better explain the subsection "Encoding External Knowledge". In the current form I am struggling to understand what the authors mean. How is the embedding matrix E created? Have you tried different unit sizes d? Have you tried different unit sizes for the customer and the service? "we use 2 transformation matrixes" -> could you please provide more details? How is equation 2 related to figure 1? Typo: "name of he person". "During the supervised learning... and the action states". I am not sure I get what you mean. Could you state this more clearly, for example by adding an equation? What happens if you use random rather than supervised-learning weight initialisation? Equation 7: What does T stand for? I cannot find Tables 1, 2 and 5 referred to in-text. Moreover, I am not sure about quite a few items. For example, what is number db? What is the inference set? 500k examples is quite a lot of data. A figure on convergence would be nice. Setting generator: You mention the percentage of book and flight not found. What about the rest of the cases?
Typo: “table 3 and table 3”. The set of final states of the dialogue is not the same as those presented in Fig. 2. The reward generation subsection is poorly described; after all, the reward seems to play a very important role in the proposed system. Statements like “things such as” (instead of, for example, the exhaustive list of rules) or “the error against the optimal distance”, with no note on what should be considered the optimal distance, reduce the clarity of the paper and make the results impossible to reproduce. Personally, I would prefer to see some equations or a flow chart. By the way, have you tried an alternative reward function? Table 4 is not easy for me to understand. For example, what do you mean when you say eval reward? Implementation details: I fail to understand how the supervised learning is used (as said already). Also, you make a note about the value network, but not about the policy network. There are some minor issues with the references, such as pomdp or lstm not being capitalised. In general, I believe that the paper has great potential and is notable work. However, the paper could be better organised. Personally, I struggled with the clarity of some text portions. For me, the main drawback of the paper is that it wasn't tested with human users. The actual success of the system when evaluated by humans can be surprisingly different from the one that comes from simulation.
iclr_2018_Sy3fJXbA-
While much of the work in the design of convolutional networks over the last five years has revolved around the empirical investigation of the importance of depth, filter sizes, and number of feature channels, recent studies have shown that branching, i.e., splitting the computation along parallel but distinct threads and then aggregating their outputs, represents a new promising dimension for significant improvements in performance. To combat the complexity of design choices in multi-branch architectures, prior work has adopted simple strategies, such as a fixed branching factor, the same input being fed to all parallel branches, and an additive combination of the outputs produced by all branches at aggregation points. In this work we remove these predefined choices and propose an algorithm to learn the connections between branches in the network. Instead of being chosen a priori by the human designer, the multi-branch connectivity is learned simultaneously with the weights of the network by optimizing a single loss function defined with respect to the end task. We demonstrate our approach on the problem of multi-class image classification using four different datasets where it yields consistently higher accuracy compared to the state-of-the-art "ResNeXt" multi-branch network given the same learning capacity.
The authors extend the ResNeXt architecture. They substitute the simple add operation with a selection operation for each input in the residual module. The selection of the inputs happens through gate weights, which are sampled at train time. At test time, the gates with the highest values are kept on, while the other ones are shut. The authors fix the number of allowed gates to K out of C possible inputs (C is the multi-branch factor in the ResNeXt modules). They show results on CIFAR-100 and ImageNet (as well as mini ImageNet). They ablate the choice of K and the binary nature of the gate weights.
Pros:
(+) The paper is well written and the method is well explained
(+) The authors ablate and experiment on large scale datasets
Cons:
(-) The proposed method is a simple extension of ResNeXt
(-) The gains are reasonable, yet not SOTA, and come at the price of more complex training protocols (see below)
(-) Generalization to other tasks is not shown
The authors do a great job walking us through the formulation and intuition of their proposed approach. They describe their training procedure and their sampling approach for the gate weights. However, the training protocol gets complicated with the introduction of gate weights. In order to train the gate weights along with the network parameters, the authors need to train the parameters jointly, followed by training only the network parameters while keeping the gates frozen. This makes training of such networks cumbersome. In addition, the authors report a loss in performance when the gates are not discretized to {0,1}. This means that a linear combination with the real-valued learned gate parameters is suboptimal. Could this be a result of suboptimal, possibly compromised training? While the CIFAR-100 results look promising, the ImageNet-1k results are less impressive. The gains from introducing gate weights in the input of the residual modules vanish when increasing the network size. Last, the impact of ResNeXt/ResNet lies in their ability to generalize to other tasks. Have the authors experimented with other tasks, e.g. object detection, to verify that their approach leads to better performance in a more diverse set of problems?
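To make the gating mechanism under discussion concrete, here is a minimal sketch of a multi-branch aggregation layer with learnable gate logits, where only the K largest gates are kept active at test time. This is a simplified, hypothetical layer (real-valued gates at training time, no sampling, no alternating training schedule), not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class GatedAggregation(nn.Module):
    """Aggregate C branch outputs with learnable gates; keep only the top-K at test time."""
    def __init__(self, num_branches, k):
        super().__init__()
        self.k = k
        self.gate_logits = nn.Parameter(torch.zeros(num_branches))

    def forward(self, branch_outputs):                 # list of C tensors with identical shapes
        stacked = torch.stack(branch_outputs, dim=0)   # (C, N, ...)
        gates = torch.sigmoid(self.gate_logits)
        if not self.training:                          # discretize: keep the K strongest branches
            keep = torch.topk(gates, self.k).indices
            hard = torch.zeros_like(gates)
            hard[keep] = 1.0
            gates = hard
        gates = gates.view(-1, *([1] * (stacked.dim() - 1)))
        return (gates * stacked).sum(dim=0)
```

The train/test mismatch visible here (soft gates during training, hard top-K at test time) is one way to read the reviewer's question about why the non-discretized variant performs worse.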
iclr_2018_BkSDMA36Z
A NEW METHOD OF REGION EMBEDDING FOR TEXT CLASSIFICATION To represent a text as a bag of properly identified "phrases" and use the representation for processing the text is proved to be useful. The key question here is how to identify the phrases and represent them. The traditional method of utilizing n-grams can be regarded as an approximation of the approach. Such a method can suffer from data sparsity, however, particularly when the length of n-gram is large. In this paper, we propose a new method of learning and utilizing task-specific distributed representations of n-grams, referred to as "region embeddings". Without loss of generality we address text classification. We specifically propose two models for region embeddings. In our models, the representation of a word has two parts, the embedding of the word itself, and a weighting matrix to interact with the local context, referred to as local context unit. The region embeddings are learned and used in the classification task, as parameters of the neural network classifier. Experimental results show that our proposed method outperforms existing methods in text classification on several benchmark datasets. The results also indicate that our method can indeed capture the salient phrasal expressions in the texts.
The authors present a model for text classification. The parameters of the model are an embedding for each word and a local context unit. The local context unit can be seen as a filter for a convolutional layer, but which filter is used at location i depends on the word at location i (i.e. there is one filter per vocabulary word). After the filter is applied to the embeddings and after max pooling, the word-context region embeddings are summed and fed into a neural network for the classification task. The embeddings, the context units and the neural net parameters are trained jointly on a supervised text classification task. The authors also offer an alternative model, which changes the roles of the embedding and the context unit, and results in context-word region embeddings. Here the embedding of word i is combined with the elements of the context units of words in the context. To get the region embeddings, both models (word-context and context-word) combine attributes of the words (embeddings) with how their attributes should be emphasized or deemphasized based on nearby words (local context units and max pooling) while taking into account the relative position of the words in the context (columns of the context units). The method beats existing methods for text classification, including d-LSTMs, BoWs, and n-gram TF-IDFs, on held-out classification accuracy. The choice of baselines is convincing. What is the performance of the proposed method if the embeddings are initialized to pretrained word embeddings and a) trained for the classification task together with randomly initialized context units b) frozen to pretrained embeddings and only the context units are trained for the classification task? The introduction was fine. Until page 3 the authors refer to the context units a couple of times without giving a simple explanation of what they could be. A simple explanation in the introduction would improve the writing. The related work section only makes sense *after* there is at least a minimal explanation of what the local context units do. A simple explanation of the method, for example in the introduction, would then make the connections to CNNs more clear. Also, in the related work, the authors could include more citations (e.g. the d-LSTM and the CNN based methods from Table 2) and explain the qualitative differences between their method and existing ones. The authors should consider adding equation numbers. The equation on the bottom of page 3 is fine, but the expressions in 3.2 and 3.3 are unclear. A more concise explanation of the context-word region embeddings and the word-context region embeddings would be to instead give the equation for r_{i,c}. The included baselines are extensive and the proposed method outperforms existing methods on most datasets. In section 4.5 the authors analyze region and embedding size, which are good analyses to include in the paper. Figures 2 and 3 could be next to each other to save space. I found the idea of multiple region sizes interesting, but no description is given of how exactly they are combined. Since it works so well, maybe it could be promoted into the method section? Also, for each data set, which region size worked best? Qualitative analysis: It would have been nice to see some analysis of whether the learned embeddings capture semantic similarities, both at the embedding level and at the region level.
It would also be interesting to investigate the columns of the context units, with different columns somehow capturing the importance of relative position. Are there some words for which all columns are similar, meaning that their position is less relevant in how they affect nearby words? And then for other words with variation along the columns of the context units, do their context units modulate the embedding more when they are closer or further away?
Pros:
+ simple model
+ strong quantitative results
Cons:
- notation (i.e. precise definition of r_{i,c})
- qualitative analysis could be extended
- writing could be improved
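Regarding the request above for an explicit equation for r_{i,c}, a minimal NumPy sketch of how I read the word-context variant (element-wise products between the middle word's context-unit columns and its neighbours' embeddings, followed by max pooling over the region) is given below. The shapes and initialization are placeholders, and the exact composition in the paper may differ.

```python
import numpy as np

V, d, c = 10000, 128, 7                 # vocabulary size, embedding size, region size (odd)
E = 0.01 * np.random.randn(V, d)        # word embeddings
U = 0.01 * np.random.randn(V, c, d)     # local context units: one d-vector per relative position

def word_context_region_embedding(word_ids, i):
    """r_{i,c}: max-pool the element-wise products between the middle word's
    context-unit columns and the embeddings of the words in its region."""
    half = c // 2
    middle = word_ids[i]
    projected = []
    for t in range(-half, half + 1):
        j = i + t
        if 0 <= j < len(word_ids):
            projected.append(U[middle, t + half] * E[word_ids[j]])
    return np.stack(projected).max(axis=0)   # shape (d,)
```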
iclr_2018_BJgd7m0xRZ
Anomaly detection discovers regular patterns in unlabeled data and identifies the non-conforming data points, which in some cases are the result of malicious attacks by adversaries. Learners such as One-Class Support Vector Machines (OCSVMs) have been successfully used in anomaly detection, yet their performance may degrade significantly in the presence of sophisticated adversaries, who target the algorithm itself by compromising the integrity of the training data. With the rise in the use of machine learning in mission critical day-to-day activities where errors may have significant consequences, it is imperative that machine learning systems are made secure. To address this, we propose a defense mechanism that is based on a contraction of the data, and we test its effectiveness using OCSVMs. The proposed approach introduces a layer of uncertainty on top of the OCSVM learner, making it infeasible for the adversary to guess the specific configuration of the learner. We theoretically analyze the effects of adversarial perturbations on the separating margin of OCSVMs and provide empirical evidence on several benchmark datasets, which show that by carefully contracting the data in low dimensional spaces, we can successfully identify adversarial samples that would not have been identifiable in the original dimensional space. The numerical results show that the proposed method improves OCSVMs performance substantially (2-7%).
Although the problem addressed in the paper seems interesting, there is a lack of evidence to support some of the arguments that the authors make. Moreover, the paper does not contribute novelty to representation learning; therefore, it is not a good fit for the conference. Detailed critiques are as follows:
1. The idea proposed by the authors seems quite simple. It just performs random projections 1000 times and chooses the set of projection parameters that results in the highest compactness as the dimensionality-reduction parameters before the one-class SVM.
2. It says in the experiments part that the authors have used 3 different S_{attack} values, but they only present results for S_{attack} = 0.5. It would be nicer if they included results for all S_{attack} values that they have used in their experiments, which would also give the reader insight into how the anomaly detection performance degrades when the S_attack value changes.
3. The paper claims that the nonlinear random projection is a defence against an adversary due to its randomness, but there are no results in the paper showing that non-random projections are susceptible to an adversary designed to target that projection mechanism while the nonlinear random projection is able to get away with it. And PCA, as a non-random projection, would be a nice baseline to compare against.
4. The paper seems to misuse the term “False positive rate” as the y label of figure 3(d/e/f). The definition of false positive rate is FP/(FP+TN), so if the FPR=1 it means that all negative samples are labeled as positive. So it is surprising to see FPR=1 in Figure 3(d) when feature dimension=784 while the f1 score is still high in Figure 3(a). From what I understand, the paper means to present the percentage of adversarial examples that are misclassified instead of all the anomaly examples that get misclassified. The paper should come up with a better term for that evaluation.
5. The conclusion that the robustness of the learned model with respect to integrity attacks increases when the projection dimension becomes lower cannot be drawn from Figure 3(d). More experiments across more dimensionalities are needed to prove that.
6. In the appendix B results part, sometimes the word ’S_attack’ is typed wrong. And the values in “distorted/distorted” columns in Table 5 do not match up with the ones in Figure 3(c).
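For reference, the pipeline criticized in point 1 can be sketched in a few lines with scikit-learn; the compactness criterion used here (mean per-dimension variance) is only a stand-in, since the paper's exact measure is not reproduced in this review.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def pick_random_projection(X, out_dim, n_trials=1000, seed=0):
    """Among many random nonlinear projections, keep the one giving the most compact data."""
    rng = np.random.RandomState(seed)
    best_W, best_spread = None, np.inf
    for _ in range(n_trials):
        W = rng.randn(X.shape[1], out_dim)
        Z = np.tanh(X @ W)                      # nonlinear random projection
        spread = np.var(Z, axis=0).mean()       # proxy for (lack of) compactness
        if spread < best_spread:
            best_W, best_spread = W, spread
    return best_W

# Usage sketch:
# W = pick_random_projection(X_train, out_dim=32)
# detector = OneClassSVM(kernel="rbf", nu=0.1).fit(np.tanh(X_train @ W))
# scores = detector.decision_function(np.tanh(X_test @ W))
```

Written this way, the comparison with a deterministic projection such as PCA (point 3) is essentially a one-line change, which supports the request for that baseline.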
iclr_2018_rJ6iJmWCW
In this paper, we propose the generation of accented speech using generative adversarial networks (GANs). Through this work we make two main contributions a) The ability to condition latent representations while generating realistic speech samples b) The ability to efficiently generate long speech samples by using a novel latent variable transformation module that is trained using policy gradients. Previous methods are limited in being able to generate only relatively short samples or are not very efficient at generating long samples. The generated speech samples are validated through a number of various evaluation measures viz, a Wasserstein-GAN critic loss and through subjective scores on user evaluations against a competitive speech synthesis baseline. The evaluations demonstrate that the model generates realistic long speech samples conditioned on accent efficiently.
This paper presents a method for generating speech audio in a particular accent. The proposed approach relies on a generative adversarial network (GAN), combined with a policy approach for joining together generated speech segments. The latter is used to deal with the problem of generating very long sequences (which is generally difficult with GANs). The problem of generating accented speech is very relevant since accent plays a large role in human communication and speech technology. Unfortunately, this paper is hard to follow. Some of the approach details are unclear and the research is not motivated well. The evaluation does not completely support the claims of the paper, e.g., there is no human judgment of whether the generated audio actually matches the desired accent.
Detailed comments, suggestions, and questions:
- It would be very useful to situate the research within work from the speech community. Why is accent modelling important? How is this done at the moment in speech synthesis systems? The paper gives some references, but without context. The paper from Ikeno and Hansen below might be useful.
- Accents are also a big problem in speech recognition (references below). Could your approach give accent-invariant representations for recognition?
- Figure 1: Add $x$, $y$, and the other variables you mention in Section 3 to the figure.
- What is $o$ in eq. (1)?
- Could you add a citation for eq. (2)? This would also help justify the claim that "it has a smoother curve and hence allows for more meaningful gradients".
- With respect to the critic $C_\nu$, I can see that it might be helpful to add structure to the hidden representation. In the evaluation, could you show the effect of having/not having this critic (sorry if I missed it)? The statement about "more efficient layers" is not clear.
- Section 3.4: If I understand correctly, this is a nice idea for ensuring that generated segments are combined sensibly. It would be helpful to define what "segments" refers to, and to step through the audio generation process.
- Section 4.1: "using which we can" - typo.
- Section 5.1: "Figure 1 shows how the Wasserstein distance ..." I think you refer to the figure with Table 1?
- Figure 4: Add (a), (b) and (c) to the relevant parts in the figure.
References that might be useful:
- Ikeno, Ayako, and John HL Hansen. "The effect of listener accent background on accent perception and comprehension." EURASIP Journal on Audio, Speech, and Music Processing 2007, no. 3 (2007): 4.
- Van Compernolle, Dirk. "Recognizing speech of goats, wolves, sheep and… non-natives." Speech Communication 35, no. 1 (2001): 71-79.
- Benzeghiba, Mohamed, Renato De Mori, Olivier Deroo, Stephane Dupont, Teodora Erbes, Denis Jouvet, Luciano Fissore et al. "Automatic speech recognition and speech variability: A review." Speech Communication 49, no. 10 (2007): 763-786.
- Wester, Mirjam, Cassia Valentini-Botinhao, and Gustav Eje Henter. "Are We Using Enough Listeners? No!—An Empirically-Supported Critique of Interspeech 2014 TTS Evaluations." In Sixteenth Annual Conference of the International Speech Communication Association. 2015.
The paper tries to address an important problem, and there are good ideas in the approach (I suspect Sections 3.3 and 3.4 are sensible). Unfortunately, the work is not presented or evaluated well, and I therefore give a weak reject.
iclr_2018_HktRlUlAZ
POLAR TRANSFORMER NETWORKS Convolutional neural networks (CNNs) are inherently equivariant to translation. Efforts to embed other forms of equivariance have concentrated solely on rotation. We expand the notion of equivariance in CNNs through the Polar Transformer Network (PTN). PTN combines ideas from the Spatial Transformer Network (STN) and canonical coordinate representations. The result is a network invariant to translation and equivariant to both rotation and scale. PTN is trained end-to-end and composed of three distinct stages: a polar origin predictor, the newly introduced polar transformer module and a classifier. PTN achieves stateof-the-art on rotated MNIST and the newly introduced SIM2MNIST dataset, an MNIST variation obtained by adding clutter and perturbing digits with translation, rotation and scaling. The ideas of PTN are extensible to 3D which we demonstrate through the Cylindrical Transformer Network.
This paper presents a new convolutional network architecture that is invariant to global translations and equivariant to rotations and scaling. The method is a combination of a spatial transformer module that predicts a focal point, around which a log-polar transform is performed. The resulting log-polar image is analyzed by a conventional CNN. I find the basic idea quite compelling. Although this is not mentioned in the article, the proposed approach is quite similar to human vision in that people choose where to focus their eyes, and have an approximately log-polar sampling grid in the retina. Furthermore, dealing well with variations in scale is a long-standing and difficult problem in computer vision, and using a log-spaced sampling grid seems like a sensible approach to deal with it. One fundamental limitation of the proposed approach is that although it is invariant to global translations, it does not have the built-in equivariance to local translations that a ConvNet has. Although we do not have data on this, I would guess that for more complex datasets like ImageNet / MS COCO, where a lot of variation can be reasonably well modelled by diffeomorphisms, this will result in degraded performance. The use of the heatmap centroid as the prediction for the focal point is potentially problematic as well. It would not work if the heatmap is multimodal, e.g. when there are multiple instances in the same image or when there is a lot of clutter. There is a minor conceptual confusion on page 4, where it is written that "Group-convolution requires integrability over a group and identification of the appropriate measure dg. We ignore this detail as implementation requires application of the sum instead of integral." When approximating an integral by a sum, one should generally use quadrature weights that depend on the measure, so the measure cannot be ignored. Fortunately, in the chosen parameterization, the Haar measure is equal to the standard Lebesgue measure, and so when using equally-spaced sampling points in this parameterization, the quadrature weights should be one. (Please double-check this - I'm only expressing my mathematical intuition but have not actually proven this). It does not make sense to say that "The above convolution requires computation of the orbit which is feasible with respect to the finite rotation group, but not for general rotation-dilations", and then proceed to do exactly that (in canonical coordinates). Since the rotation-dilation group is 2D, just like the 2D translation group used in ConvNets, this is entirely feasible. The use of canonical coordinates is certainly a sensible choice (for the reason given above), but it does not make an infeasible computation feasible. The authors may want to consider citing
- Warped Convolutions: Efficient Invariance to Spatial Transformations, Henriques & Vedaldi.
This paper also uses a log-polar transform, but lacks the focal point prediction / STN. Likewise, although the paper makes a good effort to review the literature on equivariance / steerability, it misses several recent works in this area:
- Steerable CNNs, Cohen & Welling
- Dynamic Steerable Blocks in Deep Residual Networks, Jacobsen et al.
- Learning Steerable Filters for Rotation Equivariant CNNs, Weiler et al.
The last paper reports 0.71% error on MNIST-rot, which is slightly better than the PTN-CNN-B++ reported on in this paper. The experimental results presented in this paper are quite good, but both MNIST and ModelNet40 seem like simple / toyish datasets.
For reasons outlined above, I am not convinced that this approach in its current form would work very well on more complicated problems. If the authors can show that it does (either in its current form or after improving it, e.g. with multiple saccades, or other improvements) I would recommend this paper for publication. Minor issues & typos - Section 3.1, psi_gh = psi_g psi_h. I suppose you use psi for L and L', but this is not very clear. - L_h f = f(h^{-1}), p. 4 - "coordiantes", p. 5
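To illustrate the canonical-coordinate idea being discussed (rotations and scalings about the focal point becoming translations), here is a minimal NumPy/SciPy log-polar resampling sketch. PTN performs this step differentiably inside the network and predicts the origin with its polar origin predictor; this standalone version is only for intuition.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_transform(img, center, n_rho=64, n_theta=64):
    """Resample a 2-D image on a log-polar grid around a given origin."""
    cy, cx = center
    max_r = 0.5 * np.hypot(*img.shape)
    rho = np.exp(np.linspace(0.0, np.log(max_r), n_rho))            # log-spaced radii
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)  # uniformly spaced angles
    R, T = np.meshgrid(rho, theta, indexing="ij")
    ys, xs = cy + R * np.sin(T), cx + R * np.cos(T)
    return map_coordinates(img, [ys, xs], order=1, mode="constant")
```

In these coordinates a rotation of the input shifts the output along the theta axis and a dilation shifts it along the rho axis, so ordinary translation-equivariant convolutions on the output become rotation/scale-equivariant with respect to the chosen origin.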
iclr_2018_SyqAPeWAZ
In recent years Convolutional Neural Networks (CNN) have been used extensively for Superresolution (SR). In this paper, we use inverse problem and sparse representation solutions to form a mathematical basis for CNN operations. We show how a single neuron is able to provide the optimum solution for inverse problem, given a low resolution image dictionary as an operator. Introducing a new concept called Representation Dictionary Duality, we show that CNN elements (filters) are trained to be representation vectors and then, during reconstruction, used as dictionaries. In the light of theoretical work, we propose a new algorithm which uses two networks with different structures that are separately trained with low and high coherency image patches and show that it performs faster compared to the state-of-the-art algorithms while not sacrificing from performance.
The paper proposes a new architecture for solving the image super-resolution task. The authors provide an analysis that aims to establish a connection between CNNs for solving super-resolution and solving sparse regularized inverse problems. The writing of the paper needs improvement. I was not able to understand the proposed connection, as the notation is inconsistent and it is difficult to figure out what the authors are stating. I am willing to reconsider my evaluation if the authors provide clarifications. The paper does not refer to recent advances in the problem, which are (as far as I know) the state of the art in the problem in terms of quality of the solutions. These references should be added and the authors should put their work into context.
1) Arguably, the state of the art in super-resolution consists of techniques that go beyond L2 fitting. Specifically, methods using perceptual losses such as: Johnson, J. et al "Perceptual losses for real-time style transfer and super-resolution." European Conference on Computer Vision. Springer International Publishing, 2016. Ledig, Christian, et al. "Photo-realistic single image super-resolution using a generative adversarial network." arXiv preprint arXiv:1609.04802 (2016). PSNR is known to not be directly related to image quality, as it favors blurred solutions. This should be discussed.
2) The overall notation of the paper should be improved. For instance, in (1), g represents the observation (the LR image), whereas later in the text, g is the HR image.
3) The description of Section 2.1 is quite confusing in my view. In equation (1), y is the signal to be recovered and K is just the downsampling plus blurring. So assuming an L1 regularization in this equation assumes that the signal itself is sparse. Equation (2) changes notation, referring to y as f.
4) Equation (2) seems wrong. The term multiplying K^T is not the norm (it should be in parentheses).
5) The first statement of Section 2.2 seems wrong. DL methods do state the super-resolution problem as an inverse problem. Instead of using a pre-defined basis function they learn an over-complete dictionary from the data, assuming that natural images can be sparsely represented. Also, this section does not explain how DL is used for super-resolution. The cited work by Yang et al learns two coupled dictionaries (one for LR and one for HR), such that for a given patch, the same sparse coefficients can reconstruct both HR and LR patches. The authors just state the sparse coding problem.
6) Equation (10) should not contain the \leq \epsilon.
7) In the second paragraph of Section 3, the authors mention that the LR image has to be larger than the HR image to prevent border effects. This makes sense. However, with the size of the network (20 layers), the change in size seems to be quite large. Could you please provide the sizes? When measuring PSNR, is this taken into account?
8) It would be very helpful to include an image explaining the procedure described in the second paragraph of Section 3.
9) I find the description in Section 3 quite confusing. The authors relate the training of a single filter (or neuron) to equation (7), but they define D, which is not used anywhere in Section 2.1. And K does not appear in any of the analysis given in the last paragraph of page 4. However, D and K seem to be two different things (it is not just one standing in for the other); see below.
10) I cannot understand the derivation that the authors do in the last paragraph of page 4 (and beginning of page 5). What is phi_l here?
K in equation (7) seems to correspond to D here, but D here is a collection of patches whereas in (7) it is a blurring and downsampling operator. I cannot review this section. I will wait for the authors' clarifications in the response.
11) The authors describe a change in roles between the representations and atoms in the training and testing phases, respectively. I do not understand this. If I understand correctly, in the final algorithm the authors train a CNN mapping LR to HR images. The network is used in the same way at training and testing.
12) It would be useful to provide more details about the training of the network. Please describe the training set used by Kim et al. Are the two networks trained independently? One could think of fine-tuning them jointly (including the aggregation).
13) The authors show the advantage of separating networks on a single image, Barbara. It would be good to quantify this better (maybe in terms of PSNR?). This observation might hold only because of the training loss used, compared with, say, the losses in the works cited above. Please comment on this.
14) In figures 3 and 4, the learned filters are those on the top (above the yellow arrow). It is not obvious to me that they reflect the predominant structure in the data (maybe due to the low resolution).
15) This work is related to (though clearly different from) LISTA (Learned ISTA)-type networks, proposed in: Gregor, K., & LeCun, Y. (2010). Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML), which connect the network architecture with the optimization algorithm used for solving the sparse coding problem. Follow-up works have used these ideas for solving inverse problems as well.
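For readers less familiar with the sparse-coding / inverse-problem formulation this review keeps returning to (and the LISTA connection in point 15), a plain ISTA loop for min_f 0.5*||K f - g||^2 + lam*||f||_1 looks as follows; the step size should not exceed 1/L, with L the largest eigenvalue of K^T K. This is textbook material, not code from the paper.

```python
import numpy as np

def ista(K, g, lam, step, n_iters=200):
    """Iterative shrinkage-thresholding for  min_f 0.5*||K f - g||^2 + lam*||f||_1."""
    f = np.zeros(K.shape[1])
    for _ in range(n_iters):
        z = f - step * (K.T @ (K @ f - g))                        # gradient step on the data term
        f = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding (prox of L1)
    return f
```

The LISTA line of work cited in point 15 unrolls exactly this iteration into a learned network, which is why it is a natural reference for a paper claiming a CNN / sparse-coding correspondence.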
iclr_2018_Hy_o3x-0b
There have been multiple attempts with variational auto-encoders (VAE) to learn powerful global representations of complex data using a combination of latent stochastic variables and an autoregressive model over the dimensions of the data. However, for the most challenging natural image tasks the purely autoregressive model with stochastic variables still outperform the combined stochasticautoregressive models. In this paper, we present simple additions to the VAE framework that generalize to natural images by embedding spatial information in the stochastic layers. We significantly improve the state-of-the-art results on MNIST, OMNIGLOT, CIFAR10 and ImageNet when the feature map parameterization of the stochastic variables are combined with the autoregressive PixelCNN approach. Interestingly, we also observe close to state-of-the-art results without the autoregressive part. This opens the possibility for high quality image generation with only one forward-pass.
The paper combines several recent advances in generative modelling, including a ladder variational posterior and a PixelCNN decoder, together with the proposed convolutional stochastic layers to boost the NLL results of current VAEs. The numbers in the tables are good, but I have several comments on the motivation, originality and experiments. Most parts of the paper provide a detailed review of the literature. However, the resulting model is essentially a combination of existing advances, and the main contribution of the paper, i.e. the convolutional stochastic layer, is not well discussed. Why should we introduce the convolutional stochastic layers? Could the layers encode the spatial information better than a deterministic convolutional layer with the same architecture? What is the exact challenge of training VAEs addressed by the convolutional stochastic layer? Please strengthen the motivation and originality of the paper. Though the results are good, I still wonder what the exact contribution of the convolutional stochastic layers to the NLL results is. Can the authors provide some results without the ladder variational posterior and the PixelCNN decoder on both the gray-scaled and the natural images? According to the experimental setting in Section 3 (Page 5, Paragraph 2), "In case of gray-scaled images the stochastic latent layers are dense with sizes 64, 32, 16, 8, 4 (equivalent to Sønderby et al. (2016)) and for the natural images they are spatial (cf. Table 1). There was no significant difference when using feature maps (as compared to dense layers) for modelling gray-scaled images.", there is no stochastic convolutional layer for the gray-scaled images. Then is there anything new in FAME on the gray images? Furthermore, how could FAME advance the previous state of the art? It seems to be because of other factors rather than the stochastic convolutional layer. The results on the natural images are not complete. Please present the generation results on the ImageNet dataset and the reconstruction results on both the CIFAR10 and ImageNet datasets. The quality of the samples on the CIFAR10 dataset does not seem competitive with the baseline papers listed in the table. Though the visual quality does not necessarily agree with the NLL results, such a large gap is still strange. Besides, why can FAME obtain both good NLL and generation results on the MNIST and OMNIGLOT datasets when there is no stochastic convolutional layer? Meanwhile, why can FAME not obtain good generation results on the CIFAR10 dataset? Is it because there is a lot of randomness in the stochastic convolutional layer? It would be better to provide further analysis, and it is not safe to say that the stochastic convolutional layer helps learn better latent representations based only on the NLL results. Minor things: Please rewrite the sentence "When performing reconstructions during training ... while also using the stochastic latent variables z = z_1, ..., z_L." in the caption of Figure 1.
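For clarity about what a "convolutional (spatial) stochastic layer" amounts to, here is a minimal reparameterized version in PyTorch: the latent mean and log-variance are feature maps rather than dense vectors, so the latent code keeps a spatial layout. This is a generic sketch, not FAME's exact parameterization.

```python
import torch
import torch.nn as nn

class ConvStochasticLayer(nn.Module):
    """Spatial Gaussian latent: convolutional mean/log-variance heads + reparameterization."""
    def __init__(self, in_channels, z_channels):
        super().__init__()
        self.mu = nn.Conv2d(in_channels, z_channels, kernel_size=3, padding=1)
        self.logvar = nn.Conv2d(in_channels, z_channels, kernel_size=3, padding=1)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # z has shape (N, z_channels, H, W)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl
```

An ablation replacing this layer with a deterministic Conv2d of the same width, as requested above, would directly answer whether the stochasticity (rather than just the spatial layout) is what matters.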
iclr_2018_r1kP7vlRb
Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN's: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement.
This paper considers the problem of improving sequence generation by learning better metrics. Specifically, it focuses on addressing the exposure bias problem, for which traditional methods such as SeqGAN use the GAN framework and reinforcement learning. Different from these works, this paper does not use the GAN framework. Instead, it proposes expert-based reward function training, which trains the reward function (the discriminator) from data that are generated by randomly modifying parts of the expert trajectories. Furthermore, it also introduces partial reward functions that measure the quality of subsequences of different lengths in the generated data. This is similar to the idea of hierarchical RL, which divides the problem into potential subtasks and could alleviate the difficulty of reinforcement learning from sparse rewards. The idea of the paper is novel. However, there are a few points to be clarified. In Section 3.2 and in (4) and (5), the authors explain how the action value Q_{D_i} is modeled and estimated for the partial reward function D_i of length L_{D_i}. But the authors do not explain how the rewards (or action value functions) of different lengths are aggregated together to update the model using policy gradient. Is it a simple sum of all of them? It is not clear why the future subsequences that do not contain y_{t+1} are ignored for estimating the action value function Q in (4) and (5). The authors state that it is for reducing the computational complexity. But it is not clear why specifically the sequences that do not contain y_{t+1} are dropped. Please clarify this point further.
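Two of the ingredients discussed above are simple enough to sketch. The first function below builds "fake" sequences for expert-based reward training by randomly editing expert sequences; the second shows one plausible aggregation (a plain sum) of partial reward functions of different lengths. The exact edit operations and the combination rule are precisely what this review asks the authors to clarify, so both are assumptions.

```python
import numpy as np

def corrupt_expert_sequence(seq, vocab_size, edit_prob=0.2, rng=None):
    """Create a negative example for the reward function by random token replacement."""
    rng = rng or np.random.RandomState(0)
    edited = list(seq)
    for t in range(len(edited)):
        if rng.rand() < edit_prob:
            edited[t] = rng.randint(vocab_size)
    return edited

def total_reward(sequence, partial_reward_fns):
    """Aggregate short- and long-scale partial reward functions (assumed: simple sum)."""
    return sum(reward_fn(sequence) for reward_fn in partial_reward_fns)
```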
iclr_2018_Sy8XvGb0-
LATENT CONSTRAINTS: LEARNING TO GENERATE CONDITIONALLY FROM UNCONDITIONAL GENERATIVE MODELS Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal "realism" constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function.
UPDATE: I think the authors' rebuttal and updated draft address my points sufficiently well for me to update my score and align myself with the other reviewers. ----- ORIGINAL REVIEW: The paper proposes a method for learning post-hoc to condition a decoder-based generative model which was trained unconditionally. Starting from a VAE trained with an emphasis on good reconstructions (and at the expense of sample quality, via a small hard-coded standard deviation on the conditional p(x | z)), the authors propose to train two "critic" networks on the latent representation: 1. The "realism" critic receives either a sample z ~ q(z) (which is implicitly defined as the marginal of q(z | x) over all empirical samples) or a sample z ~ p(z) and must tell them apart. 2. The "attribute" critic receives either a (latent code, attribute) pair from the dataset or a synthetic (latent code, attribute) pair (obtained by passing both the attribute and a prior sample z ~ p(z) through a generator) and must tell them apart. The goal is to find a latent code which satisfies both the realism and the attribute-exhibiting criteria, subject to a regularization penalty that encourages it to stay close to its starting point. It seems to me that the proposed realism constraint hinges exclusively on the ability to implictly capture the marginal distribution q(z) via a trained discriminator. Because of that, any autoencoder could be used in conjunction with the realism constraint to obtain good-looking samples, including the identity encoder-decoder pair (in which case the problem reduces to generative adversarial training). I fail to see why this observation is VAE-specific. The authors do mention that the VAE semantics allow to provide some weak form of regularization on q(z) during training, but the way in which the choice of decoder standard deviation alters the shape of q(z) is not explained, and there is no justification for choosing one standard deviation value in particular. With that in mind, the fact that the generator mapping prior samples to "realistic" latent codes works is expected: if the VAE is trained in a way that encourages it to focus almost exclusively on reconstruction, then its prior p(z) and its marginal q(z) have almost nothing to do with each other, and it is more convenient to view the proposed method as a two-step procedure in which an autoencoder is first trained, and an appropriate prior on latent codes is then learned. In other words, the generator represents the true prior by definition. The paper is also rather sparse in terms of comparison with existing work. Table 1 does compare with Perarnau et al., but as the caption mentions, the two methods are not directly comparable due to differences in attribute labels. Some additional comments: - BiGAN [1] should be cited as concurrent work when citing (Dumoulin et al., 2016). - [2] and [3] should be cited as concurrent work when citing (Ulyanov et al., 2016). Overall, the relative lack of novelty and comparison with previous work make me hesitant to recommend the acceptance of this paper. References: [1] Donahue, J., Krähenbühl, P., and Darrell, T. (2017). Adversarial feature learning. In Proceedings of the International Conference on Learning Representations. [2] Li, C., and Wand, M. (2016). Precomputed real-time texture synthesis with markovian generative adversarial networks. In European Conference on Computer Vision. [3] Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. 
In European Conference on Computer Vision.
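For intuition about the gradient-based conditional sampling the review describes, here is a minimal sketch: starting from a latent code z0, gradient steps increase the (scalar) scores of a realism critic and an attribute critic while a penalty keeps z close to z0. The critic interfaces and loss weights are hypothetical; the paper's actual objectives differ in detail.

```python
import torch

def constrain_latent(z0, realism_critic, attribute_critic, attribute,
                     steps=100, lr=0.05, dist_weight=0.1):
    """Find a nearby latent code that both critics score highly."""
    z = z0.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = (-realism_critic(z).mean()
                - attribute_critic(z, attribute).mean()
                + dist_weight * (z - z0).pow(2).sum())   # identity-preserving penalty
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return z.detach()
```

The amortized "actor" variant in the paper replaces this inner optimization with a learned network; the identity-preserving transformations roughly correspond to keeping the distance penalty relatively strong.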
iclr_2018_Skvd-myR-
Measuring visual (dis)similarity between two or more instances within a data distribution is a fundamental task in many applications, especially in image retrieval. Theoretically, non-metric distances are able to generate a more complex and accurate similarity model than metric distances, provided that the non-linear data distribution is precisely captured by the similarity model. In this work, we analyze a simple approach for deep learning networks to be used as an approximation of non-metric similarity functions and we study how these models generalize across different image retrieval datasets.
The authors of this work propose learning a similarity measure for visual similarity and obtain, by doing that, an improvement on the very well-known Oxford and Paris datasets for image retrieval. The work takes high-level image representations generated with an existing architecture (R-MAC), and trains on top a neural network of two fully connected layers. The training of such a network is performed in three stages: firstly approximating the cosine similarity with a large number of random feature vectors, secondly using image pairs from the same class, and finally using the hard examples.
PROS
P1. Results indicate the benefit of this approach in terms of similarity estimation and, overall, the paper presents results that extend the state of the art on well-known datasets.
P2. The authors make a very nice effort in motivating the paper, relating it to the state of the art and grounding their proposal in studies regarding human visual perception. The whole text is very well written and clear to follow.
CONS
C1. As already observed by the authors, training a similarity function without considering images from the target dataset is actually harmful. In this sense, the simple cosine similarity does not present this drawback in terms of lack of generalization. This observation is not new, but it is relevant in the field of image retrieval, where in many applications the object of interest for a query is actually not present in the training dataset.
C2. The main drawback of this approach is in terms of computation. Feed-forwarding the two samples through the trained neural network is far more expensive than computing the simple cosine similarity, which is computed very quickly with a GPU as a matrix multiplication. The authors already point this out in Section 4.3.
C3. I am somewhat surprised that the authors did not also explore training the network that extracts the high-level representations, that is, a complete end-to-end approach. While I would expect the weights to be frozen in the first phase of training to mimic the cosine similarity, why not free the rest of the layers when dealing with pairs of images?
C4. There are a couple of recent papers that report state-of-the-art results which are close to and sometimes better than the ones presented in this work. I do not think they reduce the contribution of this work at all, but they should be cited and maybe included in the tables:
A. Gordo, J. Almazan, J. Revaud, and D. Larlus. End-to-end learning of deep visual representations for image retrieval. International Journal of Computer Vision, 124(2):237–254, 2017.
Albert Jimenez, Jose M. Alvarez, and Xavier Giro-i-Nieto. “Class-Weighted Convolutional Features for Visual Instance Search.” In Proceedings of the 28th British Machine Vision Conference (BMVC). 2017.
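As an illustration of the first training stage described above (teaching the two-layer network to reproduce the cosine similarity on random vectors), a minimal PyTorch sketch follows. The descriptor dimension, layer sizes, and optimizer settings are placeholders rather than the paper's values, and the later stages on same-class and hard pairs are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 512                                                  # placeholder descriptor size
sim_net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
optimizer = torch.optim.Adam(sim_net.parameters(), lr=1e-3)

for _ in range(1000):                                      # stage 1: mimic cosine similarity
    a = F.normalize(torch.randn(64, dim), dim=1)           # random "descriptor" pairs
    b = F.normalize(torch.randn(64, dim), dim=1)
    target = (a * b).sum(dim=1, keepdim=True)              # cosine similarity of each pair
    loss = F.mse_loss(sim_net(torch.cat([a, b], dim=1)), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Framed this way, C2 is apparent: ranking a database of N images requires N forward passes through sim_net per query, versus a single matrix multiplication for the cosine baseline.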
iclr_2018_H1Nyf7W0Z
Neural sequence generation is commonly approached by using maximumlikelihood (ML) estimation or reinforcement learning (RL). However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL. In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL. We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α → 0 and RL to α → 1). We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α. Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.
This paper considers a dichotomy between ML- and RL-based methods for sequence generation. It is argued that the ML approach has some "discrepancy" between the optimization objective and the learning objective, and the RL approach suffers from bad sample complexity. An alpha-divergence formulation is considered to combine both methods. Unfortunately, I do not understand the main points made in this paper and am thus not able to give an accurate evaluation of the technical content of this paper. I therefore have no option but to vote for rejection of this paper, based on my educated guess. Below are the points that I'm particularly confused about:
1. For the ML formulation, the paper made several particularly confusing remarks. Some of them are blatantly wrong to me. For example,
1.1 The q(.|.) distribution in Eq. (1) *cannot* really be the true distribution, because the true distribution is unknown and therefore cannot be used to construct estimators. From the context, I guess the authors mean "empirical training distribution"?
1.2 I understand that the ML objective is different from what the users really care about (e.g., BLEU score), but this does not seem like a "discrepancy" to me. The ML estimator simply finds a parameter that is the most consistent with the observed sequences; and if it fails to perform well in some other evaluation criterion such as BLEU score, it simply means the model is inadequate to describe the data given, or the model class is so large that the given number of samples is insufficient, and as a result one should change his/her modeling to make it more apt to describe the data at hand. In summary, I'm not convinced that the fact that ML optimizes a different objective than the BLEU score is a problem with the ML estimator. In addition, I don't see at all why this discrepancy is a discrepancy between training and testing data. As long as both of them are identically distributed, no discrepancy exists.
1.3 In point (ii) under the maximum likelihood section, I don't understand it at all and I think both sentences are wrong. First, the model is *not* trained on the true distribution, which is unknown. The model is trained on an empirical distribution whose points are sampled from the true distribution. I also don't understand why it is evaluated using p_theta; if I understand correctly, the model is evaluated on held-out test data, which is also generated from the underlying true distribution.
2. For the RL approach, I think it is very unclear as a formulation of an estimator. For example, in Eq. (2), what is r and what is y*? It is mentioned that r is a "reward" function, but I don't know what it means and the authors should perhaps explain further. I just don't see how one obtains an estimated parameter theta from the formulation in Eq. (2) using training examples.
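For reference, since the review hinges on the α-divergence interpolation, the standard (Amari) α-divergence between distributions p and q and its KL limits are reproduced below; how the paper plugs the reward-induced distribution and the model distribution into this family is exactly what the review says needs clearer exposition.

```latex
D_{\alpha}(p \,\|\, q)
  = \frac{1}{\alpha(1-\alpha)}\Bigl(1 - \int p(y)^{\alpha}\, q(y)^{1-\alpha}\, dy\Bigr),
\qquad
\lim_{\alpha \to 0} D_{\alpha}(p \,\|\, q) = \mathrm{KL}(q \,\|\, p),
\qquad
\lim_{\alpha \to 1} D_{\alpha}(p \,\|\, q) = \mathrm{KL}(p \,\|\, q).
```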
iclr_2018_BkpiPMbA-
DECISION BOUNDARY ANALYSIS OF ADVERSARIAL EXAMPLES Deep neural networks (DNNs) are vulnerable to adversarial examples, which are carefully crafted instances aiming to cause prediction errors for DNNs. Recent research on adversarial examples has examined local neighborhoods in the input space of DNN models. However, previous work has limited what regions to consider, focusing either on low-dimensional subspaces or small balls. In this paper, we argue that information from larger neighborhoods, such as from more directions and from greater distances, will better characterize the relationship between adversarial examples and the DNN models. First, we introduce an attack, OPT-MARGIN, which generates adversarial examples robust to small perturbations. These examples successfully evade a defense that only considers a small ball around an input instance. Second, we analyze a larger neighborhood around input instances by looking at properties of surrounding decision boundaries, namely the distances to the boundaries and the adjacent classes. We find that the boundaries around these adversarial examples do not resemble the boundaries around benign examples. Finally, we show that, under scrutiny of the surrounding decision boundaries, our OPTMARGIN examples do not convincingly mimic benign examples. Although our experiments are limited to a few specific attacks, we hope these findings will motivate new, more evasive attacks and ultimately, effective defenses.
Summary of paper: The authors present a novel attack for generating adversarial examples, dubbed OptMargin, in which the authors attack an ensemble of classifiers created by classifying under small random L2 perturbations of the input. They compare this optimization method with two baselines on MNIST and CIFAR, and provide an analysis of the decision boundaries around their adversarial examples, the baselines' examples, and non-altered examples. Review summary: I think this paper is interesting. The novelty of the attack is a bit limited, since it seems it's just the straightforward attack against the region classification defense. The authors fail to include the most standard baseline attack, namely FGSM. The authors also miss the most standard defense, training with adversarial examples. As well, the considered attacks are in L2 norm, and the distortion is measured in L2, while the defenses measure distortion in L_\infty (see detailed comments for the significance of this if considering white-box defenses). The provided analysis is insightful, though the authors mostly fail to explain how this analysis could give future work the means to create new defenses or attacks. If the authors add FGSM to the batch of experiments (especially section 4.1) and address some of the objections, I will consider updating my score. A more detailed review follows. Detailed comments: - I think the novelty of the attack is not very strong. The authors essentially develop an attack targeted to the region classification defense. Designing an attack for a specific defense is very well established in the literature, and the fact that the attack fools this specific defense is not surprising. - I think the authors should make a claim on whether their proposed attack works only for defenses that are agnostic to the attack (such as PGD or region based), or for defenses that know this is a likely attack (see the following comment as well). If the authors want to make the second claim, training the network with adversarial examples coming from OptMargin is missing. - The attacks are all based in L2, in the sense that they measure perturbation in an L2 sense (as the paper evaluation does), while the defenses are all L_\infty based (since the region classifier method samples from a hypercube, and PGD uses an L_\infty perturbation limit). This is very problematic if the authors want to make claims about their attack being effective under defenses that know OptMargin is a possible attack. - The simplest, most standard baseline of all (FGSM) is missing. This is important to compare properly with previous work. - The fact that the attack OptMargin is based on L2 perturbations makes it very susceptible to a defense that backprops through the attack. This and/or the defense of training on adversarial examples is an important experiment for assessing the limitations of the attack. - I think the authors rush to conclude that "a small ball around a given input distance can be misleading". Whether balls are in L2, L_\infty, or another norm makes a big difference in defenses and attacks, given that they are only equivalent up to a multiplicative factor of sqrt(d), where d is the dimension of the space, and we are dealing with very high dimensional problems. I find the analysis made by the authors to be very simplistic. - The analysis of section 4.1 is interesting; it was insightful and, to the best of my knowledge, novel. Again I would ask the authors to make these plots for FGSM.
Since FGSM is known to be robust to small random perturbations, I would be surprised if, for a majority of random directions, the adversarial examples were brought back to the original class. - I think a bit more analysis is needed in section 4.2. Do the authors think that this distinguishability can lead to a defense that uses these statistics? If so, how? - I think the analysis of section 5 is fairly trivial. Distinguishability in high dimensions is an easy problem (as any GAN experiment confirms, see for example Arjovsky & Bottou, ICLR 2017), so it's not surprising or particularly insightful that one can train a classifier to easily recognize the boundaries. - Will the authors release code to reproduce all their experiments and methods? Minor comments: - The justification of why OptStrong is missing from Table 2 (last three sentences of 3.3) should be summarized in the caption of Table 2 (even just pointing to the text); otherwise a first reader will mistake this for the omission of a baseline. - I think it's important to state in Table 1 what amount of distortion is noticeable by a human. ========================================= After the rebuttal I've updated my score, due to the addition of FGSM as a baseline and a few clarifications. I now better understand the claims of the paper and their experiments towards them. I still think the novelty, significance of the claims, and protocol are perhaps borderline for publication (though I'm leaning towards acceptance), but I don't have enough experience in the field of adversarial examples to make my review with high confidence.
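For completeness, since FGSM comes up repeatedly in my comments above: the baseline I have in mind is the standard one-step method of Goodfellow et al. (2015). Below is a minimal numpy sketch, not the authors' code; loss_grad is a placeholder for whatever routine returns the gradient of the loss w.r.t. the input.

    import numpy as np

    def fgsm(x, y, loss_grad, eps=0.03, clip_min=0.0, clip_max=1.0):
        """One-step fast gradient sign method (an L_inf perturbation of size eps).

        x         : input image as a float array in [clip_min, clip_max]
        y         : true label
        loss_grad : callable returning dLoss/dx at (x, y); assumed to be provided
                    by the model / framework (placeholder, not the authors' code)
        """
        g = loss_grad(x, y)                        # gradient of the loss w.r.t. the input
        x_adv = x + eps * np.sign(g)               # step in the direction that increases the loss
        return np.clip(x_adv, clip_min, clip_max)  # keep the result a valid image

An iterated variant (BIM/PGD) simply repeats this step with a smaller step size and projects back onto the epsilon-ball after each step.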
iclr_2018_Hksj2WWAW
COMBINING SYMBOLIC EXPRESSIONS AND BLACK-BOX FUNCTION EVALUATIONS IN NEURAL PROGRAMS Neural programming involves training neural networks to learn programs, mathematics, or logic from data. Previous works have failed to achieve good generalization performance, especially on problems and programs with high complexity or on large domains. This is because they mostly rely either on black-box function evaluations that do not capture the structure of the program, or on detailed execution traces that are expensive to obtain, and hence the training data has poor coverage of the domain under consideration. We present a novel framework that utilizes black-box function evaluations, in conjunction with symbolic expressions that define relationships between the given functions. We employ tree LSTMs to incorporate the structure of the symbolic expression trees. We use tree encoding for numbers present in function evaluation data, based on their decimal representation. We present an evaluation benchmark for this task to demonstrate our proposed model combines symbolic reasoning and function evaluation in a fruitful manner, obtaining high accuracies in our experiments. Our framework generalizes significantly better to expressions of higher depth and is able to fill partial equations with valid completions.
Summary This paper presents a dataset of mathematical equations and applies TreeLSTMs to two tasks: verifying and completing mathematical equations. For these tasks, TreeLSTMs outperform TreeNNs and RNNs. In my opinion, the main contribution of this paper is this potentially useful dataset, as well as an interesting way of representing fixed-precision floats. However, the application of TreeNNs and TreeLSTMs is rather straightforward, so in my (subjective) view there are only a few insights salvageable for the ICLR community, and compared to Allamanis et al. (2017) this paper is a rather incremental extension. Strengths The authors present a new dataset for mathematical identities. The method for generating additional correct identities could be useful for future research in this area. I find the representation of fixed-precision floats presented in this paper intriguing. I believe this contribution should be emphasized more, as it allows the model to generalize to unseen numbers, and I am wondering whether the authors see some wider application of this representation for neural programming models. I liked the categorization of the related work. Weaknesses p2: It is mentioned that the framework is the first to combine symbolic expressions with black-box function evaluations, but I would argue that Neural Programmer-Interpreters (NPI; Reed & De Freitas) are already doing that (see Fig 1 in that paper where the execution trace is a symbolic expression and some expressions "Act(LEFT)" are black-box function applications directly changing the image). The differences to Allamanis et al. (2017) are not worked out well. For instance, the authors use the TreeNN model from that paper as a baseline but the EqNet model is not mentioned at all. The obvious question is whether EqNets can be applied to the two tasks (verifying and completing mathematical equations) and, if so, why this has not been done. The contribution regarding black-box function application is unclear to me. On page 6, it is unclear to me what is meant by "handles […] function evaluation expressions". As far as I understand, the TreeLSTM learns to predict the return value of function evaluation expressions in order to predict equality of equations, but this should be clarified. I find the connection of the proposed model and task to "neural programming" weak. For instance, as far as I understand there is no support for stateful programs. Furthermore, it would be interesting to hear how this work can be applied to existing programming languages such as Haskell. What are the limitations of the architecture? Could it learn to identify equality of two lists in Haskell? p6: The paragraph on baseline models is rather uninformative. TreeLSTMs have been shown to outperform Tree NNs in various prior work. The statement that "LSTM cell […] helps the model to have a better understanding of the underlying functions in the domain" is vague. LSTM cells compared to fully-connected layers in Tree NNs ameliorate vanishing and exploding gradients along paths in the tree (a sketch of the two node updates is given at the end of this review). Furthermore, I would like to see a qualitative analysis of the reasoning capabilities that are mentioned here. Did you observe any systematic differences in the ~4% of equations where the TreeLSTM fails to generalize (Table 3; first column)? Minor Comments Abstract: "Our framework generalizes significantly better" I think it would be good to already mention what this statement is in comparison to.
p1: "aim to solve tasks such as learn mathematical" -> "aim to solve tasks such as learning mathematical" p2: You could add a citation for Theano, Tensorflow and Mxnet. p2: Could you elaborate how equation completion is used in Mathematical Q&A? p3: Could you expand on "mathematical equation verification and completion […] has broader applicability" by maybe giving some concrete examples. p3 Eq. 5: What precision do you consider? Two digits? p3: "division because that they can" -> "division because they can" p4 Fig. 1: Is there a reason 1 is represented as 10^0 here? Do you need the distinction between 1 (the integer) and 1.0 (the float)? p5: "we include set of changes" -> "we include the set of changes" p5: In my view there is enough space to move appendix A to section 2. In addition, it would be great to see more examples of generated identities at this stage (including negative ones). p5: "We generate all possible equations (with high probability)" – what is probabilistic about this? p5: I don't understand why function evaluation results in identities of depth 2 and 3. Is it both or one of them? p6: The modules "symbol" and "number" are not shown in the figure. I assume they refer to projections using Wsymb and Wnum? p6: "tree structures neural networks" -> "tree structured neural networks" p6: A reference for the ADAM optimizer should be added. p6: Which method was used for optimizing these hyperparameters? If a grid search was used, what intervals were used? p7: "the superiority of Tree LSTM to Tree NN shows that is important to incorporate cells that have memory" is not a novel insight. p8: When you mention "you give this set of equations to the models look at the top k predictions" I assume you ranked the substituted equations by the probability that the respective model assigns to it? p8: Do you have an intuition why prediction function evaluations for "cos" seem to plateau certain points? Furthermore, it would be interesting to see what effect the choice of non-linearity on the output of the TreeLSTM has on how accurately it can learn to evaluate functions. For instance, one could replace the tanh with cos and might expect that the model has now an easy time to learn to evaluate cos(x). p8 Fig 4b; p9: Relating to the question regarding plateaus in the function evaluation: "in Figure 4b […] the top prediction (0.28) is the correct value for tan with precision 2, but even other predictions are quite close" – they are all the same and this bad, right? p9: "of the state-of-the-art neural reasoning systems" is very broad and in my opinion misleading too. First, there are other reasoning tasks (machine reading/Q&A, Visual Q&A, knowledge base inference etc.) too and it is not obvious how ideas from this paper translate to these domains. Second, for other tasks TreeLSTMs are likely not state-of-the-art (see for example models on the SQuAD leaderboard: https://rajpurkar.github.io/SQuAD-explorer/) . p9: "exploring recent neural models that explicitly use memory cells" – I think what you mean is models with addressable differentiable memory. # Update after the rebuttal Thank you for the in-depth response and clarifications. I am increasing my score by one point. I have looked at the revised paper and I strongly suggest that you add the clarifications and in particular comments regarding comparison to related work (NPI, EqNet etc) to the paper. Regarding Fig. 4b, I am still not sure why all scores are the same (0.9977) -- I assume this is not the desired behavior?
iclr_2018_HJhIM0xAW
LEARNING A NEURAL RESPONSE METRIC FOR RETINAL PROSTHESIS Retinal prostheses for treating incurable blindness are designed to electrically stimulate surviving retinal neurons, causing them to send artificial visual signals to the brain. However, electrical stimulation generally cannot precisely reproduce typical patterns of neural activity in the retina. Therefore, an electrical stimulus must be selected so as to produce a neural response as close as possible to the desired response. This requires a technique for computing the distance between a desired response and an achievable response that is meaningful in terms of the visual signal being conveyed. We propose a method to learn a metric on neural responses directly from recorded light responses of a population of retinal ganglion cells (RGCs) in the primate retina. The learned metric produces a measure of similarity of RGC population responses that accurately reflects the similarity of visual inputs. Using data from electrical stimulation experiments, we demonstrate that the learned metric could produce improvements in the performance of a retinal prosthesis.
* Summary of paper: The paper addresses the problem of optimizing metrics in the context of retinal prosthetics: Their goal is to learn a metric which assumes spike-patterns generated by the same stimulus to be more similar to each other than spike-patterns generated by different stimuli. They compare a conventional, quadratic metric to a neural-network based representation and a simple Hamming metric, and show that the neural-network based one achieves higher performance, but that the quadratic metric does not substantially beat the simple Hamming baseline. They subsequently evaluate the metric (unfortunately, only the quadratic metric) in two interesting applications involving electrical stimulation, with the goal of selecting stimulations which elicit spike-patterns which are maximally similar to spike-patterns evoked by particular stimuli. * Quality: Overall, the paper is of high quality. What puzzled me, however, is the fact that, in the applications using electrical stimulation in the paper (i.e. the applications targeted to retinal prosthetics, Secs 3.3 and 3.4), the authors do not actually use the well-performing neural-network based metric, but rather the quadratic metric, which is no better than the baseline Hamming metric? It would be valuable for them to comment on what additional challenges would arise by using the neural network instead, and whether they think they could be surmounted. * Clarity: The paper is overall clear, but specific aspects could be improved: First, it took me a while to understand (and it is not entirely clear to me) what the goal of the paper is, in particular outside the setting studied by the authors (in which there is a small number of stimuli to be distinguished). Second, while the paper does not claim to provide a new metric-learning approach, it would benefit from more clearly explaining if and how their approach relates to previous approaches to metric learning. Third, the paper is, in my view, overstating some of the implications. As an example, Figure 5 is titled 'Learned quadratic response metric gives better perception than using a Hamming metric.': there is no psychophysical evaluation of perception in the paper, and even the (probably hand-picked?) examples in the figure do not look amazing. * Originality: To the best of my knowledge, this is the first paper addressing the question of learning similarity metrics in the context of retinal prosthetics. Therefore, this specific paper and approach is certainly novel and original. From a machine-learning perspective, however, this seems like pretty standard metric learning with neural networks, and no attempt is made to either distinguish or relate their approach to prior work in this field (e.g. Chopra et al. 2005, Schroff et al. 2015 or Oh Song et al. 2016). In addition, there is a host of metrics and kernels which have been proposed for measuring similarity between spike trains (Victor-Purpura) -- while they might not have been developed in the context of prosthetics, they might still be relevant to this task, and it would have been useful to see a comparison of how well they do relative to a Hamming metric. The paper states this as a goal ("This measure should expand upon..."), but then never does so; why not? * Significance: The general question the authors are approaching (how to improve retinal prosthetics) is an extremely important one, both from a scientific and a societal perspective. How important is the specific advance presented in this paper?
The authors learn a metric for quantifying similarity between neural responses, and show that it performs better than a Hamming metric. It would be useful for the paper to comment on how they think this metric would be useful for retinal prosthetics. In a real prosthetic device, one will not be able to learn a metric, as the metric learning here requires access to multiple trials of visual stimulation data, neuron by neuron. Clearly, any progress on the way to retinal prosthetics is important and this approach might contribute to that. However, the current presentation of the manuscript gives a somewhat misleading picture of what has been achieved, and a more nuanced presentation would be important and appropriate. Overall, this is a nice paper which could be of interest to ICLR. Its strengths are that i) they identified a novel, interesting and potentially impactful problem that has not been worked on in machine learning before, ii) they provide a solution to it based on metric learning, and show that it performs better than non-learned metrics. Its limitations are that i) no novel machine-learning methodology is used (and relationship to prior work in machine learning is not clearly described), ii) comparisons with previously proposed similarity measures of spike trains are lacking, iii) the authors do not actually use their learned, network based metric, but the metric which performs no better than the baseline in their main results, and iv) it is not well explained how this improved metric could actually be used in the context of retinal prosthetics. Minor comments: - p.2 The authors write that the element-wise product is denoted by $A \bullet B = \Tr(A^{\intercal}) B$. This seems to be incorrect, as the r.h.s. corresponds to a scalar. - p.3 What exactly is meant by “mining”? - p.4 It would be useful to give an example of what is meant by “similarity learning”. - p.4 “Please the Appendix” -> “Please see the Appendix” - p.5 (Fig. 3) The abbreviation “AUC” is not defined. - p.5 (Fig. 3B) The figure giving 'recall' should have a line indicating perfect performance, for comparison. - Sec 3.3: How was the decoder obtained? - p.6 (Fig. 4) Would be useful to state that the column below 0 is the target. Or just replace “0” by “target”. - p.6 (3rd paragraph) The sentence “Figure 4A bottom left shows the spatial profile of the linear decoding 20ms prior to the target response.” is unclear. It took me a very long time to realize that "bottom left" meant "column 0, 'decoded stimulus'" row. It's also unclear why the authors chose to look at 20ms prior to the target response. - p.6 The text says RMS distance, but the Fig. 4B caption says MSE -- is this correct?
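For readers outside this subfield, the "quadratic metric" discussed throughout is, as I understand it, the standard Mahalanobis-style learned metric (the paper's exact parameterization may differ in details):

    d_M(r_1, r_2) \;=\; (r_1 - r_2)^{\top} M \,(r_1 - r_2), \qquad M = A^{\top} A \succeq 0,

with M (equivalently A) learned so that pairs of responses elicited by the same stimulus end up closer under d_M than pairs elicited by different stimuli, e.g. via a triplet-style hinge loss; the neural-network variant replaces this fixed quadratic form with a learned nonlinear embedding. This is exactly the setting of the metric-learning literature (Chopra et al. 2005; Schroff et al. 2015) referred to above, which is why a clearer positioning with respect to it would strengthen the paper.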
iclr_2018_SyGT_6yCZ
The quality of the features used in visual recognition is of fundamental importance for the overall system. For a long time, low-level hand-designed feature algorithms such as SIFT and HOG have obtained the best results on image recognition. Visual features have recently been extracted from trained convolutional neural networks. Despite the high-quality results, one of the main drawbacks of this approach, when compared with hand-designed features, is the training time required during the learning process. In this paper, we propose a simple and fast way to train supervised convolutional models for feature extraction while still maintaining their high quality. This methodology is evaluated on different datasets and compared with state-of-the-art approaches.
This paper deals with early stopping but the contributions are limited. This work would fit a workshop better as a preliminary result; furthermore, it is too short. A short review follows, section by section. Intro: The name SFC is misleading, as the method consists of stopping the training early with an optimized learning schedule. Furthermore, the work is not compared to the appropriate baselines. Proposal: The first motivation is not clear. The training time of the feature extractor has never been a problem for transfer learning tasks, for example: once it is trained, you can reuse the architecture in a wide range of tasks. Besides, the training time of a CNN on CIFAR10 or even ImageNet is now quite small (for reasonable architectures), which allows fast benchmarking. The second motivation, w.r.t. IB, seems interesting, but this should be empirically motivated (e.g. with figures) in subsection 2.1, and this is not done. Section 3 is quite long and could be compressed to improve the relevance of this experimental section. All the accuracies (unsup dict, unsup, etc.) on CIFAR10/CIFAR100 are reported from the paper (Oyallon & Mallat, 2015), ignoring 2-3 years of research that has led to new numerical results. Furthermore, this supervised technique is only compared to unsupervised or predefined methods, which is not fair, and the training time of the Scattering Transform is not reported, for example. Finally, extracting features is mainly useful on ImageNet (for realistic images) and this is not reported here. I believe re-thinking new learning rate schedules is interesting; however, I recommend the rejection of this paper.
iclr_2018_SkqV-XZRZ
Recurrent neural networks like long short-term memory (LSTM) are important architectures for sequential prediction tasks. LSTMs (and RNNs in general) model sequences along the forward time direction. Bidirectional LSTMs (Bi-LSTMs), which model sequences along both forward and backward directions, generally perform better at such tasks because they capture a richer representation of the data. In the training of Bi-LSTMs, the forward and backward paths are learned independently. We propose a variant of the Bi-LSTM architecture, which we call Variational Bi-LSTM, that creates a dependence between the two paths (during training, but which may be omitted during inference). Our model acts as a regularizer and encourages the two networks to inform each other in making their respective predictions using distinct information. We perform ablation studies to better understand the different components of our model and evaluate the method on various benchmarks, showing state-of-the-art performance.
*Quality* The paper is easy to parse, with clear diagrams and derivations at the start. The problem context is clearly stated, as is the proposed model. The improvements in terms of average log-likelihood are clear. The model does improve over state-of-the-art in some cases, but not all. Based on the presented findings, it is difficult to determine the quality of the learned models overall, since they are only evaluated in terms of average log likelihood. It is also difficult to determine whether the improvements are due to the model change, or some difference in how the models themselves were trained (particularly in the case of Z-Forcing, a closely related technique). I would like to see more exploration of this point, as the section titled “ablation studies” is short and does not sufficiently address the issue of what component of the model is contributing to the observed improvements in average log-likelihood. Hence, I have assigned a score of "4" for the following reasons: the quality of the generated models is unclear; the paper does not clearly distinguish itself from the closely-related Z-Forcing concept (published at NIPS 2017); and the reasons for the improvements shown in average log-likelihood are not explored sufficiently, that is, the ablation studies don't eliminate key parts of the model that could provide this information. More information on this decision is given in the remainder. *Clarity* A lack of generated samples in the Experimental Results section makes it difficult to evaluate the performance of the models; log-likelihood alone can be an inadequate measure of performance without some care in how it is calculated and interpreted (refer, e.g., to Theis et al. 2016, “A Note on the Evaluation of Generative Models”). There are some typos and organizational issues. For example, VAEs are reintroduced in the Related Works section, only to provide an explanation for an unrelated optimization challenge with the use of RNNs as encoders and decoders. I also find the motivations for the proposed model itself a little unclear. It seems unnatural to introduce a side-channel-cum-regularizer between a sequence moving forward in time and the same sequence moving backwards, through a variational distribution. In the introduction, improved regularization for LSTM models is cited as a primary motivation for introducing and learning two approximate distributions for latent variables between the forward and backward paths of a bi-LSTM. Is there a serious need for new regularization in such models? The need for this particular regularization choice is not particularly clear based on this explanation, nor are the improvements state-of-the-art in all cases. This weakens a possible theoretical contribution of the paper. *Originality* The proposed modification appears to amount to a regularizer for bi-LSTMs which bears close similarity to Z-Forcing (cited in the paper). I recommend a more careful comparison between the two methods. Without such a comparison, they are a little hard to distinguish, and the originality of this paper is hard to evaluate. Both appear to employ the same core idea of regularizing an LSTM using a learned variational distributions. The differences *seem* to be in the small details, and these details appear to provide better performance in terms of average log-likelihood on all tasks compared to Z-Forcing--but, crucially, not compared to other models in all cases.
iclr_2018_S1XXq6lRW
Labeled text classification datasets are typically only available in a few select languages. In order to train a model for e.g. news categorization in a language L_t without a suitable text classification dataset there are two options. The first option is to create a new labeled dataset by hand, and the second option is to transfer label information from an existing labeled dataset in a source language L_s to the target language L_t. In this paper we propose a method for sharing label information across languages by means of a language independent text encoder. The encoder will give almost identical representations to multilingual versions of the same text. This means that labeled data in one language can be used to train a classifier that works for the rest of the languages. The encoder is trained independently of any concrete classification task and can therefore subsequently be used for any classification task. We show that it is possible to obtain good performance even in the case where only a comparable corpus of texts is available.
This paper addresses the problem of learning a cross-language text categorizer with no labelled information in the target language. The suggested solution relies on learning cross-lingual embeddings, and training a classifier using labelled data in the source language only. The idea of using cross-lingual or multilingual representations to seamlessly handle documents across languages is not terribly novel, as it has been used in multilingual categorization or semantic similarity for some time. This contribution however proposes a clean separation of the multilingual encoder and classifier, as well as a good (but long) section on related prior art. One concern is that the modelling section stays fairly high level and is hardly sufficient, for example to re-implement the models. Many design decisions (e.g. #layers, #units) are not justified. They likely result from preliminary experiments; if that is the case, it should be said. The main concern is that the experiments could be greatly improved. Given the extensive related work section, it is odd that no alternate model is compared to. The details on the experiments are also scarce. For example, are all accuracy results computed on the same 8k test set? If so, this should be clearly stated. Why are models tested on small subsets of the available data? You have 493k Italian documents, yet the largest model uses 158k... It is unclear where many such decisions come from -- e.g. Fig 4b misses results for 1000 and 1250 dimensions and Fig 4b has nothing between 68k and 137k, precisely where a crossover happens. In short, it feels like the paper would greatly improve with a clearer modeling description and more careful experimental design. Misc: - Clarify early on what "samples" are in your categorization context. - Given the data set, why use a single-label multiclass setup, rather than multilabel? - Table 1 caption claims an average of 2.3 articles per topic, yet for 200 topics you have 500k to 1.5M articles? - Clarify the use of the first 200 words in each article vs. snippets - Put overall caption in Figs 2-4 on top of (a), (b), otherwise references like Fig 4b are unclear.
iclr_2018_HJGv1Z-AW
EMERGENCE OF LINGUISTIC COMMUNICATION FROM REFERENTIAL GAMES WITH SYMBOLIC AND PIXEL INPUT The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured.
-------------- Summary: -------------- This paper presents a series of experiments on language emergence through referential games between two agents. They ground these experiments both in fully-specified symbolic worlds and in raw, entangled visual observations of simple synthetic scenes. They provide rich analysis of the emergent languages the agents produce under different experimental conditions. This analysis (especially on raw pixel images) makes up the primary contribution of this work. -------------- Evaluation: -------------- Overall I think the paper makes some interesting contributions with respect to the line of recent 'language emergence' papers. The authors provide novel analysis of the learned languages and perceptual system across a number of environmental settings, coming to the (perhaps uncontroversial) finding that varying the environment and restrictions on language result in variations in the learned communication protocols. In the context of existing literature, the novelty of this work is somewhat limited -- consisting primarily of the extension of multi-agent reference games to raw-pixel inputs. While this is a non-trivial extension, other works have demonstrated language learning in similar referring-expression contexts (essentially modeling only the listener model [Hermann et al. 2017]). I have a number of requests for clarification in the weaknesses section which I think would improve my understanding of this work and result in a stronger submission if included by the authors. -------------- Strengths: -------------- - Clear writing and document structure. - Extensive experimental setting tweaks which ablate the information and regularity available to the agents. The discussion of the resulting languages is appropriate and provides some interesting insights. - A number of novel analyses are presented to evaluate the learned languages and perceptual systems. -------------- Weaknesses: -------------- - How stable are the reported trends / languages across multiple runs within the same experimental setting? The variance of REINFORCE policy gradients (especially without a baseline) plus the general stochasticity of SGD on randomly initialized networks leads me to believe that multiple training runs of these agents might result in significantly different codes / performance. I am interested in hearing the authors' experiences in this regard and if multiple runs present similar quantitative and qualitative results. I admit that expecting identical codes is unrealistic, but the form of the codes (i.e. primarily encoding position) might be consistent even if the individual mappings are not. - I don't recall seeing descriptions of the inference-time procedure used to evaluate training / test accuracy. I will assume argmax decoding for both speaker and listener. Please clarify or let me know if I missed something. - There is ambiguity in how the "protocol size" metric is computed. In Table 1, it is defined as 'the effective number of unique message used'. This comes back to my question about decoding I suppose, but does this count the 'inference-time' messages or those produced during training? Furthermore, Table 2 redefines "protocol size" as the percentage of novel messages. I assume this is an editing error given the values presented and take these columns as counts. It also seems "protocol size" is replaced with the term "lexicon" from 4.1 onward. - I'm surprised by how well the agents generalize in the raw pixel data experiments.
In fact, it seems that across all games the test accuracy remains very close to the train accuracy. Given the dataset is created by taking all combinations of color / shape and then sampling 100 location / floor color variations, it is unlikely that a shape / color combo has not been seen in training, such that the only novel variations are likely location and floor color. However, taking Game A as an example, the probe classifiers are relatively poor at these attributes -- indicating the speaker's representation is not capturing these attributes well. Then how do the agents effectively differentiate so well between 20 images leveraging primarily color and shape? I think some additional analysis of this setting might shed some light on this issue. One thought is to compute upper bounds based on ground truth attributes. Consider a model which knows shape perfectly, but cannot predict other attributes beyond chance. To compute the performance of such a model, you could take the candidate set, remove any instances not matching the ground truth shape, and then pick randomly from the remaining instances. Something similar could be repeated for all attributes independently as well as their combinations -- obviously culminating in 100% accuracy given all 4 (a minimal sketch of this computation is given at the end of this review). It could be that by dataset construction, object location and shape are sufficient to achieve high accuracy because the odds of seeing the same shape at the same location (but different color) are very low. Given these are operations on annotations and don't require time-consuming model training, I hope to see this analysis in the rebuttal to put the results into appropriate context. - What is random chance for the position and floor color probe classifiers? I don't think it is mentioned how many locations / floor colors are used in generation. - Relatively minor complaint: Both agents are trained via the REINFORCE policy gradient update rule; however, the listener agent makes a fairly standard classification decision and could be trained with a standard cross-entropy loss. That is to say, the listener policy need not make intermediate discrete policy decisions. This decision to withhold available supervision is not discussed in the paper (as far as I noticed); could the authors speak to this point? -------------- Curiosities: -------------- - I got the impression from the results (specifically the lack of discussion about message length) that in these experiments agents always issued full length messages even though they did not need to do so. If true, could the authors give some intuition as to why? If untrue, what sort of distribution of lengths do you observe? - There is no long term planning involved in this problem, so why use reinforcement learning over some sort of differentiable sampler? With some re-parameterization (i.e. Gumbel-Softmax), this model could be end-to-end differentiable. -------------- Minor errors: -------------- [2.2 paragraph 1] LSTM citation should not be in inline form. [3 paragraph 1] 'Note that these representations do care some' -> carry [3.3.1 last paragraph] 'still able comprehend' --> to ------- Edit ------- Updating rating from 6 to 7.
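Below is a minimal sketch of the annotation-only upper bound suggested in the weaknesses section above. The field names are hypothetical; the only assumption is that each episode records the target object, the candidate set (containing the target), and ground-truth attributes for every object.

    def attribute_upper_bound(episodes, known_attrs):
        """Expected accuracy of an oracle that knows only `known_attrs` perfectly.

        episodes    : list of dicts with keys 'target' and 'candidates'; each object
                      is a dict of ground-truth attributes, e.g.
                      {'shape': 'box', 'color': 'red', 'position': 3, 'floor': 'grey'}
        known_attrs : attributes the oracle resolves perfectly, e.g. ['shape'] or
                      ['shape', 'position']
        """
        total = 0.0
        for ep in episodes:
            target = ep['target']
            # candidates indistinguishable from the target given only the known attributes
            matching = [c for c in ep['candidates']
                        if all(c[a] == target[a] for a in known_attrs)]
            # the target itself always matches, so guessing uniformly among `matching`
            # is correct with probability 1/len(matching)
            total += 1.0 / len(matching)
        return total / len(episodes)

Running this for each attribute and each pair of attributes would show whether, e.g., shape plus position alone already explains the reported accuracy on 20 candidates.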
iclr_2018_SJa1Nk10b
ANYTIME NEURAL NETWORK: A VERSATILE TRADE-OFF BETWEEN COMPUTATION AND ACCURACY We present an approach for anytime predictions in deep neural networks (DNNs). For each test sample, an anytime predictor produces a coarse result quickly, and then continues to refine it until the test-time computational budget is depleted. Such predictors can address the growing computational problem of DNNs by automatically adjusting to varying test-time budgets. In this work, we study a general augmentation to feed-forward networks to form anytime neural networks (ANNs) via auxiliary predictions and losses. Specifically, we point out a blindspot in recent studies in such ANNs: the importance of high final accuracy. In fact, we show on multiple recognition data-sets and architectures that by having near-optimal final predictions in small anytime models, we can effectively double the speed of large ones to reach corresponding accuracy level. We achieve such speed-up with simple weighting of anytime losses that oscillate during training. We also assemble a sequence of exponentially deepening ANNs, to achieve both theoretically and practically near-optimal anytime results at any budget, at the cost of a constant fraction of additional consumed budget.
1. Paper Summary This paper adds, at every layer of a residual network, a separate network that performs classification. They minimize the loss of every classifier using two proposed weighting schemes. They also ensemble this model. 2. High level paper The organization of this paper is a bit confusing. Two weighting schemes are introduced in Section 3.1, then the ensemble model is described in Section 3.2, and then the weighting schemes are justified in Section 4.1. Overall, this method is essentially a cascade where each cascade classifier is a residual block. Every input is passed through as many stages as possible until the budget is reached. While this model is likely quite useful in industrial settings, I don't think the model itself is wholly original. The authors have done extensive experiments evaluating their method in different settings. I would have liked to see a comparison with at least one other anytime method. I think it is slightly unfair to say that you are comparing with Xie & Tu, 2015 and Huang et al., 2017 just because they use the CONSTANT weighting schemes. 3. High level technical I have a few concerns: - Why does AANN+LINEAR nearly match the accuracy of EANN+SIEVE near 3e9 FLOPS in Figure 4b but EANN+LINEAR does not in Figure 4a? Shouldn't EANN+LINEAR be strictly better than AANN+LINEAR? - Why do the authors choose these specific weighting schemes? Section 4.1 is devoted to explaining this, but it is still unclear to me. They talk about there being correlation between the predictors near the end of the model, so they don't want to distribute weight near the final predictors, but this general observation doesn't obviously lead to these weighting schemes; they still seem a bit ad hoc. A few other comments: - Figure 3b seems to contain strictly less information than Figure 4a; I would remove Figure 3b and draw lines showing the speedup you get for one or two accuracy levels. Questions: - Section 3.1: "Such an ideal θ* does not exist in general and often does not exist in practice." Why is this the case? - Section 3.1: " In particular, spreading weights evenly as in (Lee et al., 2015) keeps all i away from their possible respective minimum" Why is this true? - Section 3.1: "Since we will evaluate near depth ⌊3L/4⌋, and it is the center of L/2 low-weight layers, we increase it weight by 1/8." I am completely lost here; why do you do this? 4. Review summary Ultimately, because the model itself resembles previous cascade models, the selected weightings have little justification, and there isn't a comparison with another anytime method, I think this paper isn't yet ready for acceptance at ICLR.
iclr_2018_Sy21R9JAW
TOWARDS BETTER UNDERSTANDING OF GRADIENT-BASED ATTRIBUTION METHODS FOR DEEP NEURAL NETWORKS Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work, we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n, and test the gradient-based attribution methods alongside a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.
The paper summarizes and compares some of the current explanation techniques for deep neural networks that rely on the redistribution of relevance / contribution values from the output to the input space. The main contributions are the introduction of a unified framework that expresses 4 common attribution techniques (Gradient * Input, Integrated Gradient, eps-LRP and DeepLIFT) in a similar way as modified gradient functions and the definition of a new evaluation measure ('sensitivity n') that generalizes the earlier defined properties of 'completeness' and 'summation to delta'. The unified framework is very helpful since it points out equivalences between the methods and makes the implementation of eps-LRP and DeepLIFT substantially easier on modern frameworks. However, as correctly stated by the authors, some of the unification (e.g. the relation between LRP and Gradient*Input) has already been mentioned in prior work. Sensitivity-n as a measure tries to tackle the difficulty of estimating the importance of features that can be seen either separately or in combination. While the measure shows interesting trends towards a linear behaviour for simpler methods, it does not persuade me as a measure of how well the relevance attribution method mimics the decision making process and does not really point out substantial differences between the different methods. Furthermore, the authors could comment on the relation between sensitivity-n and region perturbation techniques (Samek et al., IEEE TNNLS, 2017). Sensitivity-n seems to be an extension of the region perturbation idea to me. It would be interesting to see the relation between the "unified" gradient-based explanation methods and approaches (e.g. Saliency maps, alpha-beta LRP, Deep Taylor, Deconvolution Networks, Grad-CAM, Guided Backprop ...) which do not fit into the unification framework. It's good that the authors mention these works; still, it would be great to see more discussion on the advantages/disadvantages, because these methods may have some nice theoretical properties (see e.g. the discussion on gradient vs. decomposition techniques in Montavon et al., Digital Signal Processing, 2017) which cannot be incorporated into the unified framework.
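For concreteness, the two attributions that anchor the unified framework have simple standard forms (I am not reproducing the paper's notation):

    \text{Gradient} \odot \text{Input:} \qquad R_i(x) \;=\; x_i \cdot \frac{\partial F(x)}{\partial x_i}

    \text{Integrated Gradients:} \qquad R_i(x) \;=\; (x_i - \bar{x}_i) \int_0^1 \frac{\partial F\big(\bar{x} + \alpha\,(x - \bar{x})\big)}{\partial x_i}\, d\alpha

where F is the output of interest and \bar{x} a baseline input; eps-LRP and DeepLIFT are then recovered by replacing the instantaneous gradient with a suitably modified gradient of the nonlinearities, which is the reformulation the unified framework rests on. A short discussion of why alpha-beta LRP or Deep Taylor cannot be written in this modified-gradient form would make the comparison to those methods more concrete.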
iclr_2018_B16_iGWCW
In this paper, a deep boosting algorithm is developed to learn a more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs (base experts) with diverse capabilities, e.g., these base deep CNNs are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities. Our experimental results have demonstrated that our deep boosting algorithm can significantly improve the accuracy rates on large-scale visual recognition.
This paper considers a version of boosting where in each iteration only class weights are updated rather than sample weights, and applies that to a series of CNNs for object recognition tasks. While the paper is comprehensive in its derivations (very similar to the original boosting papers, and in many cases a one-to-one translation of derivations), it fails to address a few fundamental questions: - AdaBoost optimises an exponential loss function via functional gradient descent in the space of weak learners. It's not clear what kind of loss function is really being optimised here. It feels like it should be the same, but with the tweaks applied to fix weights across all samples of a class, it is not clear what really gets optimised at the end. - While the motivation is that classes have different complexities to learn and hence you might want each base model to focus on different classes, it is not clear why this method should be better than normal boosting: if a class is more difficult, it's expected that its samples will have higher weights and hence the next base model will focus more on them. And crudely speaking, you can think of a class weight as the expectation of its sample weights, and you will end up in a similar setup (a small sketch at the end of this review illustrates this). - The choice of using large CNNs as base models for boosting isn't appealing in practical terms: such models allow only a few boosting iterations, and hence you can't achieve the kind of convergence that is often the target of boosting models with many base learners. - Experimentally, the paper would benefit from better comparisons and studies: 1) state-of-the-art methods haven't been compared against (e.g. the ImageNet experiment compares to a 2-year-old method); 2) comparisons to using normal AdaBoost on more complex methods haven't been studied (other than on MNIST); 3) comparison to simply ensembling with random initialisations. Other comments: - The paper would benefit from writing improvements to make it read better. - "simply use the weighted error function": I don't think this is correct; the AdaBoost loss function is an exponential loss. When you train the base learners, their loss functions will become weighted. - "to replace the softmax error function (used in deep learning)": I don't think there is a "softmax error function"
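To make the class-weight remark above concrete, here is a minimal numpy sketch (standard discrete AdaBoost update; all names are illustrative and this is not the paper's algorithm) showing how per-class weights already emerge as averages of per-sample weights:

    import numpy as np

    def adaboost_round(w, y, y_pred):
        """One AdaBoost round: update sample weights given a weak learner's predictions."""
        mistakes = (y_pred != y).astype(float)
        eps = np.sum(w * mistakes) / np.sum(w)   # weighted error of the weak learner
        alpha = 0.5 * np.log((1.0 - eps) / eps)  # weak learner's vote
        w = w * np.exp(alpha * mistakes)         # up-weight misclassified samples
        return w / np.sum(w)                     # renormalise

    def class_weights(w, y):
        """Average sample weight per class; harder classes accumulate larger averages."""
        return {c: float(w[y == c].mean()) for c in np.unique(y)}

So a scheme that only updates class weights can be read as tracking the expectation of the per-sample weights within each class, which is why a direct comparison against plain per-sample AdaBoost with the same base CNNs is needed.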
iclr_2018_SyjsLqxR-
Classifiers such as deep neural networks have been shown to be vulnerable against adversarial perturbations on problems with high-dimensional input space. While adversarial training improves the robustness of classifiers against such adversarial perturbations, it leaves classifiers sensitive to them on a non-negligible fraction of the inputs. We argue that there are two different kinds of adversarial perturbations: shared perturbations which fool a classifier on many inputs and singular perturbations which only fool the classifier on a small fraction of the data. We find that adversarial training increases the robustness of classifiers against shared perturbations. Moreover, it is particularly effective in removing universal perturbations, which can be seen as an extreme form of shared perturbations. Unfortunately, adversarial training does not consistently increase the robustness against singular perturbations on unseen inputs. However, we find that adversarial training decreases robustness of the remaining perturbations against image transformations such as changes to contrast and brightness or Gaussian blurring. It thus makes successful attacks on the classifier in the physical world less likely. Finally, we show that even singular perturbations can be easily detected and must thus exhibit generalizable patterns even though the perturbations are specific for certain inputs.
Summary: This paper empirically studies adversarial perturbations dx and what the effects are of adversarial training (AT) with respect to shared (dx fools the classifier for many x) and singular (only for a single x) perturbations. Experiments use a (previously published) iterative fast gradient sign method and a ResNet on CIFAR. The authors conclude that in this experimental setting: - AT seems to defend models against shared dx's. - This is visible on universal perturbations, which become less effective as more AT is applied. - AT decreases the effectiveness of adversarial perturbations under image transformations, e.g. AT decreases the number of adversarial perturbations that fool both an input x and the same x with a contrast change. - Singular perturbations are easily detected by a detector model, as such perturbations don't change much when applying AT. Pro: - Paper addresses an important problem: qualitative / quantitative understanding of the behavior of adversarial perturbations is still lacking. - The visualizations of universal perturbations as they change during AT are nice. - The basic observation wrt the behavior of AT is clearly communicated. Con: - The experiments performed are interesting directions, although unfocused and rather limited in scope. For instance, does the same phenomenon happen for different datasets? Different models? - What happens when we use adversarial attacks different from FGSM? Do we get similar results? - The paper lacks a more in-depth theoretical analysis. Is there a principled reason AT+FGSM defends against universal perturbations? Overall: - As is, it seems to me the paper lacks a significant central message (due to limited and unfocused experiments) or significant new theoretical insight into the effect of AT. A number of questions addressed are interesting starting points towards a deeper understanding of *how* the observations can be explained and more rigorous empirical investigations. Detailed: -
iclr_2018_rk3mjYRp-
Policy gradient methods often achieve better performance when the change in policy is limited to a small Kullback-Leibler divergence. We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region). This is done in the discrete and continuous multi-armed bandit settings with entropy regularisation. We show that in the small steps limit with respect to the Wasserstein distance W_2, policy dynamics are governed by the heat equation, following the Jordan-Kinderlehrer-Otto result. This means that policies undergo diffusion and advection, concentrating near actions with high reward. This helps elucidate the nature of convergence in the probability matching setup, and provides justification for empirical practices such as Gaussian policy priors and additive gradient noise.
The paper ‘Diffusing policies: Towards Wasserstein policy gradient flows’ explores the connections between reinforcement learning and the theory of quadratic optimal transport (i.e. using the Wasserstein_2 as a regularizer of an iterative problem that converges toward an optimal policy). Following a classical result from Jordan-Kinderlehrer-Otto, they show that the policy dynamics are governed by the heat equation, which translates into an advection-diffusion scheme. This allows one to draw insights on the convergence of empirical practices in the field. The paper is clear and well-written, and provides a comprehensive survey of known results in the field of Optimal Transport. The insights on empirical strategies such as additive gradient noise are very interesting and help in understanding why they work in practical settings. That being said, most of the results presented in the paper are already known (e.g. from the book of Santambrogio or the work of G. Peyré on entropic Wasserstein gradient flows) and it is not exactly clear what the original contributions of the paper are. The fact that the objective is to learn policies has little to no impact on the derivations of calculus. It clearly suggests that the entropy regularized Wasserstein_2 distance should be used in numerical experiments but this point is not supported by experimental results. Their direct application is rapidly ruled out by highlighting the computational complexity of solving such gradient flows, but in the light of recent papers (see the work of Genevay https://arxiv.org/abs/1706.00292 or another paper submitted to ICLR on large scale optimal transport https://openreview.net/forum?id=B1zlp1bRW) numerical applications should be tractable. For these reasons I feel that the paper would clearly be more interesting for practitioners (and maybe to some extent for the audience of ICLR) if numerical applications of the presented theory were discussed or sketched in classical reinforcement learning settings. Minor comments: - in Equation (10) why is there a ‘d’ in front of the coupling \gamma? - in Section 4.5, please provide references for why numerical estimators of gradients of Wasserstein distances are biased.
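For readers not familiar with it, the JKO result referred to above is the following standard statement (textbook form, e.g. Santambrogio's book; this is not the paper's notation): the minimizing-movement scheme

    \rho_{k+1} \;\in\; \arg\min_{\rho}\; \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) \;+\; \int V(x)\, d\rho(x) \;+\; \int \rho(x)\, \log \rho(x)\, dx

converges, as the step size \tau \to 0, to the Fokker-Planck (advection-diffusion) equation \partial_t \rho = \nabla \cdot (\rho \nabla V) + \Delta \rho. As I read the paper, its contribution is essentially to interpret the entropy-regularized policy objective as the free energy above, with the potential V given by the negative reward; even a small bandit experiment implementing one proximal W_2 step of this kind would substantiate the claims.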
iclr_2018_Sy2ogebAW
UNSUPERVISED NEURAL MACHINE TRANSLATION In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs. There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal. In this work, we completely remove the need of parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora. Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation. Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French → English and German → English translation. The model can also profit from small parallel corpora, and attains 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively. Our implementation is released as an open source project.
unsupervised neural machine translation This is an interesting paper on unsupervised MT. It trains a standard architecture using: 1) word embeddings in a shared embedding space, learned using a recent approach that works with only tens of bilingual word pairs. 2) An encoder-decoder trained using only monolingual data (should cite http://www.statmt.org/wmt17/pdf/WMT15.pdf). Training uses a “denoising” method which is not new: it uses the same idea as contrastive estimation (http://www.aclweb.org/anthology/P05-1044, a well-known method which should be cited). 3) Backtranslation. Although none of these ideas is new, they haven’t been combined in this way before, and that’s what’s novel here. The paper is essentially a neat application of (1), and is an empirical/systems paper. It’s essentially a proof-of-concept that it’s possible to get anything at all using no parallel data. That’s surprising and interesting, but I learned very little else from it. The paper reads as preliminary and rushed, and I had difficulty answering some basic questions: * In Table (1), I’m slightly puzzled by why 5 is better than 6, and this may be because I’m confused about what 6 represents. It would be natural to compare 5 with a system trained on 100K parallel text, since the systems would then (effectively) differ only in that 5 also exploits additional monolingual data. But the text suggests that 6 is trained on much more than 100K parallel sentences; that is, it differs in at least two conditions (amount of parallel text and use of monolingual text). Since this paper’s primary contribution is empirical, this comparison should be done in a carefully controlled way, varying each of these elements in turn. * I’m very confused by the comment on p. 8 that “the modifications introduced by our proposal are also limiting” to the “comparable supervised NMT system”. According to the paper, the architecture of the system is unchanged, so why would this be the case? This comment makes it seem like something else has been changed in the baseline, which in turn makes it somewhat hard to accept the results here. Comment: * The qualitative analysis is not really an analysis: it’s just a few cherry-picked examples and some vague observations. While it is useful to see that the system does indeed generate nontrivial content in these cases, this doesn’t give us further insight into what the system does well or poorly outside these examples. The BLEU scores suggest that it also produces many low-quality translations. What is different about these particular examples? (Aside: since the cross-lingual embedding method is trained on numerals, should we be concerned that the system fails at translating numerals?) Questions: * Contrastive estimation considers other neighborhood functions (“random noise” in the parlance of this paper), and it’s natural to wonder what would happen if this paper also used these or other neighborhood functions. More importantly, I suspect that the neighborhood functions are important: when translating between Indo-European languages as in these experiments, local swaps are reasonable; but in translating between two different language families (as would often be the case in the motivating low-resource scenario that the paper does not actually test), it seems likely that other neighborhood functions would be important, since structural differences would be much larger.
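To make the neighborhood-function point above concrete, here is a minimal sketch of the kind of noise model typically used for this denoising objective (word drops plus bounded local swaps). This is an illustration of the general idea in my own code, not the paper's implementation, and the probabilities and window size are made up.

    import random

    def noisy(sentence, drop_prob=0.1, swap_window=3):
        # Word dropout: drop each token with a small probability,
        # keeping at least one token so the example is not empty.
        kept = [w for w in sentence if random.random() > drop_prob] or sentence[:1]
        # Bounded local reordering: jitter each position by a random offset
        # in [0, swap_window) and re-sort, so tokens only move a few slots.
        keys = [i + random.uniform(0, swap_window) for i in range(len(kept))]
        return [w for _, w in sorted(zip(keys, kept))]

    print(noisy("the cat sat on the mat".split()))

Other neighborhood functions (in the contrastive-estimation sense) would simply replace this function, e.g. with larger reorderings for language pairs whose word orders differ more.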
Presentational comments (these don’t affect my evaluation, they’re mostly observations but they contribute to a general feeling that the paper is rushed and preliminary): * BPE does not “learn”, it’s entirely deterministic. * This paper is at best tangentially related to decipherment. Decipherment operates under two quite different assumptions: there is no training data for the source language ciphertext, only the ciphertext itself (which is often very small); and the replacement function is deterministic rather than probabilistic (and often monotonic). The Dou and Knight papers are interesting, but they’re an adaptation of ideas rather than decipherment per se. Since none of those ideas are used here this feels like hand-waving. * Future work is vague: “we would like to detect and mitigate the specific causes…” “we also think that a better handling of rare words…” That’s great, but how will you do these things? Do you have specific reasons to think this, or ideas on how to approach them? Otherwise this is just hand-waving.
iclr_2018_BJJLHbb0-
Published as a conference paper at ICLR 2018 DEEP AUTOENCODING GAUSSIAN MIXTURE MODEL FOR UNSUPERVISED ANOMALY DETECTION Unsupervised anomaly detection on multi- or high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score.
The paper presents a new technique for anomaly detection where the dimension reduction and the density estimation steps are jointly optimized. The paper is rigorous and ideas are clearly stated. The idea of constraining the dimension reduction to fit a certain model, here a GMM, is relevant, and the paper provides a thorough comparison with recent state-of-the-art methods. My main concern is that the method is called unsupervised, but it uses the class information in training and also in evaluation. I'm also not convinced of how well the Gaussian model fits the low-dimensional representation and how well a neural network can compute the GMM mixture memberships. 1. The framework uses the class information, i.e., “only data samples from the normal class are used for training”, but it is still considered unsupervised. Also, the anomaly detection in the evaluation step is based on a threshold which depends on the percentage of known anomalies, i.e., a priori information. I would like to see a plot of the sample energy as a function of the number of data points. Is there an elbow that indicates the threshold cut? Better yet would be to use methods like Local Outlier Factor (LOF) (Breunig et al., 2000 – LOF: Identifying Density-Based Local Outliers) to detect the outliers (these methods also have parameters to tune, sure, but using the known percentage of anomalies to find the threshold is not relevant in a purely unsupervised context when we don't know how many anomalies are in the data). 2. Is there a theoretical justification for computing the mixture memberships for the GMM using a neural network? 3. How do the regularization parameters \lambda_1 and \lambda_2 influence the results? 4. The idea of jointly optimizing the dimension reduction and the clustering steps was used before neural nets (e.g., Yang et al., 2014 - Unsupervised Dimensionality Reduction for Gaussian Mixture Model). Those approaches should at least be discussed in the related work, if not compared against. 5. The authors state that estimating the mixture memberships with a neural network for the GMM in the estimation network, instead of using the standard EM algorithm, works better. Could you provide a comparison with EM? 6. In the newly constructed space that consists of both the extracted features and the representation error, is a Gaussian model truly relevant? Does it describe the new space well? Do you normalize the features (the output of the dimension reduction and the representation error are quite different)? Fig. 3a doesn't seem to show that the output is a clear mixture of Gaussians. 7. The setup of the KDDCup experiment seems a little bit weird, where the normal samples and anomalies are reversed (because of the percentages), the model is trained only on anomalies, and it detects normal samples as anomalies ... I'm not convinced that it is the best example, especially as it is the one with significantly better results, i.e., scores of ~0.9 vs. ~0.4/0.5 for the other datasets. 8. The authors mention that “we can clearly see from Fig. 3a that DAGMM is able to well separate ...” - it is not clear to me; it does look better than the other ones, but the separation is not clear. If there is a clear separation from a different view, show that one instead. We don't need the same view for all methods. 9. In the experiments the reduced dimension used is equal to 1 for two of the experiments and 2 for one of them. This seems very drastic! Minor comments: 1. Fig. 1: what dimension reduction did you use? Add axis labels. 2.
“DAGMM preserves the key information of an input sample” - what does key information mean? 3. In Fig. 3, when plotting the results for KDDCup, I would have liked to see results for the best 4 methods from Table 1; OC-SVM performs better than PAE. Also, DSEBM-e and DSEBM-r seem to perform very well when looking at the three measures combined. They are the best in terms of precision. 4. Is the error in Table 2 averaged over multiple runs? If yes, how many? Quality – The paper is thoroughly written, and the ideas are clearly presented. It can be further improved as mentioned in the comments. Clarity – The paper is very well written with clear statements, a pleasure to read. Originality – Fairly original, but it still needs some work to justify it better. Significance – Constraining the dimension reduction to fit a certain model is a relevant topic, but I'm not convinced of how well the Gaussian model fits the low-dimensional representation and how well a neural network can compute the GMM mixture memberships.
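For reference, the "sample energy" referred to above is, as I understand the model, the negative log-density of the latent code z under the estimated mixture (my notation, and the exact normalization may differ from the paper):

    E(z) = -\log \sum_{k=1}^{K} \hat{\phi}_k \, \frac{\exp\left(-\frac{1}{2}(z-\hat{\mu}_k)^{\top}\hat{\Sigma}_k^{-1}(z-\hat{\mu}_k)\right)}{\sqrt{|2\pi\hat{\Sigma}_k|}},

where the mixture weights \hat{\phi}_k, means \hat{\mu}_k, and covariances \hat{\Sigma}_k are computed from the soft memberships predicted by the estimation network, and samples whose energy exceeds a threshold are flagged as anomalies. The plot requested in point 1 would simply show this quantity, sorted, across the data points.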
iclr_2018_rkTS8lZAb
Published as a conference paper at ICLR 2018 BOUNDARY-SEEKING GENERATIVE ADVERSARIAL NETWORKS Generative adversarial networks (GANs, Goodfellow et al., 2014) are a learning framework that rely on training a discriminator to estimate a measure of difference between a target and generated distributions. GANs, as normally formulated, rely on the generated samples being completely differentiable w.r.t. the generative parameters, and thus do not work for discrete data. We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator. The importance weights have a strong connection to the decision boundary of the discriminator, and we call our method boundary-seeking GANs (BGANs). We demonstrate the effectiveness of the proposed algorithm with discrete image and character-based natural language generation. In addition, the boundary-seeking objective extends to continuous data, which can be used to improve stability of training, and we demonstrate this on Celeba, Large-scale Scene Understanding (LSUN) bedrooms, and Imagenet without conditioning.
Thanks for the feedback and for clarifying 1) the algorithm and the assumptions in the multivariate case, 2) the comparison to RL-based methods, and 3) the connection to estimating importance sampling weights using the GAN discriminator. I think the paper's contribution is now more clear and strengthened with additional convincing experiments, and I am increasing my score to 7. The paper would still benefit from doing the experiment with importance weights per pixel, rather than a global one as done in the paper now. I encourage the authors to still do the experiment and see if there is any benefit. ==== Original Review ===== Summary of the paper: The paper presents a method based on importance sampling and reinforcement learning to learn discrete generators in the GAN framework. The GAN uses an f-divergence cost function for training the discriminator. The generator is trained to minimize the KL distance between the discrete generator q_{\theta}(x|z) and the importance-weighted estimate of the real distribution w(x|z)q_{\theta}(x|z), where w(x|z) is estimated in turn using the discriminator. The methodology is also extended to the continuous case. Experiments are conducted on quantized image generation and text generation. Quality: the paper is overall well written and supported with reasonable experiments. Clarity: The paper has a lot of typos that sometimes make the paper harder to follow: - page (2) Eq 3: max, min should be min, max if we want to keep working with f-divergences - Definition 2.1: \mathbb{Q}_{\theta} --> \mathbb{Q} - page 5: in the definition of \tilde{w}(x^{(m)}), the normalization is missing \tilde{w} - Equation (10): \nabla_{\theta}\log(x|z) --> \nabla_{\theta}\log(x^{(m)}|z) - In Algorithm 1, again missing indices in the update of theta --> \nabla_{\theta}\log(x^{(m|n)}|z^{n}) Originality: The main ingredients of the paper are well known and already used in the literature (REINFORCE for discrete GANs with the discriminator as a reward, e.g., GAN for image captioning, Dai et al.). The perspective of importance sampling coming from f-divergences for discrete GANs has some novelty, although the foundations of this work also relate to previous work: - Estimating ratios using the discriminator is well known, e.g., for learning implicit models (Mohamed et al.) - The relation of importance sampling to REINFORCE is also well known: "On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient," Tang and Abbeel. General Review: - When the generator is producing only *one* discrete distribution, the theory is presented in Section 2.3. When we move to experiments, for image generation for example, we need to have a generator that produces a distribution per pixel. It would be important, for 1) understanding the work and 2) the reproducibility of the work, to parallel Algorithm 1 and have it *in the paper* for this 'multi discrete distribution' generation case. If we have N pixels, p(x_1,...,x_N|z) = \prod_i g_{\theta}(x_i|z) (this should be mentioned in the paper if it is the case); it would be instructive to comment on the assumptions on independence/conditional dependence of this model, and also to state clearly how the generator is updated in this case and what the importance sampling weights are. - Would it make sense, in this N-pixel discrete generation case, to also have the discriminator produce N probabilities of real and fake, as in the PixelGAN of Isola et al., and then see what the importance sampling weights are in this case? This would parallel the instantaneous reward in RL.
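For context on the importance-sampling/REINFORCE connection mentioned above, the generic identity (my notation, not the paper's exact derivation) is: if the target is defined as \tilde{p}(x) \propto w(x)\, q_{\theta}(x), with the weights w and the reweighted target treated as fixed with respect to \theta, then

    \nabla_{\theta} \mathrm{KL}(\tilde{p} \| q_{\theta}) = -\mathbb{E}_{\tilde{p}}[\nabla_{\theta}\log q_{\theta}(x)] \approx -\sum_{m} \tilde{w}^{(m)} \nabla_{\theta}\log q_{\theta}(x^{(m)}), \qquad \tilde{w}^{(m)} = \frac{w(x^{(m)})}{\sum_{m'} w(x^{(m')})},

with samples x^{(m)} \sim q_{\theta}. This is exactly a REINFORCE-style estimator in which the normalized (self-)importance weights play the role of the reward, which is why the per-pixel experiment suggested above would parallel an instantaneous, per-dimension reward.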
iclr_2018_BJk59JZ0b
GUIDE ACTOR-CRITIC FOR CONTINUOUS CONTROL Actor-critic methods solve reinforcement learning problems by updating a parameterized policy known as an actor in a direction that increases an estimate of the expected return known as a critic. However, existing actor-critic methods only use values or gradients of the critic to update the policy parameter. In this paper, we propose a novel actor-critic method called the guide actor-critic (GAC). GAC first learns a guide actor that locally maximizes the critic and then it updates the policy parameter based on the guide actor by supervised learning. Our main theoretical contributions are twofold. First, we show that GAC updates the guide actor by performing second-order optimization in the action space where the curvature matrix is based on the Hessians of the critic. Second, we show that the deterministic policy gradient method is a special case of GAC when the Hessians are ignored. Through experiments, we show that our method is a promising reinforcement learning method for continuous control.
The paper presents a clever trick for updating the actor in an actor-critic setting: computing a guide actor that diverges from the actor to improve critic value, then updating the actor parameters towards the guide actor. This can be done since, when the parametrized actor is Gaussian and the critic value can be well-approximated as quadratic in the action, the guide actor can be optimized in closed form. The paper is mostly clear and well-presented, except for two issues: 1) there is virtually nothing novel presented in the first half of the paper (before Section 3.3); and 2) the actual learning step is only presented on page 6, making it hard to understand the motivation behind the guide actor until very late through the paper. The presented method itself seems to be an important contribution, even if the results are not overwhelmingly positive. It'd be interesting to see a more elaborate analysis of why it works well in some domains but not in others. More trials are also needed to alleviate any suspicion of lucky trials. There are some other issues with the presentation of the method, but these don't affect the merit of the method: 1. Returns are defined from an initial distribution that is stationary for the policy. While this makes sense in well-mixing domains, the experiment domains are not well-mixing for most policies during training, for example a fallen humanoid will not get up on its own, and must be reset. 2. The definition of beta(a|s) as a mixture of past actors is inconsistent with the sampling method, which seems to be a mixture of past trajectories. 3. In the first paragraph of Section 3.3: "[...] the quality of a guide actor mostly depends on the accuracy of Taylor's approximation." What else does it depend on? Then: "[...] the action a_0 should be in a local vicinity of a."; and "[...] the action a_0 should be similar to actions sampled from pi_theta(a|s)." What do you mean "should"? In order for the Taylor approximation to be good? 4. The line before (19) is confusing, since (19) is exact and not an approximation. For the approximation (20), it isn't clear if this is a good approximation. Why/when is the 2nd term in (19) small? 5. The parametrization nu of \hat{Q} is never specified in Section 3.6. This is important in order to evaluate the complexities involved in computing its Hessian.
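For readers without the paper at hand, the closed-form guide actor rests on a second-order Taylor model of the critic around a reference action a_0 (my notation, and a sketch only):

    \hat{Q}(s,a) \approx \hat{Q}(s,a_0) + \nabla_a \hat{Q}(s,a_0)^{\top}(a - a_0) + \frac{1}{2}(a - a_0)^{\top} \nabla_a^2 \hat{Q}(s,a_0) (a - a_0).

Maximizing this quadratic model under a KL trust region with a Gaussian policy yields a Gaussian guide actor in closed form, which is why the quality of the guide hinges on how good this local approximation is; this is the issue raised in comment 3 above about where a_0 "should" lie.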
iclr_2018_ryDNZZZAW
While domain adaptation has been actively researched in recent years, most theoretical results and algorithms focus on the single-source-single-target adaptation setting. Naive application of such algorithms on multiple source domain adaptation problem may lead to suboptimal solutions. We propose a new generalization bound for domain adaptation when there are multiple source domains with labeled instances and one target domain with unlabeled instances. Compared with existing bounds, the new bound does not require expert knowledge about the target distribution, nor the optimal combination rule for multisource domains. Interestingly, our theory also leads to an efficient learning strategy using adversarial neural networks: we show how to interpret it as learning feature representations that are invariant to the multiple domain shifts while still being discriminative for the learning task. To this end, we propose two models, both of which we call multisource domain adversarial networks (MDANs): the first model optimizes directly our bound, while the second model is a smoothed approximation of the first one, leading to a more data-efficient and task-adaptive model. The optimization tasks of both models are minimax saddle point problems that can be optimized by adversarial training. To demonstrate the effectiveness of MDANs, we conduct extensive experiments showing superior adaptation performance on three real-world datasets: sentiment analysis, digit classification, and vehicle counting.
Quality: The paper appears to be correct. Clarity: The paper is very clear. Originality: The theoretical contribution extends the seminal work of Ben-David et al.; the idea of using adversarial learning is not new; the novelty is medium. Significance: The theoretical analysis is interesting but, for me, limited; the idea of the algorithm is not new but is, as far as I know, the first explicitly presented for multi-source. Pros: - new theoretical analysis for the multi-source problem - the paper is clear - the smoothed version is interesting Cons: - learning bounds from a worst-case standpoint are probably not the best analysis for multi-source learning - the experimental evaluation is limited in the sense that similar algorithms in the literature are not compared - the extension is a bit direct from the seminal work of Ben-David et al. Summary: This paper presents a multiple-source domain adaptation approach based on adversarial learning. The setting considered contains multiple source domains with labeled instances and one target domain with unlabeled instances. The authors propose learning bounds in this context that extend the seminal work of Ben-David and co-authors (2007, 2010): they essentially consider the max source error and the max divergence between target and source, with empirical estimates. Then, they propose an adversarial algorithm to optimize this bound, with another version optimizing a smoothed variant, following the approach of Ganin et al. (2016). An experimental evaluation on 3 known tasks is presented. Comments: - I am not particularly convinced that the proposed theory best explains multi-source learning. In multi-source learning, you expect that one source may compensate for the others when needed for the classification of particular instances. The paper considers a kind of worst case by taking the max error over the sources and the max divergence between target and source, which is not really representative of what happens in real problems, in the sense that it does not take into account how the different sources interact. The experimental results actually confirm this aspect. Maybe the authors could propose a learning bound that corresponds to the smoothed version proposed in the paper and that works best. The hard version of the algorithm seems to comply with the bound here, while the algorithm that is really interesting is the smoothed version. - The experimental evaluation is a bit limited: there is no comparison with other (deep learning) methods tackling multi-source scenarios (or equivalent), while I think it is easy to find related approaches: - E. Tzeng, J. Hoffman, T. Darrell, K. Saenko. Simultaneous Deep Transfer Across Domains and Tasks. ICCV 2015. - I.-H. Jhuo, D. Liu, D.T. Lee, and S.-F. Chang. Robust Visual Domain Adaptation with Low-Rank Reconstruction. In IEEE CVPR, 2012. - Muhammad Ghifary, W. Bastiaan Kleijn, Mengjie Zhang, and David Balduzzi. Domain Generalization for Object Recognition with Multi-Task Autoencoders. In IEEE International Conference on Computer Vision (ICCV), 2015. - Chuang Gan, Tianbao Yang, and Boqing Gong. Learning Attributes Equals Multi-Source Domain Generalization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. - R. Gopalan, R. Li, and R. Chellappa. Unsupervised Adaptation Across Domain Shifts by Generating Intermediate Data Representations. PAMI, 36(11), 2014. Note also this CVPR'17 paper about domain adversarial adaptation: E. Tzeng, J. Hoffman, K. Saenko, T. Darrell. Adversarial Discriminative Domain Adaptation, CVPR 2017.
- Nothing is said about the complexity of applying the algorithm on the different datasets (convergence, tuning, ...). For the smoothed version, it could be interesting to see whether the weights w_i associated with each source are related to each source's (original) error, and to see how the sources are complementary. -- After rebuttal -- The new results and experimental evaluation have improved the paper. I increased my score.
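As a rough reminder of the kind of worst-case bound being criticized above, the statement is schematically of the form (my own shorthand; the constants and complexity terms in the paper differ)

    \epsilon_T(h) \le \max_{i=1,\dots,k} \Big\{ \hat{\epsilon}_{S_i}(h) + \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{S_i}, \mathcal{D}_T) \Big\} + \lambda + \text{complexity terms},

i.e. the target risk is controlled by the worst source risk plus the worst source-target divergence. This is exactly why such a bound cannot capture how the sources complement each other, and why a bound matching the smoothed (weighted) version would be more informative.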
iclr_2018_rJWechg0Z
MINIMAL-ENTROPY CORRELATION ALIGNMENT FOR UNSUPERVISED DEEP DOMAIN ADAPTATION In this work, we face the problem of unsupervised domain adaptation with a novel deep learning approach which leverages our finding that entropy minimization is induced by the optimal alignment of second order statistics between source and target domains. We formally demonstrate this hypothesis and, aiming at achieving an optimal alignment in practical cases, we adopt a more principled strategy which, differently from the current Euclidean approaches, deploys alignment along geodesics. Our pipeline can be implemented by adding to the standard classification loss (on the labeled source domain), a source-to-target regularizer that is weighted in an unsupervised and data-driven fashion. We provide extensive experiments to assess the superiority of our framework on standard domain and modality adaptation benchmarks.
This paper improves the correlation alignment approach to domain adaptation in two respects. One is to replace the Euclidean distance by the geodesic Log-Euclidean distance between two covariance matrices. The other is to automatically select the balancing cost by the entropy on the target domain. Experiments are conducted from SVHN to MNIST and from SYN MNIST to SVHN. Additional experiments on cross-modality recognition are reported from RGB to depth. Strengths: + It is a sensible idea to improve the Euclidean distance by the geodesic Log-Euclidean distance to better explore the manifold structure of the PSD matrices. + It is also interesting to choose the balancing cost using the entropy on the target. However, this point is worth further exploring (please see below for more detailed comments). + The experiments show that the geodesic correlation alignment outperforms the original alignment method. Weaknesses: - It is certainly interesting to have a scheme to automatically choose the hyper-parameters in unsupervised domain adaptation, and the entropy over the target seems like a reasonable choice. This point is worth further exploring for the following reasons. 1. The theoretical result is not convincing, given that it relies on many unrealistic assumptions, such as null performance degradation under perfect correlation alignment, Dirac delta functions as the predictions over the target, etc. 2. The theorem actually does not favor the Euclidean correlation alignment over the geodesic alignment. It does not explain why, in Figure 2, the entropy is able to find the best balancing cost \lambda for the geodesic alignment but not for the Euclidean alignment. 3. The entropy criterion seems interesting to explore in general. Could it be used to find fairly good hyper-parameters for the other methods? Could it be used to determine the other hyper-parameters (e.g., learning rate, early stopping) for the geodesic alignment? 4. If one leaves a subset of the target domain out and uses its labels for validation, how would the selected balancing cost \lambda differ from the one chosen by the entropy criterion? - The cross-modality setup (from RGB to depth) is often not considered domain adaptation. It would be better to replace it with another benchmark dataset. The Office-31 dataset is still a good benchmark to compare different methods and for the study in Section 5.1, though it is not necessary to reach state-of-the-art results on this dataset because, as the authors noted, it is almost saturated. Question: - I am not sure how the gradients were computed after the eigendecomposition in equation (8). I like the idea of automatically choosing free parameters using the entropy over the target domain. However, instead of justifying this point by a theorem that relies on many assumptions, it is better to further test it using experiments (e.g., on Office-31 and for other adaptation methods). The geodesic correlation alignment is a reasonable improvement over the Euclidean alignment.
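To fix notation for the discussion above (my own shorthand, not lifted from the paper): with source and target feature covariances C_s and C_t computed at the chosen layer, the training objective is of the form

    \min_{\theta} \; \mathcal{L}_{\mathrm{cls}}(\theta) + \lambda \, \| \log C_s - \log C_t \|_F^2,

where \log denotes the matrix logarithm, so the second term is the Log-Euclidean (geodesic) distance on the SPD manifold rather than the plain Euclidean \| C_s - C_t \|_F^2 of standard correlation alignment, and \lambda is chosen as the value minimizing the entropy of the target predictions, H(\hat{y}_t) = -\frac{1}{n_t}\sum_j \sum_c \hat{y}_{jc} \log \hat{y}_{jc}. The leave-out validation experiment suggested in point 4 would compare the \lambda selected this way against the one selected with held-out target labels.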
iclr_2018_r1TA9ZbA-
Planning problems are among the most important and well-studied problems in artificial intelligence. They are most typically solved by tree search algorithms that simulate ahead into the future, evaluate future states, and back-up those evaluations to the root of a search tree. Among these algorithms, Monte-Carlo tree search (MCTS) is one of the most general, powerful and widely used. A typical implementation of MCTS uses cleverly designed rules, optimised to the particular characteristics of the domain. These rules control where the simulation traverses, what to evaluate in the states that are reached, and how to back-up those evaluations. In this paper we instead learn where, what and how to search. Our architecture, which we call an MCTSnet, incorporates simulation-based search inside a neural network, by expanding, evaluating and backing-up a vector embedding. The parameters of the network are trained end-to-end using gradient-based optimisation. When applied to small searches in the well-known planning problem Sokoban, the learned search algorithm significantly outperformed MCTS baselines.
This paper designs a deep learning architecture that mimics the structure of the well-known MCTS algorithm. From gold standard state-action pairs, it learns each component of this architecture in order to predict similar actions. I enjoyed reading this paper. The presentation is very clear, the design of the architecture is beautiful, and I was especially impressed with the related work discussion that went back to identify other game search and RL work that attempts to learn parts of the search algorithm. Nice job overall. The main flaw of the paper is in its experiments. If I understand them correctly, the comparison is between a neural network that has been learned on 250,000 trajectories of 60 steps each where each step is decided by a ground truth close-to-optimal algorithm, say MCTS with 1000 rollouts (is this mentioned in the paper). That makes for a staggering 15 billion rollouts of prior data that goes into the MCTSNet model. This is compared to 25 rollouts of MCTS that make the decision for the baseline. I suspect that generating the training data and learning the model takes an enormous amount of CPU time, while 25 MCTS rollouts can probably be done in a second or two. I'm sure I'm misinterpreting some detail here, but how is this a fair comparison? Would it be fair to have a baseline that learns the MCTS coefficient on the training data? Or one that uses the value function that was learned with classic search? I find it difficult to understand the details of the experimental setup, and maybe some of these experiments are reported. Please clarify. Also: colors are not distinguishable in grey print. How would the technique scale with more MCTS iterations? I suspect that the O(N^2) complexity is very prohibitive and will not allow this to scale up? I'm a bit worried about the idea of learning to trade off exploration and exploitation. In the end you'll just allow for the minimal amount of exploration to solve the games you've already seen. This seems risky, and I suspect that UCB and more statistically principled approaches would be more robust in this regard? Are these Sokoban puzzles easy for classical AI techniques? I know that many of them can be solved by A* search with a decent heuristic. It would be fair to discuss this. The last sentence of conclusions is too far reaching; there is really no evidence for that claim.
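For the record, the arithmetic behind the "15 billion rollouts" figure above, under the stated assumption of roughly 1000 MCTS rollouts per ground-truth decision:

    250{,}000 \text{ trajectories} \times 60 \text{ steps} \times 1{,}000 \text{ rollouts} = 1.5 \times 10^{10},

which is what makes the comparison against a 25-rollout MCTS baseline hard to interpret without also accounting for training cost.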
iclr_2018_HJWLfGWRb
Published as a conference paper at ICLR 2018 MATRIX CAPSULES WITH EM ROUTING A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45% compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attacks than our baseline convolutional neural network.
The paper proposes a novel architecture for capsule networks. Each capsule has a logistic unit representing the presence of an entity plus a 4x4 pose matrix representing the entity/viewer relationship. This new representation comes with a novel iterative routing scheme, based on the EM algorithm. Evaluated on the SmallNORB dataset, the approach proves to be more accurate than previous work (beating also the recently proposed "routing-by-agreement" approach for capsule networks by Sabour et al.). It also generalizes well to new, unseen viewpoints and proves to be more robust to adversarial examples than traditional CNNs. Capsule networks have recently gained attention from the community. The paper addresses important shortcomings exhibited by previous work (Sabour et al.), introducing a series of valuable technical novelties. There are, however, some weaknesses. The proposed routing scheme is quite complex (involving an EM-based step at each layer); it's not fully clear how efficiently it can be performed / how scalable it is. Evaluation is performed on a small dataset for shape recognition; as noted in Sec. 6, the approach will need to be tested on larger, more challenging datasets. Clarity could be improved in some parts of the paper (e.g.: Sec. 1.1 may not be fully clear if the reader is not already familiar with (Sabour et al., 2017); the authors could give a better intuition about what is kept and what is discarded, and why, from that approach. Sec. 2: the sentence "this is incorrect because the transformation matrix..." could be elaborated more. V_{ih} in eq. 1 is defined only a few lines below; perhaps, defining the variables before the equations could improve clarity. Sec. 2.1 could be accompanied by mathematical formulation). All in all, the paper brings an original contribution and will encourage further research / discussion on an important research question (how to effectively leverage knowledge about the part-whole relationships). Other notes: - There are a few typos (e.g. Sec. 1.2 "(Jaderberg et al. (2015)", Sec. 2 "the the transformation", Sec. 4 "cetral crop" etc.). - The authors could discuss in more detail why the approach does not show significant improvement on NORB with respect to the state of the art. - The authors could provide more insights about why capsule gradients are smaller than CNN ones. - It would be interesting to discuss how the network could potentially be adapted, in the future, to: 1. be more efficient 2. take into account other changes produced by viewpoint changes (pixel intensities, as noted in Sec. 1). - In Sec, 4, the authors could provide more details about the network training. - In Procedure 1, for indexing tensors and matrices it might be better to use a comma to separate dimensions (e.g. V_{:,c,:} instead of V_{:c:}).
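As a compact summary of the mechanism under discussion (paraphrasing the abstract in my own notation, so details such as normalizations are omitted): capsule i, with pose matrix M_i and activation a_i, casts a vote for parent capsule j as

    V_{ij} = M_i W_{ij},

where W_{ij} is a trainable, viewpoint-invariant transformation matrix. The routing step then fits, per parent j, a Gaussian to the votes {V_{ij}} with responsibilities (assignment coefficients) R_{ij}, alternating E- and M-steps for a few iterations, and a parent becomes active when it receives a tight cluster of agreeing votes. A short mathematical statement of exactly this loop is what the review asks for in Sec. 2.1.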
iclr_2018_HkepKG-Rb
This paper develops a novel methodology for using symbolic knowledge in deep learning. From first principles, we derive a semantic loss function that bridges between neural output vectors and logical constraints. This loss function captures how close the neural network is to satisfying the constraints on its output. An experimental evaluation shows that our semantic loss function effectively guides the learner to achieve (near-)state-of-the-art results on semi-supervised multi-class classification. Moreover, it significantly increases the ability of the neural network to predict structured objects, such as rankings and paths. These discrete concepts are tremendously difficult to learn, and benefit from a tight integration of deep learning and symbolic reasoning methods.
SUMMARY The paper proposes a new form of regularization utilizing logical constraints. The semantic loss function is built on the exploitation of symbolic knowledge extracted from data and connecting the logical constraints to the outputs of a neural network. The use of Boolean logic as a constraint provides a secondary regularization term to prevent over-fitting and improve predictions. The benefit of using the function is found primarily with semi-supervised tasks where data is partially unlabelled. The logical constraints provided by the semantic loss function allow for improved classification of unlabeled data. Output constraints for the semantic loss function are represented with one-hot encoding, preference rankings, and paths in a grid. These three different output constraints are designed to explore different learning purposes. The semantic loss function was tested on both semi-supervised classification tasks and structure learning. The paper primarily focuses on the one-hot encoding constraint as it is viewed as a capable technique for multi-class classification. POSITIVES In terms of structure, the paper was written very well. Sufficient background information was conveyed, which helped in understanding the proposed semantic loss function. A thorough breakdown is also carried out on the semantic loss function itself by explaining its axioms, which help explain how the outputs of a neural network match a given constraint. As a scientific contribution, I would say the results from the experiments were able to justify the proposal of the semantic loss function. The function was able to perform better than most other implementations for semi-supervised learning tasks, and the function was tested on multiple datasets. The paper also tested the function against other notable machine learning approaches, and in most cases the function performed better, but this was usually confined to semi-supervised learning tasks. During supervised learning tasks the function did not perform markedly better than older implementations. Given that, the semantic loss function did prove to be a seemingly simple approach to improving semi-supervised classification tasks. • The background section covers the knowledge required to understand the semantic loss function. The paper also clearly explains the meaning of some of the notation used in the definitions. • Experiments clearly show the benefit of using the semantic loss function. Multiple experiment types were carried out as well, which showed evidence of the broad applicability of the function. • In-depth description of the definitions, axioms, and propositions of the semantic loss function. • A large number of experiments exploring the usefulness of the function for multiple learning tasks, and on multiple datasets. NEGATIVES It was not clear to me whether the logical constraints are to be instantiated before learning, i.e., defined by hand prior to being implemented in the neural network. This is a pretty important question and drastically changes the nature of the learning process. Beyond that complaint, the paper did not suffer from any critical issues. There were some issues with spelling, and the section titled 'Algorithm' fails to clearly define a complete algorithm using the semantic loss function. It would have helped to have two algorithms: one defining the pipeline for the semantic loss function, and another showing the implementation of the function in a machine learning framework.
The semantic loss function found success only in cases where the learning task was semi-supervised, and not in cases of fully supervised learning. This is not a true negative, but an observation on the effectiveness of the function. - A few typos in the paper. - The axioms for the semantic loss function were defined, but there seemed to be a lack of a clear algorithm showing the pipeline implementation of the semantic loss function. - While the semantic loss function does improve learning performance in most cases, the improvements are confined to semi-supervised learning tasks, and on the MNIST dataset another methodology, Ladder Nets, was able to outperform the semantic loss function. RELATED WORK The paper proposed that logic constraints applied to the output of neural networks have the capacity to improve semi-supervised classification tasks as well as finding the shortest path. In the introduction, the paper lists Zhiting Hu et al.'s paper titled Harnessing Deep Neural Networks with Logic Rules as an example of a similar approach. Hu et al.'s paper utilized logic constraints in conjunction with neural nets as well. A key difference is that Hu et al. applied their network architecture to supervised classification tasks. Since the performance of the current paper's semantic loss function on supervised tasks did not improve upon other methods, it may be beneficial to use the work of Hu et al. as a means of direct comparison for supervised learning tasks, and possibly to incorporate their methods with the semantic loss function in order to improve upon supervised learning tasks. CONCLUSION Given the success of the semantic loss function with semi-supervised tasks, I would accept this paper. The semantic loss was able to improve learning with respect to the tested datasets, and the paper clearly described the properties of the function. The paper would benefit from including a more concrete algorithm describing the flow of data through a given neural net to the semantic loss function, as well as the process by which the semantic loss function constrains the data based on propositional logic, but in general this complaint is more nit-picking. The semantic loss function and the experiments which tested the function showed clearly that there is a benefit to this research and that there are areas for it to improve.
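For reference, my understanding of the semantic loss under discussion, for a propositional constraint \alpha over Boolean variables X_i and network output probabilities p_i (this is a paraphrase, so the exact normalization should be checked against the paper):

    L^s(\alpha, p) \;\propto\; -\log \sum_{\mathbf{x} \models \alpha} \;\prod_{i:\, \mathbf{x} \models X_i} p_i \prod_{i:\, \mathbf{x} \models \neg X_i} (1 - p_i),

i.e. the negative log-probability that a random assignment sampled according to the network's outputs satisfies the constraint. Because the constraint enters only through this differentiable quantity, the loss can simply be added (with a weight) to the usual cross-entropy term, which is the "pipeline" algorithm the review would like to see written out explicitly.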
iclr_2018_SkOb1Fl0Z
Workshop track -ICLR 2018 A FLEXIBLE APPROACH TO AUTOMATED RNN ARCHITECTURE GENERATION The process of designing neural architectures requires expert knowledge and extensive trial and error. While automated architecture search may simplify these requirements, the recurrent neural network (RNN) architectures generated by existing methods are limited in both flexibility and components. We propose a domain-specific language (DSL) for use in automated architecture search which can produce novel RNNs of arbitrary depth and width. The DSL is flexible enough to define standard architectures such as the Gated Recurrent Unit and Long Short Term Memory and allows the introduction of non-standard RNN components such as trigonometric curves and layer normalization. Using two different candidate generation techniques, random search with a ranking function and reinforcement learning, we explore the novel architectures produced by the RNN DSL for language modeling and machine translation domains. The resulting architectures do not follow human intuition yet perform well on their targeted tasks, suggesting the space of usable RNN architectures is far larger than previously assumed.
This paper investigates a meta-learning strategy for automated architecture search in the context of RNNs. To constrain the architecture search space, the authors propose a DSL that specifies the RNN recurrent operations. This DSL makes it possible to explore RNN architectures using either random search or a reinforcement-learning strategy. Candidate architectures are ranked using a TreeLSTM that tries to predict the architectures' performance. The top-k architectures are then evaluated by fully training them on a given task. The authors evaluate their approach on PTB/Wikitext-2 language modeling and Multi30k/IWSLT'16 machine translation. In both experiments, the authors show that their approach obtains competitive results and can sometimes outperform RNN cells such as GRU/LSTM. In the PTB experiment, however, their architecture underperforms other LSTM variants in the literature. - Quality/Clarity The paper is overall well written and pleasant to read. A few details could be clarified. In particular, how did you initialize the weights and biases for both the LSTM/GRU baselines and the found architectures? Are there other works leveraging RNNs that report results on the Multi30k/IWSLT datasets? You state in paragraph 3.2 that human experts can inject the previous best known architecture when training the ranking networks. Did you use this in the experiments? If yes, what was the impact of this online learning strategy on the final results? - Originality The idea of using a DSL + ranking for architecture search seems novel. - Significance Automated architecture search is a promising way to design new networks. However, it is not clear why the proposed approach is not able to outperform other LSTM-based architectures on the PTB task. Could the problem arise from the DSL constraining the search space too much? It would be nice to have other tasks that are commonly used as benchmarks for RNNs to see where this approach stands. In addition, the authors propose a DSL, random and RL-based generators, and a ranking function. It would be nice to disentangle the contributions of the different components. In particular, did the authors compare random search vs. the RL-based generator, or the performance of the RL-based generator when the ranking network is not used? Although the authors do show that they outperform NAScell in one setting, it would be nice to have an extended evaluation (using character-level PTB, for instance).
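To give a feel for what such a DSL looks like, here is a tiny, runnable expression-tree sketch; the operator names (MM, Add, Mult, Tanh, Sigmoid) are my own shorthand and only approximate the flavor of the DSL described in the paper, so the exact grammar should be checked there.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Node:
        op: str
        children: Tuple["Node", ...] = ()
        def __str__(self):
            return self.op if not self.children else f"{self.op}({', '.join(map(str, self.children))})"

    def MM(x):      return Node("MM", (x,))        # learned linear map
    def Add(a, b):  return Node("Add", (a, b))
    def Mult(a, b): return Node("Mult", (a, b))
    def Tanh(x):    return Node("Tanh", (x,))
    def Sigmoid(x): return Node("Sigmoid", (x,))

    x_t, h_prev = Node("x_t"), Node("h_{t-1}")

    # A vanilla RNN cell as a tree: h_t = tanh(W x_t + U h_{t-1})
    vanilla = Tanh(Add(MM(x_t), MM(h_prev)))
    # A gate in the same spirit, e.g. as used inside a GRU-like update.
    gate = Sigmoid(Add(MM(x_t), MM(h_prev)))
    print(vanilla)   # Tanh(Add(MM(x_t), MM(h_{t-1})))
    print(gate)

A candidate generator (random or RL-based) would sample such trees, and the TreeLSTM ranker would score them before the top-k are trained in full.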
iclr_2018_HJWGdbbCW
We propose a general deep reinforcement learning method and apply it to robot manipulation tasks. Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, mainly previously unsolved. We train visuomotor policies end-to-end to learn a direct mapping from RGB camera inputs to joint velocities. Our experiments indicate that our reinforcement and imitation approach can solve contact-rich robot manipulation tasks that neither the state-of-the-art reinforcement nor imitation learning method can solve alone. We also illustrate that these policies achieved zero-shot sim2real transfer by training with large visual and dynamics variations.
Paper summary: The authors propose a number of tricks to enable training policies for pick and place style tasks using a combination of GAIL-based imitation learning and hand-specified rewards, as well as use of unobserved state information during training and hand-designed curricula. The results demonstrate manipulation policies for stacking blocks and moving objects, as well as preliminary results for zero-shot transfer from simulation to a real robot for a picking task and an attempt at a stacking task. Review summary: The paper proposes a limited but interesting contribution that will be especially of interest to practitioners, but the scope of the contribution is somewhat incremental in light of recent work, and the results, while interesting, could certainly be better. In the balance, I think the paper should be accepted, because it will be of value to practitioners, and I appreciate the detail and real-world experiments. However, some of the claims should be revised to better reflect what the paper actually accomplishes: the contribution is a bit limited in places, but that's *OK* -- the authors should just be up-front about it. Pros: - Interesting tasks that combine imitation and reinforcement in a logical (but somewhat heuristic) way - Good simulated results on a variety of pick-and-place style problems - Some initial attempt at real-world transfer that seems promising, but limited - Related work is very detailed and I think many will find it to be a very valuable overview Cons: - Some of the claims (detailed below) are a bit excessive in my opinion - The paper would be better if it was scoped more narrowly - Contribution is a bit incremental and somewhat heuristic - The experimental results are difficult to interpret in simulation - The real-world experimental results are not great - There are a couple of missing citations (but overall related work is great) Detailed discussion of potential issues and constructive feedback: > "Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, mainly previously unsolved." >> This claim is a bit peculiar. Picking up and placing objects is certainly not "unsolved," there are many examples. If you want image-based pick and place with demonstrations for example, see Chebotar '17 (not cited). If you want stacking blocks, see Nair '17. While it's true that there is a particular combination of factors that doesn't exactly appear in prior work, the statement the authors make is way too strong. Chebotar '17 shows picking and placing a real-world objective with a much higher success rate than reported here, without simulation. Nair '17 shows a much harder stacking task, but without images -- would that method have worked just as well with image-based distillation? Very likely. Rajeswaran '17 shows tasks that arguably are much harder. Maybe a more honest statement is that this paper proposes some tasks that prior methods don't show, and some prior methods show tasks that the proposed method can't solve. But as-is, this statement misrepresents prior work. > Previous RL-based robot manipulation policies (Nair et al., 2017; Popov et al., 2017) largely rely on low-level states as input, or use severely limited action spaces that ignore the arm and instead learn Cartesian control of a simple gripper. 
This limits the ability of these methods to represent and solve more complex tasks (e.g., manipulating arbitrary 3D objects) and to deploy in real environments where the privileged state information is unavailable. >> This is a funny statement. Some use images, some don't. There is a ton of prior work on RL-based robot manipulation that does use images. The current paper does use object state information during training, which some prior works manage to avoid. The comments about Cartesian control are a bit peculiar... the proposed method controls fingers, but the hand is simple. Some prior works have simpler grippers (e.g., Nair) and some have much more complex hands (e.g., Rajeswaran). So this one falls somewhere in the middle. That's fine, but again, this statement overclaims a bit. > To sidestep the constraints of training on real hardware we embrace the sim2real paradigm which has recently shown promising results (James et al., 2017; Rusu et al., 2016a). >> Probably should cite Sadeghi et al. and Tobin et al. in regard to randomization, both of which precede James '17. > we can, during training, exploit privileged information about the true system state >> This was done also in Pinto et al. and many of the cited GPS papers > our policies solve the tasks that the state-of-the-art reinforcement and imitation learning cannot solve >> I don't think this statement is justified without much wider comparisons -- the authors don't attempt any comparisons to prior work, such as Chebotar '17 (which arguably is closest in terms of demonstrated behaviors), Nair '17 (which is also close but doesn't use images, though it likely could). > An alternative strategy for dealing with the data demand is to train in simulation and transfer >> Aside from previously mentioned citations, should probably cite Devin "Towards Adapting Deep Visuomotor Representations" > Sec 3.2.1 >> This method seems a bit heuristic. It's logical, but can you say anything about what this will converge to? GAIL will try to match the demonstration distribution, and RL will try to maximize expected reward. What will this method do? > Experiments >> Would it be possible to indicate some measure of success rate for the simulated experiments? As-is, it's hard to tell how well either the proposed method or the baselines actually work. > Transfer >> My reading of the transfer experiments is that they are basically unsuccessful. Picking up a rectangular object with 80% success rate is not very good. The stacking success rate is too low to be useful. I do appreciate the authors trying out their method on a real robotic platform, but perhaps the more honest assessment of the outcome of these experiments is that the approach didn't work very well, and more research is needed. Again, it's *OK* to say this! Part of the purpose of publishing a paper is to stimulate future research directions. I think the transfer experiments should definitely be kept, but the authors should discuss the limitations to help future work address them, and present the transfer appropriately in the intro. > Diverse Visuomotor Skills >> I think this is a peculiar thing to put in the title. Is the implication that prior work is not diverse? Arguably several prior papers show substantially more diverse skills. It seems that all the skills here are essentially pick and place skills, which is fine (these are interesting skills), but the title seems like a peculiar jab at prior work not being "diverse" enough, which is simply misleading.
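Regarding the question in the Sec. 3.2.1 comment about what the combined objective converges to: one common way to combine the two signals (a sketch of a standard recipe, not necessarily the paper's exact scheme) is a mixed reward

    r(s,a) = \lambda \, r_{\text{task}}(s,a) - (1-\lambda) \log\big(1 - D_{\psi}(s,a)\big),

where D_{\psi} is the GAIL discriminator and \lambda \in [0,1] is a mixing weight. The agent then maximizes a convex combination of the hand-specified return and the GAIL surrogate, so the fixed point trades off reward maximization against matching the demonstration occupancy measure; without further analysis, though, the question of what this converges to is exactly as open as the review says.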
iclr_2018_BJj6qGbRW
FEW-SHOT LEARNING WITH GRAPH NEURAL NETWORKS We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models. Besides providing improved numerical performance, our framework is easily extended to variants of few-shot learning, such as semi-supervised or active learning, demonstrating the ability of graph-based models to operate well on 'relational' tasks.
This paper introduces a graph neural net approach to few-shot learning. Input examples form the nodes of the graph and edge weights are computed as a nonlinear function of the absolute difference between node features. In addition to standard supervised few-shot classification, both semi-supervised and active learning task variants are introduced. The proposed approach captures several popular few-shot learning approaches as special cases. Experiments are conducted on both Omniglot and miniImagenet datasets. Strengths - Use of graph neural nets for few-shot learning is novel. - Introduces novel semi-supervised and active learning variants of few-shot classification. Weaknesses - Improvement in accuracy is small relative to previous work. - Writing seems to be rushed. The originality of applying graph neural networks to the problem of few-shot learning and proposing semi-supervised and active learning variants of the task are the primary strengths of this paper. Graph neural nets seem to be a more natural way of representing sets of items, as opposed to previous approaches that rely on a random ordering of the labeled set, such as the FCE variant of Matching Networks or TCML. Others will likely leverage graph neural net ideas to further tackle few-shot learning problems in the future, and this paper represents a first step in that direction. Regarding the graph, I am wondering if the authors can comment on what scenarios is the graph structure expected to help? In the case of 1-shot, the graph can only propagate information about other classes, which seems to not be very useful. Though novel, the motivation behind the semi-supervised and active learning setup could use some elaboration. By including unlabeled examples in an episode, it is already known that they belong to one of the K classes. How realistic is this set-up and in what application is it expected that this will show up? For active learning, the proposed method seems to be specific to the case of obtaining a single label. How can the proposed method be scaled to handle multiple requested labels? Overall the paper is well-structured and related work covers the relevant papers, but the details of the paper seem hastily written. In the problem set-up section, it is not immediately clear what the distinction between s, r, and t is. Stating more explicitly that s is for the labeled data, etc. would make this section easier to follow. In addition, I would suggest stating the reason why t=1 is a necessary assumption for the proposed model in the few-shot and semi-supervised cases. Regarding the Omniglot dataset, Vinyals et al. (2016) augmented the classes so that 4,800 classes were used for training and 1,692 for test. Was the same procedure done for the experiments in the paper? If yes, please update 6.1.1 to make this distinction more clear. If not, please update the experiments to be consistent with the baselines. In the experiments, does the \varphi MLP explicitly enforce symmetry and identity or is it learned? Regarding the Omniglot baselines, it appears that Koch et al. (2015), Edwards & Storkey (2016), and Finn et al. (2017) use non-standard class splits relative to the other methods. This should probably be noted. The results for Prototypical Networks appear to be incorrect in the Omniglot and Mini-Imagenet tables. According to Snell et al. (2017) they should be 49.4% and 68.2% for miniImagenet. Moreover, Snell et al. (2017) only used 64 classes for training instead of 80 as utilized in the proposed approach. 
Given this, can the authors comment on the performance difference in the 5-shot case, even though Prototypical Networks is a special case of GNNs?

For the semi-supervised and active-learning results, please include error bars for miniImagenet. Also, it would be interesting to see 20-way results for Omniglot, as the gap between the proposed method and the baseline would potentially be wider.

Other comments:
- In Section 4.2, Gc(.) is defined in Equation 2 but not mentioned in the text.
- In Section 4.3, adding an equation to clarify the relationship with Matching Networks would be helpful.
- I believe there is a typo in Section 4.3 in that softmax(\varphi) should be softmax(-\varphi), so that more similar pairs are more heavily weighted.
- The equation in 5.1 appears to be missing a minus sign.

Overall, the paper is novel and interesting, though the clarity could be improved and the experimental results better explained.

EDIT: I have read the authors' response. The writing is improved and my concerns have largely been addressed. I am therefore revising my rating of the paper to a 7.
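One addendum on the edge model in Section 4.2, to make the construction concrete. Below is a minimal sketch of the adjacency computation as I read it: a small MLP applied to |x_i - x_j| followed by a row-wise softmax over -\varphi (note the sign, per the typo flagged above). The layer sizes, episode shape, and single graph-conv step are illustrative assumptions of mine, not values from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeMLP(nn.Module):
    """phi(|x_i - x_j|): a learned, symmetric pairwise 'distance' (hidden size is illustrative)."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):                                    # x: (N, feat_dim) node features
        diff = (x.unsqueeze(1) - x.unsqueeze(0)).abs()       # (N, N, feat_dim), symmetric by construction
        phi = self.net(diff).squeeze(-1)                     # (N, N) learned distances
        # softmax over -phi, so that more similar pairs receive larger edge weights
        return F.softmax(-phi, dim=-1)                       # row-normalized adjacency

def gnn_layer(x, adj, linear):
    """One graph-conv step: aggregate neighbor features with the learned adjacency."""
    return F.relu(linear(adj @ x))

if __name__ == "__main__":
    N, d = 25, 48                                            # e.g. a 5-way 5-shot episode
    x = torch.randn(N, d)
    adj = EdgeMLP(d)(x)
    out = gnn_layer(x, adj, nn.Linear(d, 32))
    print(adj.shape, out.shape)                              # torch.Size([25, 25]) torch.Size([25, 32])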
iclr_2018_BkVsWbbAW
Despite advances in deep learning, artificial neural networks do not learn the same way as humans do. Today, neural networks can learn multiple tasks when trained on them jointly, but cannot maintain performance on learnt tasks when tasks are presented one at a time; this phenomenon, called catastrophic forgetting, is a fundamental challenge to overcome before neural networks can learn continually from incoming data. In this work, we derive inspiration from human memory to develop an architecture capable of learning continuously from sequentially incoming tasks, while averting catastrophic forgetting. Specifically, our model consists of a dual memory architecture to emulate the complementary learning systems (hippocampus and the neocortex) in the human brain and maintains a consolidated long-term memory via generative replay of past experiences. We (i) substantiate our claim that replay should be generative, (ii) show the benefits of generative replay and dual memory via experiments, and (iii) demonstrate improved performance retention even for small models with low capacity. Our architecture displays many important characteristics of the human memory and provides insights on the connection between sleep and learning in humans.
This paper introduces a neural network architecture for continual learning. The model is inspired by current knowledge about long-term memory consolidation mechanisms in humans. As a consequence, it uses:
- a temporary memory storage (inspired by the hippocampus) alongside a long-term memory;
- a notion of memory replay, implemented with generative models (VAEs), to simultaneously train the network on different tasks and avoid catastrophic forgetting of previously learnt tasks.

Overall, although the results are not very surprising, the approach is well justified and extensively tested. It provides some insights on the challenges and benefits of replay-based memory consolidation.

Comments:
1. The results are somewhat unsurprising: since we are able to learn generative models of each task, we can use them to train on all tasks at the same time and beat algorithms that do not use this replay approach.
2. It is unclear whether the approach provides a benefit for a particular application: as the task information has to be available, training separate task-specific architectures or using classical multitask learning approaches would not suffer from catastrophic forgetting and would (I assume) perform better.
3. So the main benefit of the approach seems to point towards what possibly happens in real brains. It is interesting to see how the authors address practical issues of training based on replay, and this reveals two differences with real brains: (1) what we know about episodic memory consolidation (the system modeled in this paper) is closer to unsupervised learning, so information such as the task ID and a dictionary for balancing samples would not be available; (2) the cortex (long-term memory) already learns during wakefulness, while in the proposed algorithm this procedure is restricted to replay-based learning during sleep.
4. Due to these differences, in my view, this work avoids directly addressing the most critical and difficult issue of catastrophic forgetting, which relates more to finding optimal plasticity rules for the network in an unsupervised setting.
5. The writing could have been more concise, and the authors could make an effort to stay closer to the recommended number of pages.
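To make the replay mechanism concrete, the consolidation loop amounts to roughly the sketch below. This is my own minimal rendering: I substitute a class-conditional Gaussian for the paper's VAE generator purely to keep it short, and the toy data, the linear "long-term memory", and the class schedule are illustrative choices rather than the paper's setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianGenerator:
    """Stand-in for the generative memory: per-class mean/std, sampled from during 'sleep'."""
    def __init__(self):
        self.stats = {}                                   # class id -> (mean, std)

    def update(self, x, y):
        for c in y.unique().tolist():
            xc = x[y == c]
            self.stats[c] = (xc.mean(0), xc.std(0) + 1e-3)

    def sample(self, n_per_class):
        xs, ys = [], []
        for c, (mu, sd) in self.stats.items():
            xs.append(mu + sd * torch.randn(n_per_class, mu.numel()))
            ys.append(torch.full((n_per_class,), c, dtype=torch.long))
        return torch.cat(xs), torch.cat(ys)

def consolidate(model, opt, x, y, epochs=50):
    """Train the long-term memory on new data mixed with replayed pseudo-data."""
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

if __name__ == "__main__":
    torch.manual_seed(0)
    d, n_classes = 20, 6
    long_term = nn.Linear(d, n_classes)                   # the consolidated ("neocortex") model
    opt = torch.optim.SGD(long_term.parameters(), lr=0.1)
    generator = GaussianGenerator()

    for classes in [(0, 1), (2, 3), (4, 5)]:              # tasks arrive sequentially
        y_new = torch.tensor(sum([[c] * 50 for c in classes], []))
        x_new = torch.randn(len(y_new), d) + y_new.float().unsqueeze(1)   # toy task data
        if generator.stats:                               # replay pseudo-data for old classes
            x_old, y_old = generator.sample(50)
            x_train, y_train = torch.cat([x_new, x_old]), torch.cat([y_new, y_old])
        else:
            x_train, y_train = x_new, y_new
        consolidate(long_term, opt, x_train, y_train)     # "sleep": train on new + replayed data
        generator.update(x_new, y_new)                    # extend the generative memory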
iclr_2018_r1gs9JgRZ
MIXED PRECISION TRAINING
Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyperparameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets.
The paper considers the problem of training neural networks in mixed precision (MP), using both 16-bit floating point (FP16) and 32-bit floating point (FP32). The paper proposes three techniques for training networks in mixed precision: first, keep a master copy of network parameters in FP32; second, use loss scaling to ensure that gradients are representable using the limited range of FP16; third, compute dot products and reductions with FP32 accumulation. Using these techniques allows the authors to match the results of traditional FP32 training on a wide variety of tasks without modifying any training hyperparameters. The authors show results on ImageNet classification (with AlexNet, VGG, GoogLeNet, Inception-v1, Inception-v3, and ResNet-50), VOC object detection (with Faster R-CNN and Multibox SSD), speech recognition in English and Mandarin (with CNN+GRU), English to French machine translation (with multilayer LSTMs), language modeling on the 1 Billion Words dataset (with a bigLSTM), and generative adversarial networks on CelebFaces (with DCGAN).

Pros:
- Three simple techniques to use for mixed-precision training.
- Matches performance of traditional FP32 training without modifying any hyperparameters.
- Very extensive experiments on a wide variety of tasks.

Cons:
- Experiments do not validate the necessity of FP32 accumulation.
- No comparison of training time speedup from mixed precision.

With new hardware (such as NVIDIA’s Volta architecture) providing large computational speedups for MP computation, I expect that MP training will become standard practice in deep learning in the near future. Naively porting FP32 training recipes can fail due to the reduced numeric range of FP16 arithmetic; however, by adopting the techniques of this paper, practitioners will be able to migrate their existing FP32 training pipelines to MP without modifying any hyperparameters. I expect these techniques to be hugely impactful as more people begin migrating to new MP hardware.

The experiments in this paper are very exhaustive, covering nearly every major application of deep learning. Matching state-of-the-art results on so many tasks increases my confidence that I will be able to apply these techniques to my own tasks and architectures to achieve stable MP training.

My first concern with the paper is that there are no experiments to demonstrate the necessity of FP32 accumulation. With an FP32 master copy of the weights and loss scaling, can all arithmetic be performed solely in FP16, or are there some tasks where training will still diverge?

My second concern is that there is no comparison of training-time speedup using MP. The main reason that MP is interesting is because new hardware promises to accelerate it. If people are willing to endure the extra engineering overhead of implementing the techniques from this paper, what kind of practical speedups can they expect to see from their workloads? NVIDIA’s marketing material claims that the Tensor Cores in the V100 offer an 8x speedup over its general-purpose CUDA cores (https://www.nvidia.com/en-us/data-center/tesla-v100/). Since in this paper some operations are performed in FP32 (weight updates, batch normalization) and other operations are bound by memory and not compute bandwidth, what kinds of speedups do you see in practice when moving from FP32 to MP on V100?

My other concerns are minor. Mandarin speech recognition results are reported on “our internal test set”.
Is there any previously published work on this dataset, or any publicly available test set for this task?

The notation around the Inception architectures should be clarified. According to [3] and [4], “Inception-v1” and “GoogLeNet” both refer to the architecture used in [1]. The architecture used in [2] is referred to as “BN-Inception” by [3] and “Inception-v2” by [4]. “Inception-v3” is the architecture from [3], which is not currently cited. To improve clarity in Table 1, I suggest renaming “GoogLeNet” to “Inception-v1”, changing “Inception-v1” to “Inception-v2”, and adding explicit citations to all rows of the table.

In Section 4.3 the authors note that “half-precision storage format may act as a regularizer during training”. Though the effect is most obvious from the speech recognition experiments in Section 4.3, MP also achieves slightly higher performance than baseline for all ImageNet models but Inception-v1 and for both object detection models; these results add support to the idea of FP16 as a regularizer.

Minor typos:
- Section 3.3, Paragraph 3: “either FP16 or FP16 math” -> “either FP16 or FP32 math”
- Section 4.1, Paragraph 4: “pre-ativation” -> “pre-activation”

Overall this is a strong paper, and I believe that it will be impactful as MP hardware becomes more widely used.

References
[1] Szegedy et al., “Going Deeper with Convolutions”, CVPR 2015
[2] Ioffe and Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”, ICML 2015
[3] Szegedy et al., “Rethinking the Inception Architecture for Computer Vision”, CVPR 2016
[4] Szegedy et al., “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning”, ICLR 2016 Workshop
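To make the recipe above concrete, here is a minimal sketch of one training step with an FP32 master copy and loss scaling, written against a PyTorch-style API. The toy model, data, and scale factor of 1024 are illustrative choices of mine, not the paper's; FP16 matmul kernels generally require a GPU, so a CUDA device is assumed. The third technique (FP32 accumulation inside FP16 matmuls) happens at the kernel/hardware level and is not shown here.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mixed_precision_step(model_fp16, master_fp32, optimizer, x, y, loss_scale=1024.0):
    # forward and backward pass in FP16, on the half-precision copy of the weights
    loss = F.mse_loss(model_fp16(x), y)
    (loss * loss_scale).backward()                 # loss scaling: keep small gradients representable

    # copy FP16 gradients into the FP32 master weights, undoing the scale, then update in FP32
    for p16, p32 in zip(model_fp16.parameters(), master_fp32):
        p32.grad = p16.grad.float() / loss_scale
        p16.grad = None
    optimizer.step()

    # round the updated FP32 master weights back to FP16 for the next iteration
    with torch.no_grad():
        for p16, p32 in zip(model_fp16.parameters(), master_fp32):
            p16.copy_(p32.half())
    return loss.item()

if __name__ == "__main__":
    device = "cuda"                                # assumed; CPU FP16 kernels may be missing
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1)).to(device).half()
    master = [p.detach().float().clone().requires_grad_(True) for p in model.parameters()]
    optimizer = torch.optim.SGD(master, lr=1e-2)
    x = torch.randn(16, 32, device=device).half()
    y = torch.randn(16, 1, device=device).half()
    for _ in range(5):
        print(mixed_precision_step(model, master, optimizer, x, y))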
iclr_2018_r1Kr3TyAb
In this work, we conduct a mathematical analysis of the effect of batch normalization (BN) on gradient backpropagation in residual network training, which is believed to play a critical role in addressing the gradient vanishing/explosion problem. By analyzing the mean and variance behavior of the input and the gradient in the forward and backward passes through the BN and residual branches, respectively, we show that they work together to confine the gradient variance to a certain range across residual blocks in backpropagation. As a result, the gradient vanishing/explosion problem is avoided. Furthermore, we use the same analysis to discuss the tradeoff between the depth and width of a residual network and demonstrate that shallower yet wider ResNets have stronger learning performance than deeper yet thinner ResNets.
This paper attempts to analyze the gradient flow through a batchNorm-ReLU ResNet and make suggestions for reducing gradient explosion.

Firstly, the paper has a fatal mathematical flaw. Consider equation (10). There, you show the variance of y_{L,i} taken over BOTH random weights AND the batch. Now consider equation (32). In that equation, Var(y_{L,i}) appears in the denominator, but this variance is taken over ONLY the batch and NOT the random weights. This Var(y_{L,i}) came from batch normalization, which divides its incoming activation values by their standard deviation. However, batch normalization only sees the variation in the activations given to it by a SPECIFIC set of weights. It does not know about the random variation of the weights, because that randomness is, in a sense, a superstructure imposed on the network that the network operations themselves cannot see. Therefore, your substitution, and hence equation (13), is incorrect. If you replace the variance in equation (32) by the correct value, you will get a very different result from which very different (and very interesting!) conclusions can be drawn.

Secondly, in section 4, your analysis depends on the specific type of ResNet you chose. Specifically, when transitioning from one "scale" to the next, you chose to insert not just a convolutional layer, but also a batch normalization and ReLU layer on the residual path. To achieve scale transitions, in general, people use a single convolutional layer with a 1x1 receptive field on the residual path. It is not a problem in itself to use a nonstandard architecture, but you do not discuss how your results would generalize to other ResNet architectures. Therefore your results have very limited relevance. (Note that again, of course, your results are corrupted by the variance problem I described earlier.)

Finally, with regards to section 5, let me be honest. (I hope that my area chair agrees with me that honesty is the best and kindest policy.) This section makes no sense. You do not understand the work by Veit et al. You do not know how to interpret gradient variances. While I won't be able to discuss "gradient variance" as a concept in full detail in this review, here's a quick summary.
(A) Veit et al. argued that a deep ResNet behaves as an ensemble of shallower networks as long as the gradient flowing through the residual paths is not larger than the gradient flowing through the skip paths.
(B) The exploding gradient problem refers to the size of the gradient growing exponentially; the vanishing gradient problem refers to the size of the gradient shrinking exponentially. This can make it difficult to train the network. See "DEEP INFORMATION PROPAGATION" by Schoenholz et al. from ICLR 2017 to learn more about how gradient explosion can arise.
(C) For a neural network to model a ground truth function exactly, the gradients of the network with respect to the input data have to match the gradients of the ground truth function with respect to the input.
From observations (A) through (C), we can derive three guidelines for gradient conditioning: (A) have the gradient flowing through residual paths be not too small relative to the gradient flowing through skip paths, (B) have the gradient not grow or shrink exponentially at too large a rate, and (C) have the data gradient match that of the ground truth function. However, you seem to be arguing that it is a problem if the gradient scale increases too little from one residual block to the next.
I am not aware of an established argument that this is indeed a problem. To be fair, one might make an argument as follows: "the point of deep nets is to be expressive; the expressiveness of a layer relates to the spectrum of the layer-Jacobian; a small increase in gradient scale implies the layer-Jacobian has many similar singular values; therefore a small increase in gradient scale implies low expressiveness of the layer, and therefore the layer is pathological". However, much more analysis, detail, and care would be required to make this argument successfully. In any case, I also don't think that was the argument you were trying to make. Note that, after skimming through the submissions to this conference, there seem to be interesting papers on the topic of gradients. Those papers, plus the references provided in them, should provide a good introduction to the topic of gradients in neural networks.

Other comments:
- Your notation is quite sloppy and may have led to errors. For example, in the beginning of section 4, you say that from one "scale" to the next, the filter number increases k times. But in appendix C you say "Since the receptive field for the last scale is k times smaller". So is k the change in the receptive field size, the filter number, or both? I would strongly recommend using dedicated variables to denote the width of the receptive field in each convolutional layer, the height of the receptive field in each convolutional layer, and the filter number, and then expressing all assumptions in equation form.
- Equation (20) deals with the change of gradient variance within a scale. Where is the equation that shows the change of gradient variance between scales?
- I would suggest making all derivations in appendices A through D much more detailed.
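To make the Var(y_{L,i}) point above concrete, here is a short Monte-Carlo check (the layer width, batch size, and He-style initialization are illustrative choices of mine). The per-batch variance that BN actually divides by fluctuates from one weight draw to the next, so it is a random variable conditional on the weights and cannot simply be replaced by the variance taken over random weights as well.

import numpy as np

rng = np.random.default_rng(0)
n_in, batch, n_draws = 256, 512, 2000

x = rng.standard_normal((batch, n_in))            # one fixed input batch

per_batch_var = np.empty(n_draws)                 # Var over the batch, weights held fixed
activations = np.empty((n_draws, batch))
for k in range(n_draws):
    w = rng.standard_normal(n_in) * np.sqrt(2.0 / n_in)   # He-style init, one output channel
    y = np.maximum(x @ w, 0.0)                            # conv + ReLU stand-in
    per_batch_var[k] = y.var()
    activations[k] = y

print(f"variance over weights AND batch (as in Eq. (10)):   {activations.var():.3f}")
print(f"per-batch variance: mean {per_batch_var.mean():.3f}, "
      f"std across weight draws {per_batch_var.std():.3f}   <- not a constant")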