Dataset columns: id (string, length 11 to 20), paper_text (string, length 29 to 163k), review (string, length 666 to 24.3k).
iclr_2018_SJA7xfb0b
Published as a conference paper at ICLR 2018 SOBOLEV GAN We propose a new Integral Probability Metric (IPM) between distributions: the Sobolev IPM. The Sobolev IPM compares the mean discrepancy of two distributions for functions (critic) restricted to a Sobolev ball defined with respect to a dominant measure µ. We show that the Sobolev IPM compares two distributions in high dimensions based on weighted conditional Cumulative Distribution Functions (CDF) of each coordinate on a leave-one-out basis. The dominant measure µ plays a crucial role as it defines the support on which conditional CDFs are compared. The Sobolev IPM can be seen as an extension of the one-dimensional von Mises-Cramér statistics to high-dimensional distributions. We show how the Sobolev IPM can be used to train Generative Adversarial Networks (GANs). We then exploit the intrinsic conditioning implied by the Sobolev IPM in text generation. Finally, we show that a variant of Sobolev GAN achieves competitive results in semi-supervised learning on CIFAR-10, thanks to the smoothness enforced on the critic by Sobolev GAN, which relates to Laplacian regularization.
The paper deals with the increasingly popular GAN approach to constructing generative models. Following the first formulation of GANs in 2014, it was soon realized that the training dynamics was highly unstable, leading to significant difficulties in achieving stable results. The paper by Arjovsky et al (2017) provided a framework based on the Wasserstein distance, a distance measure between probability distributions belonging to the class of so-called Integral Probability Metrics (IPMs). This approach solved the stability issues of GANs and demonstrated improved empirical results. Several other works were then developed to deal with these stability issues, specifically the Fisher IPM. Both these methods relied on discriminating between distributions P and Q based on computing a function f, belonging to an appropriate function class {\cal F}, that maximizes the deviation E_{x~P}f(x)-E_{x~Q}f(x). The main issue relates to the choice of the class {\cal F}. For the Wasserstein distance this was the class of L_1 Lipschitz functions, while for the Fisher distance it was the class of square integrable functions. The present paper introduces a new notion of distance, where {\cal F} is defined through the Sobolev norm, based on the L_2 norm of the gradient of f(x) with respect to a measure \mu(x), where the latter can be freely chosen under certain assumptions. The authors prove a theorem related to the properties of the Sobolev norm, and express it in terms of the component-wise conditional distributions. Moreover, they show that the optimal critic f is obtained by solving a PDE subject to zero boundary conditions. They then use their suggested metric in order to develop a GAN algorithm, and present experimental results demonstrating its utility. The Sobolev IPM has two nice features. First, it is based on the component-wise conditional distribution of the CDFs, and, second, its relation to the Laplacian regularizer from manifold learning. Its 1D version also relates to the well-known von Mises-Cramér statistics used in hypothesis testing. The paper belongs to a class of recent papers attempting to suggest improvements to the original GAN algorithm, which relied on the KL divergence. It is well conceived and articulated, and provides an interesting and potentially powerful new direction to improve GANs in practice. However, it is somewhat difficult to follow the paper, and I would urge the authors to improve and augment their presentation of the following issues. 1) One often poses regularization schemes based on optimality criteria. Is there any optimality principle under which the Sobolev IPM is a desired choice? 2) The authors argue that their approach is especially well suited for discrete sequential data. This issue was not clear to me, and it would be good if the authors could expand on this issue and provide a clearer explanation. 3) How would the Sobolev norm behave under a change of coordinates or a homeomorphism of the space? Would it make sense to require some invariance in this respect? 4) The Lagrangian in eq. (9) contains both a Lagrange constraint on the Sobolev norm and a penalty term. Why are both needed? Why do the updates of \lambda and p in Algorithm 1 use different schemes (SGD and ADAM, respectively)? 5) Table 2, p. 13 – it would be nice to see a comparison to the recently introduced gradient penalty approach, Gulrajani et al., Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017. 6) The integral defining F_p(x) on p.
3 has x as an argument on the LHS and also as the variable of integration on the RHS. Please correct this. Also specify that x=(x_1,\ldots,x_d).
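For readers less familiar with the IPM family discussed in this review, the following is a schematic summary; the Sobolev ball is written as in the paper's abstract (a constraint with respect to the dominant measure \mu), but the notation is mine and details such as boundary conditions are omitted.

\[
d_{\mathcal{F}}(\mathbb{P},\mathbb{Q}) \;=\; \sup_{f \in \mathcal{F}} \; \mathbb{E}_{x\sim \mathbb{P}}[f(x)] \;-\; \mathbb{E}_{x\sim \mathbb{Q}}[f(x)],
\qquad
\mathcal{F}_{\mathrm{Sobolev}} \;=\; \Bigl\{ f \;:\; \mathbb{E}_{x\sim\mu}\bigl[\lVert \nabla_x f(x) \rVert_2^2\bigr] \le 1 \Bigr\},
\]

with the Wasserstein and Fisher IPMs obtained by instead restricting \mathcal{F} to 1-Lipschitz functions or to a ball of functions with bounded second moment, respectively, as the review summarises.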
iclr_2018_rJLTTe-0W
Time series forecasting plays a crucial role in marketing, finance and many other quantitative fields. A large number of methodologies have been developed on this topic, including ARIMA, Holt-Winters, etc. However, their performance is easily undermined by the existence of change points and anomaly points, two structures commonly observed in real data but rarely considered in the aforementioned methods. In this paper, we propose a novel state space time series model with the capability to capture the structure of change points and anomaly points, as well as trend and seasonality. To infer all the hidden variables, we develop a Bayesian framework, which is able to obtain distributions and forecasting intervals for time series forecasting, with provable theoretical properties. For implementation, an iterative algorithm with Markov chain Monte Carlo (MCMC), Kalman filtering and Kalman smoothing is proposed. In both synthetic and real data applications, our methodology yields better performance in time series forecasting than existing methods, along with more accurate change point detection and anomaly detection.
Minor comments: - page 3. “The observation equation and transition equations together (i.e., Equation (1,2,3)) together define “ - one “together” should be removed - page 4. “From Figure 2, the joint distribution (i.e., the likelihood function ” - there should be additional bracket - page 7. “We can further integral out αn “ -> integrate out Major comments: The paper is well-written. The paper considers structural time-series model with seasonal component and stochastic trend, which allow for change-points and structural breaks. Such type of parametric models are widely considered in econometric literature, see e.g. [1] Jalles, João Tovar, Structural Time Series Models and the Kalman Filter: A Concise Review (June 19, 2009). FEUNL Working Paper No. 541. Available at SSRN: https://ssrn.com/abstract=1496864 or http://dx.doi.org/10.2139/ssrn.1496864 [2] Jacques J. F. Commandeur, Siem Jan Koopman, Marius Ooms. Statistical Software for State Space Methods // May 2011, Volume 41, Issue 1. [3] Scott, Steven L. and Varian, Hal R., Predicting the Present with Bayesian Structural Time Series (June 28, 2013). Available at SSRN: https://ssrn.com/abstract=2304426 or http://dx.doi.org/10.2139/ssrn.2304426 [4] Phillip G. Gould, Anne B. Koehler, J. Keith Ord, Ralph D. Snyder, Rob J. Hyndman, Farshid Vahid-Araghi, Forecasting time series with multiple seasonal patterns, In European Journal of Operational Research, Volume 191, Issue 1, 2008, Pages 207-222, ISSN 0377-2217, https://doi.org/10.1016/j.ejor.2007.08.024. [5] A.C. Harvey, S. Peters. Estimation Procedures for structural time series models // Journal of Forecasting, Vol. 9, 89-108, 1990 [6] A. Harvey, S.J. Koopman, J. Penzer. Messy Time Series: A Unified approach // Advances in Econometrics, Vol. 13, pp. 103-143. They also use Kalman filter and MCMC-based approaches to sample posterior to estimate hidden components. There are also non-parametric approaches to extraction of components from quasi-periodic time-series, see e.g. [7] Artemov A., Burnaev E. Detecting Performance Degradation of Software-Intensive Systems in the Presence of Trends and Long-Range Dependence // 16th International Conference on Data Mining Workshops (ICDMW), IEEE Conference Publications, pp. 29 - 36, 2016. DOI: 10.1109/ICDMW.2016.0013 [8] Alexey Artemov, Evgeny Burnaev and Andrey Lokot. Nonparametric Decomposition of Quasi-periodic Time Series for Change-point Detection // Proc. SPIE 9875, Eighth International Conference on Machine Vision, 987520 (December 8, 2015); 5 P. doi:10.1117/12.2228370;http://dx.doi.org/10.1117/12.2228370 In some of these papers models of structural brakes and change-points are also considered, see e.g. - page 118 in [6] - papers [7, 8] There were also Bayesian approaches for change-point detection, which are similar to the model of change-point, proposed in the considered paper, e.g. [9] Ryan Prescott Adams, David J.C. MacKay. Bayesian Online Changepoint Detection // https://arxiv.org/abs/0710.3742 [10] Ryan Turner, Yunus Saatçi, and Carl Edward Rasmussen. Adaptive sequential Bayesian change point detection. In Zaïd Harchaoui, editor, NIPS Workshop on Temporal Segmentation, Whistler, BC, Canada, December 2009. Thus, - the paper does not provide comparison with relevant econometric literature on parametric structural time-series models, - the paper does not provide comparison with relevant advanced change-point detection methods e.g. [7,8,9,10]. 
The comparison is provided only with very simple methods; - the proposed model itself looks very similar to what can be found across the econometric literature; - the datasets used for comparison are very scarce. There are datasets for anomaly detection in time-series data that should be used for an extensive comparison, e.g. the Numenta Anomaly Detection Benchmark. Therefore, although the paper is well-written, - it lacks novelty, and - its topic does not fit the topics of interest for ICLR particularly well. So, I do not recommend this paper to be published.
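To make the class of models under discussion concrete, here is a minimal sketch of a local-level structural time series model filtered with a Kalman recursion, the basic building block shared by the paper and the cited econometric references; the local-level form and the noise variances (q_level, r_obs) are illustrative choices of mine, not the paper's specification.

import numpy as np

def kalman_filter_local_level(y, q_level=0.1, r_obs=1.0):
    # Local-level model: level_t = level_{t-1} + w_t,  y_t = level_t + v_t,
    # with Var(w_t) = q_level and Var(v_t) = r_obs.
    level, var = y[0], 1.0                      # crude initialization
    filtered = np.empty(len(y))
    for t, obs in enumerate(y):
        var_pred = var + q_level                # predict step (level prediction unchanged)
        gain = var_pred / (var_pred + r_obs)    # Kalman gain
        level = level + gain * (obs - level)    # update with the new observation
        var = (1.0 - gain) * var_pred
        filtered[t] = level
    return filtered

# Toy usage: a noisy series with a level shift half-way through.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])
print(kalman_filter_local_level(y)[-5:])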
iclr_2018_ry_WPG-A-
Published as a conference paper at ICLR 2018 ON THE INFORMATION BOTTLENECK THEORY OF DEEP LEARNING The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior. In this work, we study the information bottleneck (IB) theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases consisting of an initial fitting phase and a subsequent compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase occurs due to the diffusion-like behavior of stochastic gradient descent. Here we show that none of these claims hold true in the general case. Through a combination of analytical results and simulation, we demonstrate that the information plane trajectory is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities like tanh yield a compression phase as neural activations enter the saturation regime, but linear activation functions and single-sided saturating nonlinearities like the widely used ReLU in fact do not. Moreover, we find that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa. Next, we show that the compression phase, when it exists, does not arise from stochasticity in training by demonstrating that we can replicate the IB findings using full batch gradient descent rather than stochastic gradient descent. Finally, we show that when an input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, although the overall information about the input may monotonically increase with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period.
The authors address the issue of whether the information bottleneck (IB) theory can provide insight into the working of deep networks. They show, using some counter-examples, that the previous understanding of IB theory and its application to deep networks is limited. PROS: The paper is very well written and makes its points very clearly. To the extent of my knowledge, the content is original. Since it clearly elucidates the limitations of IB theory in its ability to analyse deep networks, I think it is a significant contribution worthy of acceptance. The experiments are also well designed and executed. CONS: On the downside, the limitations are exposed empirically, but the underlying theoretical causes are not explored (although this could potentially be because this is hard to do). Also, the paper exposes the limitations of another paper published in a non-peer-reviewed location (arXiv), which potentially limits its applicability and significance. Some detailed comments: In section 2, the influence of binning on how the mutual information is calculated should be made clear. Since the comparison is between a bounded non-linearity and an unbounded one, it is not self-evident how the binning in the latter case should be done. A justification for the choice made for binning the relu case would be helpful. In the same section, it is claimed that the dependence of the mutual information I(X; T) on the magnitude of the weights of the network explains why a tanh non-linearity shows the compression effect (non-monotonicity vs I(X; T)) in the information plane dynamics. But the claim that large weights are required for doing anything useful is unsubstantiated and would benefit from having citations to papers that discuss this issue. If networks with small weights are able to learn most datasets, the arguments given in this section wouldn't be applicable in their entirety. Additionally, figures that show the phase plane dynamics for other non-linearities, e.g. relu+ or sigmoid, should be added, at least in the supplementary section. This is important to complete the overall picture of how the compression effect depends on having specific activation functions. In section 3, a sentence or two should be added to describe what a "teacher-student setup" is, and how it is relevant/interesting. Also in section 3, the cases where batch gradient descent is used and where stochastic gradient descent is used should be pointed out much more clearly. It is mentioned in the first line of page 7 that batch gradient descent is used, but it is not clear why SGD couldn't have been used to keep things consistent. This applies to figure 4 too. In section 4, it seems inconsistent that the comparison of SGD vs BGD is done using a linear network as opposed to a relu network, which is what's used in Section 2. At the least, a comparison using relu should be added to the supplementary section. Minor comments: The different figure styles used in Fig 4A and C, which have the same quantities plotted, make it confusing. An additional minor comment on the figures: some of the labels are hard to read on the manuscript.
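Since the binning question comes up above, here is a minimal sketch of the kind of plug-in, binned estimate of I(X;T) that the debate revolves around; the number of bins and the use of the empirical min/max as bin edges are illustrative choices of mine, and it is exactly the edge choice for an unbounded relu layer that the review asks the authors to justify.

import numpy as np

def discrete_entropy(states):
    _, counts = np.unique(states, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def binned_mutual_information(x_ids, t_activations, n_bins=30):
    # Discretize each hidden unit's activation into equal-width bins, then treat each
    # distinct row of binned activations as one discrete state of the layer T.
    edges = np.linspace(t_activations.min(), t_activations.max(), n_bins + 1)
    states = np.unique(np.digitize(t_activations, edges), axis=0, return_inverse=True)[1]
    h_t = discrete_entropy(states)
    # H(T | X): average entropy of T within each group of samples sharing an input id.
    h_t_given_x = sum((x_ids == xid).mean() * discrete_entropy(states[x_ids == xid])
                      for xid in np.unique(x_ids))
    return h_t - h_t_given_x   # I(X;T) = H(T) - H(T|X) for the binned variables

# Toy usage: 200 samples, 10 hidden units, each input id repeated twice.
rng = np.random.default_rng(0)
acts = np.tanh(rng.normal(size=(200, 10)))
print(binned_mutual_information(np.repeat(np.arange(100), 2), acts))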
iclr_2018_HJvvRoe0W
Published as a conference paper at ICLR 2018 AN IMAGE REPRESENTATION BASED CONVOLUTIONAL NETWORK FOR DNA CLASSIFICATION The folding structure of the DNA molecule combined with helper molecules, also referred to as the chromatin, is highly relevant for the functional properties of DNA. The chromatin structure is largely determined by the underlying primary DNA sequence, though the interaction is not yet fully understood. In this paper we develop a convolutional neural network that takes an image-representation of primary DNA sequence as its input, and predicts key determinants of chromatin structure. The method is developed such that it is capable of detecting interactions between distal elements in the DNA sequence, which are known to be highly relevant. Our experiments show that the method outperforms several existing methods both in terms of prediction accuracy and training time.
Dear editors, the authors addressed all of my comments and clearly improved their manuscript over multiple iterations. I therefore increased my rating from ‘6: Marginally above acceptance threshold’ to ‘7: Good paper, accept’. Please note that the authors made important edits to their manuscript after the ICLR deadline and could hence not upload their most current version, which you can from https://file.io/WIiEw9. If you decided to publish the manuscript, I hence highly suggest using this (https://file.io/WIiEw9) version. Best, -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The authors present Hilbert-CNN, a convolutional neural network for DNA sequence classification. Unlike existing methods, their model does not use the raw one-dimensional (1D) DNA sequence as input, but two-dimensional (2D) images obtained by mapping sequences to images using spacing-filling Hilbert-Curves. They further present a model (Hilbert-CNN) that is explicitly designed for Hilbert-transformed DNA sequences. The authors show that their approach can increase classification accuracy and decrease training time when applied to predicting histone-modification marks and splice junctions. Major comments ============= 1. The motivation of transforming sequences into images is unclear and claimed benefits are not sufficiently supported by experiments. The essence of deep neural networks is to learn a hierarchy of features from the raw data instead of engineering features manually. Using space filling methods such as Hilbert-curves to transform (DNA) sequences into images can be considered as unnecessary feature-engineering. The authors claim that ‘CNNs have proven to be most powerful when operating on multi-dimensional input, such as in image classification’, which is wrong. Sequence-based convolutional and recurrent models have been successfully applied for modeling natural languages (translation, sentiment classification, …), acoustic signals (speech recognition, audio generation), or biological sequences (e.g. predicting various epigenetic marks from DNA as reviewed in Angermueller et al). They further claim that their method can ‘better take the spatial features of DNA sequences into account’ and can better model ‘long-term interactions’ between distant regions. This is not obvious since Hilbert-curves map adjacent sequence characters to pixels that are close to each other as described by the authors, but distant characters to distant pixels. Hence, 2D CNN must be deep enough for modeling interactions between distant image features, in the same way as a 1D CNN. Transforming sequences to images has several drawbacks. 1) Since the resulting images have a small width and height but many channels, existing 2D CNNs such as ResNet or Inception can not be applied, which also required the authors to design a specific model (Hilbert-CNN). 2) Hilbert-CNN requires more memory due to empty image regions. 3) Due to the high number of channels, convolutional filters have more parameters. 4) The sequence-to-image transformation makes model-interpretability hard, which is in particular important in biology. For example, motifs of the first convolutional layers can not be interpreted as sequence motifs (as described in Angermueller et al) and it is unclear how to analyze the influence of sequence characters using attention or gradient-based methods. 
The authors should more clearly motivate their model in the introduction, tone-down the benefit of sequence-to-image transformations, and discuss drawbacks of their model. This requires major changes of introduction and discussion. 2. The authors should more clearly describe which and how they optimized hyper-parameters. The authors should optimize the most important hyper-parameters of their model (learning rate, batch size, weight decay, max vs. average pooling, ELU vs. ReLU, …) and baseline models on a holdout validation set. The authors should also report the validation accuracy for different sequence lengths, k-mer sizes, and space filling functions. Can their model be applied to longer sequences (>= 1kbp) which had been shown to improve performance (e.g. 10.1101/gr.200535.115)? Does Figure 4 show the performance on the training, validation, or test set? 3. It is unclear if the performance gain is due the proposed sequence-to-image transformation, or due to the proposed network architecture (Hilbert-CNN). It is also unclear if Hilbert-CNNs are applicable to DNA sequence classification tasks beyond predicting chromatin states and splice junctions. To address these points, the authors should compare Hilbert-CNN to models of the same capacity (number of parameters) and optimize hyper-parameters (k-mer size, convolutional filter size, learning rate, …) in the same way as they did for Hilbert-CNN. The authors should report the number of parameters of all models (Hilbert-CNN, Seq-CNN, 1D-sequence-CNN (Table 5), and LSTM (Table 6), …) in an additional table. The authors should also compare Hilbert-CNN to the DanQ architecture on predicting epigenetic markers using the same dataset as reported in the DanQ publication (DOI: 10.1093/nar/gkw226). The authors should also compare Hilbert-CNNs to gapped-kmer SVM, a shallow model that had been successfully applied for genomic prediction tasks. 4. The authors should report the AUC and area under precision-recall curve (APR) in additional to accuracy (ACC) in Table 3. 5. It is unclear how training time was measured for baseline models (Seq-CNN, LSTM, …). The authors should use the same early stopping criterion as they used for training Hilber-CNNs. The authors should also report the training time of SVM and gkm-SVM (see comment 3) in Table 3. Minor comments ============= 1. The authors should avoid uninformative adjectives and clutter throughout the manuscript, for example ‘DNA is often perceived’, ‘Chromatin can assume’, ‘enlightening’, ‘very’, ‘we first have to realize’, ‘do not mean much individually’, ‘very much like the tensor’, ‘full swing’, ‘in tight communication’, ‘two methods available in the literature’. The authors should point out in section two that k-mers can be overlapping. 2. Section 2.1: One-hot vectors is not the only way for embedding words. The authors should also mention Glove and word2vec. Similar approaches had been applied to protein sequences (DOI: 10.1371/journal.pone.0141287) 3. The authors should more clearly describe how Hilbert-curves map sequences to images and how images are cropped. What does ‘that is constructed in a recursive manner’ mean? Simply cropping the upper half of Figure 1c would lead to two disjoint sequences. What is the order of Figure 1e? 4. The authors should consistently use ‘channels’ instead of ‘full vector of length’ to denote the dimensionality of image pixels. 5. The authors should use ‘Batch norm’ instead of ‘BN’ in Figure 2 for clarification. 6. 
Hilbert-CNN is similar to ResNet (He et al., 2016), which consists of multiple ‘residual blocks’, where each block is a sequence of ‘residual units’. A ‘computational block’ in Hilbert-CNN contains two parallel ‘residual blocks’ (Figure 3) instead of a sequence of ‘residual units’. The authors should use ‘residual block’ instead of ‘computational block’, and ‘residual units’ as in the original ResNet publication. The authors should also motivate why two residual units/blocks are applied in parallel instead of sequentially. 7. Caption table 1: the authors should clarify if ‘Output size’ is ‘height, width, channels’, and explain the notation in ‘Description’ (or refer to the text).
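For readers unfamiliar with the sequence-to-image mapping being reviewed, here is a minimal sketch of the standard Hilbert-curve index-to-coordinate routine; it only illustrates the space-filling mapping itself, not the paper's k-mer encoding, cropping, or channel layout.

def hilbert_d2xy(order, d):
    # Map position d along a Hilbert curve filling a (2**order x 2**order) grid to (x, y).
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                          # rotate/flip the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Consecutive sequence positions land on adjacent pixels:
print([hilbert_d2xy(2, d) for d in range(16)])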
iclr_2018_HymuJz-A-
The robust and efficient recognition of visual relations in images is a hallmark of biological vision. Here, we argue that, despite recent progress in visual recognition, modern machine vision algorithms are severely limited in their ability to learn visual relations. Through controlled experiments, we demonstrate that visual-relation problems strain convolutional neural networks (CNNs). The networks eventually break altogether when rote memorization becomes impossible such as when the intra-class variability exceeds their capacity. We further show that another type of feedforward network, called a relational network (RN), which was shown to successfully solve seemingly difficult visual question answering (VQA) problems on the CLEVR datasets, suffers similar limitations. Motivated by the comparable success of biological vision, we argue that feedback mechanisms including working memory and attention are the key computational components underlying abstract visual reasoning.
The authors introduce a set of very simple tasks that are meant to illustrate the challenges of learning visual relations. They then evaluate several existing network architectures on these tasks, and show that results are not as impressive as others might have assumed they would be. They show that while recent approaches (e.g. relational networks) can generalize reasonably well on some tasks, these results do not generalize as well to held-out-object scenarios as might have been assumed. Clarity: The paper is fairly clearly written. I think I mostly followed it. Quality: I'm intrigued by but a little uncomfortable with the generalization metrics that the authors use. The authors estimate the performance of algorithms by how well they generalize to new image scenarios when trained on other image conditions. The authors state that ". . . the effectiveness of an architecture to learn visual-relation problems should be measured in terms of generalization over multiple variants of the same problem, not over multiple splits of the same dataset." Taken literally, this would rule out a lot of modern machine learning, even obviously very good work. On the other hand, it's clear that at some point, generalization needs to occur in testing ability to understand relationships. I'm a little worried that it's "in the eye of the beholder" whether a given generalization should be expected to work or not. There are essentially three scenarios of generalization discussed in the paper: (a) various generalizations of image parameters in the PSVRT dataset (b) various hold-outs of the image parameters in the sort-of-CLEVR dataset (c) from sort-of-CLEVR "objects" to PSVRT bit patterns The result that existing architectures didn't do very well at these generalizations (especially b and c) *may* be important -- or it may not. Perhaps if CNN+RN were trained on a quite rich real-world training set with a variety of real-world three-D objects beyond those shown in sort-of-CLEVR, it would generalize to most other situations that might be encountered. After all, when we humans generalize to understanding relationships, exactly what variability is present in our "training sets" as compared to our "testing" situations? How do the authors know that humans are effectively generalizing rather than just "interpolating" within their (very rich) training set? It's not totally clear to me that if totally naive humans (who had never seen spatial relationships before) were evaluated on exactly the training/testing scenarios described above, that they would generalize particularly well either. I don't think it can just be assumed a priori that humans would be super good this form of generalization. So how should authors handle this criticism? What would be useful would either be some form of positive control. Either human training data showing very effective generalization (if one could somehow make "novel" relationships unfamiliar to humans), or a different network architecture that was obviously superior in generalization to CNN+RN. If such were present, I'd rate this paper significantly higher. Also, I can't tell if I really fully believe the results of this paper. I don't doubt that the authors saw the results they report. However, I think there's some chance that if the same tasks were in the hands of people who *wanted* CNNs or CNN+RN to work well, the results might have been different. 
I can't point to exactly what would have to be different to make things "work", because it's really hard to do that ahead of actually trying to do the work. However, this suspicion on my part is actually a reason I think it might be *good* for this paper to be published at ICLR. This will give the people working on (e.g.) CNN+RN somewhat more incentive to try out the current paper's benchmarks and either improve their architecture or show that the existing one would have totally worked if only tried correctly. I myself am very curious about what would happen and would love to see this exchange catalyzed. Originality and Significance: The area of relation extraction seems to me to be very important and probably a bit less intensively worked on than it should be. However, as the authors here note, there's been some recent work (e.g. Santoro 2017) in the area. I think that the introduction of baseline benchmark challenge datasets such as the ones the authors describe here is very useful, and is a somewhat novel contribution.
iclr_2018_rJv4XWZA-
In this paper, we present a technique for generating artificial datasets that retain statistical properties of the real data while providing differential privacy guarantees with respect to this data. We include a Gaussian noise layer in the discriminator of a generative adversarial network to make the output and the gradients differentially private with respect to the training data, and then use the generator component to synthesise a privacy-preserving artificial dataset. Our experiments show that under a reasonably small privacy budget we are able to generate data of high quality and successfully train machine learning models on this artificial data.
The paper proposes a technique for differentially privately generating synthetic data using GAN, and experimentally showed that their method achieves both high utility and good privacy. The idea of building a differentially private GAN and generating differentially private synthetic data is very interesting. However, my main concern is the privacy aspect of the technique, as it is not explained clearly enough in the paper. There is also room for improvement in the presentation and clarity of the paper. More details: - About the differential privacy aspect: The author didn't provide detailed privacy analysis of the Gaussian noise layer, and I don't find the values of the sensitivity (C = 1) provided in the answer to a public comment easy to see. Also, the paper mentioned that the batch size is 32 and the author mentioned in the comment that the std of the Gaussian noise is 0.7, and the number of epoch is 50 or 150. I think these values would lead to epsilon much larger than 8 (as in Table 1). However, in Section 5.2, it is said that "Privacy bounds were evaluated using the moments accountant and the privacy amplification theorem (Abadi et al., 2016), and therefore, are data-dependent and are tighter than using normal composition theorems." I don't see clearly why privacy amplification is needed here, and why using moments accountant and privacy amplification can lead to data-dependent privacy loss. In general, I don't find the privacy analysis of this paper clear and detailed enough to convince me about the correctness of the privacy results. However, I am very happy to change my opinion if there are convincing details in the rebuttal. - About the presentation: As a paper proposing a differentially private algorithm, detailed and formal analysis of the privacy guarantees is essential to convince the readers. For example, I think it would be much better if there is a formal theorem showing the sensitivity of the Gaussian noise layer. And it would be better to restate (in Appendix 7.4) not only the definition of moments accountant, but the composition and tail bound, as well as the moments accountant for the Gaussian mechanism, since they are all used in the privacy analysis of this paper.
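For context on the sensitivity and noise-scale discussion above, here is a minimal sketch of the classic (epsilon, delta) Gaussian mechanism calibration; the formula is the standard loose bound (valid for epsilon <= 1) and is not the moments-accountant analysis the paper relies on, so the noise it prescribes will generally be more pessimistic.

import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    # Classic calibration: sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
    # The reviewed paper instead tracks the cumulative privacy loss of many noisy
    # releases with the moments accountant, which gives tighter bounds.
    rng = np.random.default_rng(0) if rng is None else rng
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privatise a clipped quantity with sensitivity C = 1.
print(gaussian_mechanism(np.zeros(4), sensitivity=1.0, epsilon=1.0, delta=1e-5))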
iclr_2018_Syr8Qc1CW
Disentangling factors of variation has always been a challenging problem in representation learning. Existing algorithms suffer from many limitations, such as unpredictable disentangling factors, bad quality of generated images from encodings, lack of identity information, etc. In this paper, we propose a supervised algorithm called DNA-GAN that tries to disentangle different attributes of images. The latent representations of images are DNA-like, in which each individual piece represents an independent factor of variation. By annihilating the recessive piece and swapping a certain piece of two latent representations, we obtain another two different representations which can be decoded into images. In order to obtain realistic images and also disentangled representations, we introduce a discriminator for adversarial training. Experiments on the Multi-PIE and CelebA datasets demonstrate the effectiveness of our method and its advantage in overcoming limitations of other existing methods.
Summary: This paper investigated the problem of attribute-conditioned image generation using generative adversarial networks. More specifically, the paper proposed to generate images from attribute and latent code as high-level representation. To learn the mapping from image to high-level representations, an auxiliary encoder was introduced. The model was trained using a combination of reconstruction (auto-encoding) and adversarial loss. To further encourage effective disentangling (against trivial solution), an annihilating operation was proposed together with the proposed training pipeline. Experimental evaluations were conducted on standard face image databases such as Multi-PIE and CelebA. == Novelty and Significance == Multi-attribute image generation is an interesting task but has been explored to some extent. The integration of generative adversarial networks with auto-encoding loss is not really a novel contribution. -- Autoencoding beyond pixels using a learned similarity metric. Larsen et al., In ICML 2016. == Technical Quality == First, it is not clear how was the proposed annihilating operation used in the experiments (there is no explanation in the experimental section). Based on my understanding, additional loss was added to encourage effective disentangling (prevent trivial solution). I would appreciate the authors to elaborate this a bit. Second, the iterative training (section 3.4) is not a novel contribution since it was explored in the literature before (e.g., Inverse Graphics network). The proof developed in the paper provides some theoretical analysis but cannot be considered as a significant contribution. Third, it seems that the proposed multi-attribute generation pipeline works for binary attribute only. However, such assumption limits the generality of the work. Since the title is quite general, I would assume to see the results (1) on datasets with real-valued attributes, mixture attributes or even relative attributes and (2) not specific to face images. -- Learning to generate chairs with convolutional neural networks. Dosovitskiy et al., In CVPR 2015. -- Deep Convolutional Inverse Graphics Network. Kulkarni et al., In NIPS 2015. -- Attribute2Image: Conditional Image Generation from Visual Attributes. Yan et al., In ECCV 2016. -- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. Chen et al., In NIPS 2016. Additionally, considering the generation quality, the CelebA samples in the paper are not the state-of-the-art. I suspect the proposed method only works in a more constrained setting (such as Multi-PIE where the images are all well aligned). Overall, I feel that the submitted version is not ready for publication in the current form.
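To make the latent-piece manipulation discussed above concrete, here is a toy sketch of the two tensor operations the abstract names: swapping one attribute piece between two latent codes, and annihilating (zeroing) a piece. How exactly these operations are wired into the paper's training pipeline is not reproduced here, so treat this only as an illustration of the operations themselves.

import numpy as np

def swap_piece(z_a, z_b, piece, n_pieces):
    # Swap the `piece`-th chunk (one attribute's sub-code) between two latent vectors.
    za, zb = z_a.reshape(n_pieces, -1).copy(), z_b.reshape(n_pieces, -1).copy()
    za[piece], zb[piece] = zb[piece].copy(), za[piece].copy()
    return za.ravel(), zb.ravel()

def annihilate_piece(z, piece, n_pieces):
    # Zero out one chunk so the corresponding attribute cannot be encoded there.
    z = z.reshape(n_pieces, -1).copy()
    z[piece] = 0.0
    return z.ravel()

z1, z2 = np.arange(8.0), -np.arange(8.0)
print(swap_piece(z1, z2, piece=1, n_pieces=4))
print(annihilate_piece(z1, piece=1, n_pieces=4))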
iclr_2018_B1gJ1L2aW
Published as a conference paper at ICLR 2018 CHARACTERIZING ADVERSARIAL SUBSPACES USING LOCAL INTRINSIC DIMENSIONALITY Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called 'adversarial subspaces') in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors. We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks. As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets . Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.
The paper considers a problem of adversarial examples applied to the deep neural networks. The authors conjecture that the intrinsic dimensionality of the local neighbourhood of adversarial examples significantly differs from the one of normal (or noisy) examples. More precisely, the adversarial examples are expected to have intrinsic dimensionality much higher than the normal points (see Section 4). Based on this observation they propose to use the intrinsic dimensionality as a way to separate adversarial examples from the normal (and noisy) ones during the test time. In other words, the paper proposes a particular approach for the adversarial defence. It turns out that there is a well-studied concept in the literature capturing the desired intrinsic dimensionality: it is called the local intrinsic dimensionality (LID, Definition 1) . Moreover, there is a known empirical estimator of LID, based on the k-nearest neighbours. The authors propose to use this estimator in computing the intrinsic dimensionalities for the test time examples. For every test-time example X the resulting Algorithm 1 computes LID estimates of X activations computed for all intermediate layer of DNN. These values are finally used as features in classifying adversarial examples from normal and noisy ones. The authors empirically evaluate the proposed technique across multiple state-of-the art adversarial attacks, 3 datasets (MNIST, CIFAR10, and SVHN) and compare their novel adversarial detection technique to 2 other ones recently reported in the literature. The experiments support the conjecture mentioned above and show that the proposed technique *significantly* improves the detection accuracy compared to 2 other methods across all attacks and datasets (see Table 1). Interestingly, the authors also test whether adversarial attacks can bypass LID-based detection methods by incorporating LID in their design. Preliminary results show that even in this case the proposed method manages to detect adversarial examples most of the time. In other words, the proposed technique is rather stable and can not be easily exploited. I really enjoyed reading this paper. All the statements are very clear, the structure is transparent and easy to follow. The writing is excellent. I found only one typo (page 8, "We also NOTE that..."), otherwise I don't actually have any comments on the text. Unfortunately, I am not an expert in the particular field of adversarial examples, and can not properly assess the conceptual novelty of the proposed method. However, it seems that it is indeed novel and given rather convincing empirical justifications, I would recommend to accept the paper.
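As a pointer for readers, the k-nearest-neighbour estimator referred to above is the maximum-likelihood (Hill-type) LID estimate; the sketch below is my own minimal version of that estimator applied to a batch of reference points, with the batch, k, and the distance metric left as illustrative choices (the paper computes this per layer on minibatch activations).

import numpy as np

def lid_mle(x, reference_batch, k=20):
    # LID(x) is estimated as -( (1/k) * sum_{i=1..k} log(r_i / r_k) )^{-1},
    # where r_i is the Euclidean distance from x to its i-th nearest neighbour
    # in reference_batch (x itself is assumed not to be in the batch).
    dists = np.sort(np.linalg.norm(reference_batch - x, axis=1))[:k]
    return -1.0 / np.mean(np.log(dists / dists[-1] + 1e-12))

rng = np.random.default_rng(0)
batch = rng.normal(size=(128, 64))          # stand-in for one layer's activations
print(lid_mle(rng.normal(size=64), batch))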
iclr_2018_rJa90ceAb
Conventionally, convolutional neural networks (CNNs) process different images with the same set of filters. However, the variations in images pose a challenge to this approach. In this paper, we propose to generate sample-specific filters for convolutional layers in the forward pass. Since the filters are generated on-the-fly, the model becomes more flexible and can better fit the training data compared to traditional CNNs. In order to obtain sample-specific features, we extract the intermediate feature maps from an autoencoder. As filters are usually high dimensional, we propose to learn a set of coefficients instead of a set of filters. These coefficients are used to linearly combine the base filters from a filter repository to generate the final filters for a CNN. The proposed method is evaluated on the MNIST, MTFL and CIFAR10 datasets. Experimental results demonstrate that the classification accuracy of the baseline model can be improved by using the proposed filter generation method.
The authors propose an approach to dynamically generating filters in a CNN based on the input image. The filters are generated as linear combinations of a basis set of filters, based on features extracted by an auto-encoder. The authors test the approach on recognition tasks on three datasets: MNIST, MTFL (facial landmarks) and CIFAR10, and show a small improvement over baselines without dynamic filters. Pros: 1) I have not seen this exact approach proposed before. 2) Their method is evaluated on three datasets and two tasks: classification and facial landmark detection. Cons: 1) The authors are not the first to propose dynamically generating filters, and they clearly mention that the work of De Brabandere et al. is closely related. Yet, there is no comparison to other methods for dynamic weight generation. 2) Related to that, there is no ablation study, so it is unclear if the authors’ contributions are useful. I appreciate the analysis in Tables 1 and 2, but this is not sufficient. Why the need for the autoencoder - why can’t the whole network be trained end-to-end on the goal task? Why generate filters as a linear combination - is this just for computational reasons, or also for accuracy? This should be analyzed empirically. 3) The experiments are somewhat substandard: - On MNIST the authors use a tiny, poorly performing network, and it is no surprise that one can beat it with a bigger dynamic filter network. - The MTFL experiments look most convincing (although this might be because I am not familiar with SoTA on the dataset), but still there is no control for the number of parameters, and the performance improvements are not huge. - On CIFAR10 there is a marginal improvement in performance, which, as the authors admit, can also be reached by using a deeper model. The baseline models are far from SoTA - the authors should look at more modern architectures such as AllCNN (not particularly new or good, but very simple), ResNet, wide ResNet, DenseNet, etc. As a comment, I don’t think classification is a good task for showcasing such an architecture - classification is already working extremely well. Many other tasks - for instance, detection, tracking, few-shot learning - seem much more promising. To conclude, the authors propose a new approach to learning convolutional networks with dynamic input-conditioned filters. Unfortunately, the authors fail to demonstrate the value of the proposed method. I therefore recommend rejection.
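For readers skimming the review, a minimal sketch of the filter-generation idea being evaluated: per-sample filters formed as linear combinations of a shared repository of base filters. The tensor shapes and the einsum formulation are mine, and in the paper the coefficients would come from autoencoder features rather than being random.

import numpy as np

rng = np.random.default_rng(0)
n_base, n_in, n_out, k, batch = 8, 3, 16, 3, 4
base_filters = rng.normal(size=(n_base, n_in, k, k))      # shared filter repository
coeffs = rng.normal(size=(batch, n_out, n_base))           # sample-specific coefficients

# Each sample b gets its own (n_out, n_in, k, k) filter bank:
filters = np.einsum('bon,nikl->boikl', coeffs, base_filters)
print(filters.shape)   # (4, 16, 3, 3, 3)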
iclr_2018_rJ1RPJWAW
This paper explores the simplicity of learned neural networks under various settings: learned on real vs random data, varying size/architecture and using large minibatch size vs small minibatch size. The notion of simplicity used here is that of learnability i.e., how accurately can the prediction function of a neural network be learned from labeled samples from it. While learnability is different from (in fact often higher than) test accuracy, the results herein suggest that there is a strong correlation between small generalization errors and high learnability. This work also shows that there exist significant qualitative differences in shallow networks as compared to popular deep networks. More broadly, this paper extends in a new direction, previous work on understanding the properties of learned neural networks. Our hope is that such an empirical study of understanding learned neural networks might shed light on the right assumptions that can be made for a theoretical study of deep learning.
Summary: This paper presents very nice experiments comparing the complexity of various different neural networks using the notion of "learnability" --- the learnability of a model (N1) is defined as the "expected agreement" between the output of N1, and the output of another model N2 which has been trained to match N1 (on a dataset of size n). The paper suggests that the learnability of a model is a good measure of how simple the function learned by that model is --- furthermore, it shows that this notion of learnability correlates well (across extensive experiments) with the test accuracy of the model. The paper presents a number of interesting results: 1) Larger networks are typically more learnable than smaller ones (typically we think of larger networks as being MORE complicated than smaller networks -- this result suggests that in an important sense, large networks are simpler). 2) Networks trained with random data are significantly less learnable than networks trained on real data. 3) Networks trained on small mini-batches (larger variance SGD updates) are more learnable than those trained on large minibatches. These results are in line with several of the observations made by Zhang et al (2017), which showed that neural networks are able to both (a) fit random data, and (b) generalize well; these results at first seem to run counter to the ideas from statistical learning theory that models with high capacity (VC dimension, Rademacher complexity, etc.) have much weaker generalization guarantees than lower capacity models. These results suggest that models that have high capacity (by one definition) are also capable of being simple (by another definition). These results nicely complement the work which studies the "sharpness/curvature" of the local minima found by neural networks, which argues that the minima which generalize better are those with lower curvature. Review: Quality: I found this to be high quality work. The paper presents many results across a variety of network architectures. One area for improvement is presenting results on larger datasets (currently all experiments are on CIFAR-10), and/or on non-convolutional architectures. Additionally, a discussion of why learnability might imply low generalization error would have been interesting (the more formal, the better), though it is unclear how difficult this would be. Clarity: The paper is written clearly. A small point: Step 2 in section 3.1 should specify that the argmax of N1(D2) is used to generate labels for the training of the second network. Also, what dataset D_i is used for tables 3-6? Please specify. Originality: The specific questions tackled in this paper are original (learnability on random vs. real data, large vs. small networks, and large vs. small mini-batch training). But it is unclear to me exactly how original this use of "learnability" is in evaluating how simple a model is. It seems to me that this particular use of "learnability" is original, even though PAC learnability was defined a while ago. Significance: I find the results in this paper to be quite significant, and to provide a new way of understanding why deep neural networks generalize. I believe it is important to find new ways of formally defining the "simplicity/capacity" of a model, such that "simpler" models can be proven to have a smaller generalization gap (between train and test error) relative to more "complicated" models.
It is clear that VC dimension and Rademacher complexity alone are not enough to explain the generalization performance of neural networks, and that neural networks with high capacity by these definitions are likely "simple" by other definitions (as we have seen in this paper). This paper makes an important contribution to this conversation, and could perhaps provide a starting point for theoreticians to better explain why deep networks generalize well. Pros - nice experiments, with very interesting results. - Helps explain one way in which large networks are in fact "simple" Cons - The paper does not attempt to relate the notion of learnability to that of generalization performance. All it says is that these two metrics appear to be well correlated.
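To spell out the metric in code, here is a minimal sketch of the learnability measurement described in the summary: train a fresh student N2 on the hard (argmax) labels of the trained model N1 and report their agreement on held-out data. scikit-learn's MLPClassifier is used only as a stand-in for the CNN architectures in the paper, and the data split and hyperparameters are illustrative.

import numpy as np
from sklearn.neural_network import MLPClassifier

def learnability(teacher, X_fit, X_eval):
    # Label a fresh dataset with the teacher's argmax predictions...
    y_fit = teacher.predict(X_fit)
    # ...train a second model only on those labels...
    student = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
    student.fit(X_fit, y_fit)
    # ...and report the expected agreement between the two models on unseen inputs.
    return np.mean(student.predict(X_eval) == teacher.predict(X_eval))

# Toy usage with another MLP playing the role of the trained model N1.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
teacher = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=1)
teacher.fit(X[:1000], (X[:1000, 0] > 0).astype(int))
print(learnability(teacher, X[1000:1500], X[1500:]))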
iclr_2018_SJJinbWRZ
MODEL-ENSEMBLE TRUST-REGION POLICY OPTIMIZATION Model-free reinforcement learning (RL) methods are succeeding in a growing number of tasks, aided by recent advances in deep learning. However, they tend to suffer from high sample complexity which hinders their use in real-world domains. Alternatively, model-based reinforcement learning promises to reduce sample complexity, but tends to require careful tuning and, to date, it has succeeded mainly in restrictive domains where simple models are sufficient for learning. In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and we show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training. To overcome this issue, we propose to use an ensemble of models to maintain the model uncertainty and regularize the learning process. We further show that the use of likelihood ratio derivatives yields much more stable learning than backpropagation through time. Altogether, our approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO) significantly reduces the sample complexity compared to model-free deep RL methods on challenging continuous control benchmark tasks.
Summary: The paper proposes to use ensembles of models to overcome a typical problem when training on a learned model: that the policy learns to take advantage of errors of the model. The models use the same training data but are differentiated by different parameter initializations and by training on differently drawn minibatches. To train the policy, at each step the next state is taken from a uniformly randomly drawn model. For validation the policy is evaluated on all models and training is stopped early if it doesn't improve on enough of them. While the idea to use an ensemble of deep neural networks to estimate their uncertainty is not new, I haven't seen it yet in this context. They successfully show in their experiments that typical levels of performance can be achieved using far fewer samples from the real environment. The reduction in required samples is over an order of magnitude for simple environments (Mujoco Swimmer). However, (as expected for model-based algorithms) both the performance as well as the reduction in sample complexity get worse with increasing complexity of the environment. It can still successfully tackle the Humanoid Mujoco task but my guess is that that is close to the upper limit of this algorithm? Overall the paper is a solid and useful contribution to the field. *Quality:* The paper clearly shows the advantage of the proposed method in the experimental section, where it compares to several baselines (and not only one, thank you for that!). Things which in my opinion aren't absolutely required in the paper but I would find interesting and useful (e.g. in the appendix) are: 1. How does the runtime (e.g. number of total samples drawn from both the models and the real environment, including for validation purposes) compare? From the experiments I would guess that MB-TRPO is about two to three orders of magnitude slower, but having this information would be useful. 2. For more complex environments it seems that training is becoming less stable and performance degrades, especially for the Humanoid environment. A plot like in figure 4 (different number of models) for the Humanoid environment could be interesting? Additionally maybe a short discussion of where the major problem for further scaling lies? For example: expressiveness of the models? Required number of models / computational feasibility? Etc... This is not necessarily required for the paper but would be interesting. *Originality & Significance:* As far as I can tell, none of the fundamental ideas are new. However, they are combined in an interesting, novel way that shows significant performance improvements. The problem the authors tackle, namely learning a deep neural network model for model-based RL, is important and relevant. As such, the paper contributes to the field and should be accepted. *Smaller questions and notes:* - Longer training times for MB-TRPO, in particular for Ant and Humanoid, would have been interesting if computationally feasible. - Could this in principle be used with Q-learning as well (instead of TRPO) if the action space is discrete? Or is there an obvious reason why not that I am missing?
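To illustrate the two mechanisms the summary highlights, a minimal sketch of (i) stepping an imagined rollout with a model drawn uniformly at random from the ensemble and (ii) an early-stopping rule that keeps training only while the policy improves on enough ensemble members; the model/policy interfaces and the 70% threshold are my own illustrative placeholders, not the paper's exact values.

import numpy as np

def ensemble_rollout_step(models, state, action, rng):
    # Predict the next state with one ensemble member chosen uniformly at random,
    # so the policy cannot exploit the idiosyncratic errors of a single model.
    model = models[rng.integers(len(models))]
    return model.predict(state, action)

def keep_training(returns_now, returns_before, min_fraction=0.7):
    # Continue policy optimization only while the estimated return improves
    # on at least `min_fraction` of the ensemble models (validation-style early stop).
    improved = np.asarray(returns_now) > np.asarray(returns_before)
    return improved.mean() >= min_fraction

print(keep_training([10.0, 12.0, 9.0, 11.0], [9.0, 11.0, 9.5, 10.0]))  # True: 3/4 improved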
iclr_2018_B1jscMbAW
Published as a conference paper at ICLR 2018 DIVIDE AND CONQUER NETWORKS We consider the learning of algorithmic tasks by mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumption whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study what are its implications in terms of learning. This principle creates a powerful inductive bias that we leverage with neural architectures that are defined recursively and dynamically, by learning two scale-invariant atomic operations: how to split a given input into smaller sets, and how to merge two partially solved tasks into a larger partial solution. Our model can be trained in weakly supervised environments, namely by just observing input-output pairs, and in even weaker environments, using a non-differentiable reward signal. Moreover, thanks to the dynamic aspect of our architecture, we can incorporate the computational complexity as a regularization term that can be optimized by backpropagation. We demonstrate the flexibility and efficiency of the Divide-and-Conquer Network on several combinatorial and geometric tasks: convex hull, clustering, knapsack and Euclidean TSP. Thanks to the dynamic programming nature of our model, we show significant improvements in terms of generalization error and computational complexity.
This paper proposes to add a new inductive bias to the neural network architecture - namely a divide and conquer strategy known from algorithmics. Since the introduced model has to split data into subsets, it leads to non-differentiable paths in the graph, which the authors propose to tackle with RL and policy gradients. The whole model can be seen as an RL agent, trained to do a splitting action on a set of instances in such a way that the jointly trained predictor T's quality is maximised (and thus its current log prob: log p(Y|P(X)) becomes a reward for an RL agent). The authors claim that a model like this (strengthened with pointer networks/graph nets etc. depending on the application) leads to empirical improvement on three tasks - convex hull finding, k-means clustering and TSP. However, while the results on the convex hull task are good, the k-means ones use a single, artificial problem (and do not test DCN, but rather a part of it), and on TSP DCN performs significantly worse than baselines in-distribution, and is better when tested on bigger problems than it is trained on. However the generalisation scores themselves are pretty bad, thus it is not clear if this can be called a success story. I will be happy to revisit the rating if the experimental section is enriched. Pros: - very easy to follow idea and model - simple merge of RL and SL in an end-to-end trainable model - improvements over previous solutions Cons: - K-means experiments should not be run on an artificial dataset; there are plenty of benchmarking datasets out there. In its current form it is just a proof-of-concept experiment rather than an evaluation (+ it is only for splitting, not for the entire architecture proposed). It would also be beneficial to see the score normalised by the cost found by k-means itself (say using Lloyd's method), as otherwise the numbers are impossible to interpret. With normalisation, claiming that it finds a 20% worse solution than k-means is indeed meaningful. - TSP experiments show that "in distribution" DCN performs worse than baselines, and when generalising to bigger problems it fails more gracefully; however the accuracies on the bigger problems are pretty bad, thus it is not clear if they are significant enough to claim success. Maybe TSP is not the best application of this kind of approach (as the authors state in the paper - it is not clear how merging would be applied in the first place). - in general - the experimental section should be extended, as currently the only convincing success story lies in the convex hull experiments Side notes: - DCN is already a quite commonly used abbreviation for "Deep Classifier Network" as well as "Dynamic Capacity Network", thus it might be a good idea to find a different name. - please fix \cite calls to \citep when the author's name is not used as part of the sentence, for example: Graph Neural Network Nowak et al. (2017) should be Graph Neural Network (Nowak et al.
(2017)) # After the update Evaluation section has been updated threefold: - TSP experiments are now in the appendix rather than main part of the paper - k-means experiments are Lloyd-score normalised and involve one Cifar10 clustering - Knapsack problem has been added Paper significantly benefited from these changes, however experimental section is still based purely on toy datasets (clustering cifar10 patches is the least toy problem, but if one claims that proposed method is a good clusterer one would have to beat actual clustering techniques to show that), and in both cases simple problem-specific baseline (Lloyd for k-means, greedy knapsack solver) beats proposed method. I can see the benefit of trainable approach here, the fact that one could in principle move towards other objectives, where deriving Lloyd alternative might be hard; however current version of the paper still does not show that. I increased rating for the paper, however in order to put the "clear accept" mark I would expect to see at least one problem where proposed method beats all basic baselines (thus it has to either be the problem where we do not have simple algorithms for it, and then beating ML baseline is fine; or a problem where one can beat the typical heuristic approaches).
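For concreteness, the split-as-RL-agent formulation summarized in this review can be sketched in a few lines of Python. The snippet below is a toy REINFORCE loop in which a logistic splitting policy is rewarded by the score of a downstream predictor evaluated on the induced partition; the separation-based reward, the policy parameterization and all constants are hypothetical stand-ins for the paper's pointer/graph networks and its log p(Y|P(X)) reward, not its actual implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: a logistic policy assigns each of n points to one of two subsets;
    # the reward is a crude stand-in for the log-likelihood of a downstream
    # predictor evaluated on the induced partition.
    n, d = 16, 4
    theta = np.zeros(d)  # parameters of the Bernoulli split policy

    def partition_reward(X, assignment):
        # Hypothetical merge/predict step: reward splits whose two groups have
        # well-separated means (a placeholder for log p(Y | P(X))).
        if assignment.all() or (~assignment).all():
            return -1.0  # degenerate split, low reward
        return np.linalg.norm(X[assignment].mean(0) - X[~assignment].mean(0))

    lr = 0.1
    for step in range(200):
        X = rng.normal(size=(n, d)) + 3.0 * rng.integers(0, 2, size=(n, 1))
        probs = 1.0 / (1.0 + np.exp(-X @ theta))   # P(point goes to subset A)
        assignment = rng.random(n) < probs         # sample the split action
        reward = partition_reward(X, assignment)
        # REINFORCE: for a Bernoulli policy, grad log pi(a|x) = (a - p) * x
        grad = ((assignment - probs)[:, None] * X).mean(0)
        theta += lr * reward * grad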
iclr_2018_B1mAkPxCZ
A natural solution for one-shot learning is to augment training data to handle the data deficiency problem. However, directly augmenting in the image domain may not necessarily generate training data that sufficiently explore the intra-class space for one-shot classification. Inspired by the recent vocabulary-informed learning, we propose to generate synthetic training data with the guide of the semantic word space. Essentially, we train an auto-encoder as a bridge to enable the transformation between the image feature space and the semantic space. Besides directly augmenting image features, we transform the image features to semantic space using the encoder and perform the data augmentation. The decoder then synthesizes the image features for the augmented instances from the semantic space. Experiments on three datasets show that our data augmentation method effectively improves the performance of one-shot classification. Extensive study shows that data augmented from semantic space are complementary with those from the image space, and thus boost the classification accuracy dramatically. Source code and dataset will be available.
Summary: This paper proposes a data augmentation method for one-shot learning of image classes. This is the problem where given just one labeled image of a class, the aim is to correctly identify other images as belonging to that class as well. The idea presented in this paper is that instead of performing data augmentation in the image space, it may be useful to perform data augmentation in a latent space whose features are more discriminative for classification. One candidate for this is the image feature space learned by a deep network. However they advocate that a better candidate is what they refer to as "semantic space" formed by embedding the (word) labels of the images according to pre-trained language models like word2vec. The reasoning here is that the image feature space may not be semantically organized so that we are not guaranteed that a small perturbation of an image vector will yield image vectors that correspond to semantically similar images (belonging to the same class). On the other hand, in this semantic space, by construction, we are guaranteed that similar concepts lie near by each other. Thus this space may constitute a better candidate for performing data augmentation by small perturbations or by nearest neighbour search around the given vector since 1) the augmented data is more likely to correspond to features of similar images as the original provided image and 2) it is more likely to thoroughly capture the intra-class variability in the augmented data. The authors propose to first embed each image into a feature space, and then feed this learned representation into a auto-encoder that handles the projection to and from the semantic space with its encoder and decoder, respectively. Specifically, they propose to perform the augmentation on the semantic space representation, obtained from the encoder of this autoencoder. This involves producing some additional data points, either by adding noise to the projected semantic vector, or by choosing a number of that vector's nearest neighbours. The decoder then maps these new data points into feature space, obtaining in this way the image feature representations that, along with the feature representation of the original (real) image will form the batch that will be used to train the one-shot classifier. They conduct experiments in 3 datasets where they experiment with augmentation in the image feature space by random noise, as well as the two aforementioned types of augmentation in the semantic space. They claim that these augmentation types provide orthogonal benefits and can be combined to yield superior results. Overall I think this paper addresses an important problem in an interesting way, but there is a number of ways in which it can be improved, detailed in the comments below. Comments: -- Since the authors are using a pre-trained VGG for to embed each image, I'm wondering to what extent they are actually doing one-shot learning here. In other words, the test set of a dataset that is used for evaluation might contain some classes that were also present in the training set that VGG was originally trained on. It would be useful to clarify whether this is happening. Can the VGG be instead trained from scratch in an end-to-end way in this model? -- A number of things were unclear to me with respect to the details of the training process: the feature extractor (VGG) is pre-trained. Is this finetuned during training? If so, is this done jointly with the training of the auto-encoder? 
Further, is the auto-encoder trained separately or jointly with the training of the one-shot learning classifier? -- While the authors have convinced me that data augmentation indeed significantly improves the performance in the domains considered (based on the results in Table 1 and Figure 5a), I am not convinced that augmentation in the proposed manner leads to a greater improvement than just augmenting in the image feature domain. In particular, in Table 2, where the different types of augmentation are compared against each other, we observe similar results between augmenting only in the image feature space versus augmenting only in the semantic feature space (ie we observe that "FeatG" performs similarly as "SemG" and as "SemN"). When combining multiple types of augmentation the results are better, but I'm wondering if this is because more augmented data is used overall. Specifically, the authors say that for each image they produce 5 additional "virtual" data points, but when multiple methods are combined, does this mean 5 from each method? Or 5 overall? If it's the former, the increased performance may merely be attributed to using more data. It is important to clarify this point. -- Comparison with existing work: There has been a lot of work recently on one-shot and few-shot learning that would be interesting to compare against. In particular, mini-ImageNet is a commonly-used benchmark for this task that this approach can be applied to for comparison with recent methods that do not use data augmentation. Some examples are: - Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. (Finn et al.) - Prototypical Networks for Few-shot Learning (Snell et al.) - Matching Networks for One-shot Learning (Vinyals et al.) - Few-Shot Learning Through an Information Retrieval Lens (Triantafillou et al.) -- A suggestion: As future work I would be very interested to see if this method can be incorporated into common few-shot learning models to on-the-fly generate additional training examples from the "support set" of each episode that these approaches use for training.
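As an illustration of the encode-perturb-decode augmentation pipeline discussed in this review, here is a minimal Python sketch. The linear encoder/decoder weights, the feature and semantic dimensions, and the noise scale are all made-up placeholders for the paper's trained auto-encoder and VGG features, so this only shows the data flow, not the actual model.

    import numpy as np

    rng = np.random.default_rng(0)

    feat_dim, sem_dim = 512, 300   # e.g. image-feature size and word-vector size
    # Hypothetical pre-trained encoder/decoder weights; in the paper these come
    # from an auto-encoder trained to bridge the two spaces.
    W_enc = rng.normal(scale=0.02, size=(feat_dim, sem_dim))
    W_dec = rng.normal(scale=0.02, size=(sem_dim, feat_dim))

    def augment(image_feature, n_aug=5, noise_scale=0.1):
        """Return n_aug synthetic image-feature vectors for one real example."""
        z = image_feature @ W_enc                                    # to semantic space
        z_aug = z + noise_scale * rng.normal(size=(n_aug, sem_dim))  # perturb there
        return z_aug @ W_dec                                         # decode back

    x = rng.normal(size=feat_dim)                 # feature of the one-shot example
    batch = np.vstack([x, augment(x)])            # real point plus 5 virtual points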
iclr_2018_ryZ8sz-Ab
Workshop track - ICLR 2018 FAST AND ACCURATE TEXT CLASSIFICATION: SKIMMING, REREADING AND EARLY STOPPING Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing. For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision. We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text. In this paper, we present an approach of fast reading for text classification. Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence. Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions. With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches.
This paper proposes to augment RNNs for text classification with a mechanism that decides whether the RNN should re-read a token, skip a number of tokens, or stop and output the prediction. The motivation is that one can stop reading before the end of the text and/or skip some words and still arrive at the same answer, but faster. The idea is intriguing, even though not entirely novel. Apart from the Yu et al. (2017) cited, there is older work trying to save computational time in NLP, e.g.: Dynamic Feature Selection for Dependency Parsing. He He, Hal Daumé III and Jason Eisner. Empirical Methods in Natural Language Processing (EMNLP), 2013, which decides whether to extract a feature or not. However, it is not clear to me what is achieved here. In the example shown in Figure 5 it seems that, by not reading the whole text, the model avoids passages that might be confusing it. This could improve predictive accuracy (as it seems to do), as long as the model can handle the earlier part of the text better. But this is just an assumption, which is not guaranteed in any way. It could be that the earlier parts of the text are hard for the model. In a way, it feels more like we are addressing a limitation of RNN models in understanding text.
Pros:
- The idea is interesting and, if evaluated thoroughly, it could be quite influential.
Cons:
- the introduction states that there are two kinds of NLP problems, sequence2sequence and sequence2scalar. I found this rather confusing since text classification falls in the latter presumably, but the output is a label. Similarly, PoS tagging has a linear chain as its output, so I can't see why it is sequence2scalar. I think there is a confusion between the methods used for a task and the task itself. Being able to apply a sequence-based model to a task doesn't necessarily make the task sequential.
- the comparison in terms of FLOPs is a good idea. But wouldn't the relative savings depend on the architecture used for the RNN and the RL agent? E.g. it could be that the RL network is more complicated and using it costs more than what it saves in the RNN operations.
- While Table 2 reports the savings vs the full reading model, we don't know how much worse the model got for these savings.
- Having a trade-off between savings and accuracy is a good idea too. I would have liked to see an experiment showing how many FLOPs we can save for the same performance, which should be achievable by adjusting the alpha parameter.
- The experiments are conducted on previously published datasets. It would be good to have some previously published results on them to get a sense of how good the RNN model used is.
- Why not use smaller chunks? 20 words or one sentence at a time is rather coarse. If anything, smaller chunks should help the proposed model achieve greater savings. How much does the choice of chunk matter?
- it is stated in the conclusion that the advantage actor-critic used is beneficial, however no experimental comparison is shown. Was it used for the Yu et al. baseline too?
- It is stated that the model hardly relies on any hyperparameters; in comparison to what? It is better to quantify such statements.
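The control flow discussed above (read a chunk, then decide to stop, reread, or skip) is easy to write down explicitly. The toy Python sketch below uses random stand-ins for the RNN and the policy network; the chunk size, the action set and the FLOP counter are illustrative assumptions, not the paper's configuration.

    import numpy as np

    rng = np.random.default_rng(0)

    hidden_dim, chunk_dim, n_actions = 32, 8, 5   # actions: stop, reread, skip 1/2/3
    W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
    W_x = rng.normal(scale=0.1, size=(chunk_dim, hidden_dim))
    W_pi = rng.normal(scale=0.1, size=(hidden_dim, n_actions))

    def read(chunks, max_steps=20):
        h = np.zeros(hidden_dim)
        pos, cost = 0, 0
        for _ in range(max_steps):
            if pos >= len(chunks):
                break
            h = np.tanh(h @ W_h + chunks[pos] @ W_x)   # encode the current chunk
            logits = h @ W_pi
            p = np.exp(logits) / np.exp(logits).sum()
            action = rng.choice(n_actions, p=p)
            cost += 1                                  # crude proxy for FLOPs
            if action == 0:        # stop early and predict from h
                break
            elif action == 1:      # reread the same chunk
                continue
            else:                  # jump forward by 1, 2 or 3 chunks
                pos += action - 1
        return h, cost             # h would feed the classifier head

    chunks = rng.normal(size=(10, chunk_dim))   # a document as 10 chunks
    h, cost = read(chunks)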
iclr_2018_H1O0KGC6b
One of the main challenges of deep learning methods is the choice of an appropriate training strategy. In particular, additional steps, such as unsupervised pretraining, have been shown to greatly improve the performances of deep structures. In this article, we propose an extra training step, called post-training, which only optimizes the last layer of the network. We show that this procedure can be analyzed in the context of kernel theory, with the first layers computing an embedding of the data and the last layer a statistical model to solve the task based on this embedding. This step makes sure that the embedding, or representation, of the data is used in the best possible way for the considered task. This idea is then tested on multiple architectures with various data sets, showing that it consistently provides a boost in performance.
Summary: Based on ideas within the context of kernel theory, the authors consider post-training of NNs as an extra training step, which only optimizes the last layer of the network. This additional step makes sure that the embedding, or representation, of the data is used in the best possible way for the considered task (which is also reflected in the experiments). According to the authors, the contributions are the following: 1. Post-training step: keeping the rest of the NN frozen (after training), the method trains the last layer in order to "make sure" that the learned representation is used in the most efficient way. 2. Highlighting connections with kernel techniques and RKHS optimization (like kernel ridge regression). 3. Experimental results. Clarity: The paper is well-written, and the main ideas are well clarified. Importance: While the majority of papers nowadays focuses on the representation part (i.e., how we get to \Phi_{L-1}(x)), this paper assumes this is given and proposes how to optimize the weights in the final step of the algorithm. This by itself is not enough to boost the performance universally (e.g., if \Phi_{L-1} is not well-trained, the problem is deeper than training the last layer); however, it proposes an additional step that can be used in most NN architectures. On that front (i.e., proposing to do something different than simply training a NN), I find the paper interesting, and it might attract some attention at the conference. On the other hand, in my humble opinion, the experimental results do not show a significant gain in the performances of all networks (esp. Figure 3 and Table 1 are within the range of statistical error). In order to state something like this universally, either one needs to perform experiments with more than just the MNIST/CIFAR datasets, or, even more preferably, prove that the algorithm performs better. Originality: It would be great to have some more theory (if any) for the post-training step, or to investigate more cases, rather than optimizing only the last layer. Comments: 1. I assume the authors focused on the last layer of the NN for simplicity, but is there a reason why one might want to focus only on the last layer? One reason is convexity in W of the problem (2). Any other? 2. Have the authors considered (even only in practice) including training of the last 2 layers of the NN? The authors state this question in the future directions, but it would make the paper more complete to consider it here.
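Since the post-training step itself is simple, a short sketch may help readers see exactly what is (and is not) being optimized. The PyTorch snippet below freezes everything except the final linear layer and runs a few extra epochs on it; the architecture, data and hyperparameters are placeholders, not the paper's setup.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 10),            # the last layer, i.e. the post-trained part
    )

    def post_train(model, loader, epochs=5, lr=1e-3, weight_decay=1e-4):
        last = list(model.children())[-1]
        for p in model.parameters():
            p.requires_grad = False    # freeze the learned representation
        for p in last.parameters():
            p.requires_grad = True     # only the last layer is optimized
        opt = torch.optim.SGD(last.parameters(), lr=lr, weight_decay=weight_decay)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model

    # A tiny synthetic "data loader" just to make the sketch runnable.
    loader = [(torch.randn(32, 784), torch.randint(0, 10, (32,)))]
    post_train(model, loader)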
iclr_2018_rkw-jlb0W
Generative adversarial networks (GANs) have enjoyed great success, however often suffer instability during training which motivates many attempts to resolve this issue. Theoretical explanation for the cause of instability is provided in Wasserstein GAN (WGAN), and wasserstein distance is proposed to stablize the training. Though WGAN is indeed more stable than previous GANs, it takes more iterations and time to train. This is because the ways to ensure Lipschitz condition in WGAN (such as weight-clipping) significantly limit the capacity of the network. In this paper, we argue that it is beneficial to ensure Lipschitz condition as well as maintain sufficient capacity and expressiveness of the network. To facilitate this, we develop both theoretical and practical building blocks, using which one can construct different neural networks using a large range of metrics, as well as ensure Lipschitz condition and sufficient capacity of the networks. Using the proposed building blocks, and a special choice of a metric called Dudley metric, we propose Dudley GAN that outperforms the state of the arts in both convergence and sample quality. We discover a natural link between Dudley GAN (and its extension) and empirical risk minimization, which gives rise to generalization analysis.
The authors propose a different type of GAN--the Dudley GAN--that is related to the Dudley metric. In fact, it is very much like the WGAN, but rather than just imposing the function class to have a bounded gradient, they also impose it to be bounded itself. This is argued to be more stable than the WGAN, as gradient clipping is said to be unnecessary for the Dudley GAN. The authors empirically show that the Dudley GAN achieves a greater LL than the WGAN for the MNIST and CIFAR-10 datasets. The main idea [and its variants] looks solid, but with the plethora of GANs in the literature now, after reading I'm still left wondering why this GAN is significantly better than others [BEGAN, WGAN, etc.]. Is it clear that imposing the quadratic penalty in equation (3) is really the same constraint as the Dudley norm? The big contribution of the paper seems to be that adding some L_inf regularization to the function class helps preclude gradient clipping, but after reading I'm unsure why this is "the right thing" to do in this case. We know that convergence in the Wasserstein metric is stronger than in the Dudley metric, so why is using the weaker metric outweighed by the benefits in training? Nits: Since the function class is parameterized by a NN, the IPM is not actually the Dudley metric between the two distributions. One would have to show that the NN is dense in the Dudley unit ball w.r.t. the L_inf norm, but this sort of misnaming started with the "Wasserstein" GAN.
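To make the "bounded values plus bounded gradients" idea concrete, here is one way the critic objective could be written in PyTorch. This is my reading of the constraint (a WGAN-GP-style gradient penalty plus an extra hinge penalty on the critic's magnitude), not the paper's exact formulation; the critic architecture, penalty weights and flattened input shape are assumptions.

    import torch
    import torch.nn as nn

    def critic_loss(critic, real, fake, lam_grad=10.0, lam_val=10.0):
        # Critic minimizes E[f(fake)] - E[f(real)], i.e. maximizes the IPM gap.
        ipm = critic(fake).mean() - critic(real).mean()

        # Gradient penalty on interpolates, as in WGAN-GP (inputs assumed 2-D).
        eps = torch.rand(real.size(0), 1, device=real.device)
        x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
        grads = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
        grad_pen = ((grads.norm(2, dim=1) - 1).clamp(min=0) ** 2).mean()

        # Extra penalty keeping |f| itself bounded (one plausible choice).
        val_pen = ((critic(real).abs() - 1).clamp(min=0) ** 2).mean() \
                + ((critic(fake).abs() - 1).clamp(min=0) ** 2).mean()

        return ipm + lam_grad * grad_pen + lam_val * val_pen

    critic = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
    real, fake = torch.randn(8, 64), torch.randn(8, 64)
    loss = critic_loss(critic, real, fake)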
iclr_2018_r15kjpHa-
In cooperative multi-agent reinforcement learning (MARL), how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem. The global reward signal assigns the same global reward to all agents without distinguishing their contributions, while the local reward signal provides different local rewards to each agent based solely on individual behavior. Both of the two reward assignment approaches have some shortcomings: the former might encourage lazy agents, while the latter might produce selfish agents. In this paper, we study reward design problem in cooperative MARL based on packet routing environments. Firstly, we show that the above two reward signals are prone to produce suboptimal policies. Then, inspired by some observations and considerations, we design some mixed reward signals, which are off-the-shelf to learn better policies. Finally, we turn the mixed reward signals into the adaptive counterparts, which achieve best results in our experiments. Other reward signals are also discussed in this paper. As reward design is a very fundamental problem in RL and especially in MARL, we hope that MARL researchers can rethink the rewards used in their systems.
The authors suggest using a mixture of shared and individual rewards within a MARL environment to induce cooperation among independent agents. They show that on their specific application this can lead to a better overall global performance than purely sharing the global signal, or using just the independent rewards. The paper is a little too focused on the packet routing example domain and fails to deliver much in terms of a general theory of reward design for cooperative behaviours beyond showing that mixed rewards can lead to improved results in their domain. They discuss "what" and "how" rewards, and this discussion could be made more formal, along with (at the very least) some guiding principles to follow when mixing rewards. It feels like there is a missing section between sections 2 and 3, where this methodological content could be described. The rest of the paper has similar issues, with key intuition and concepts either missing entirely or under-represented. The technical content often assumes that the reader is familiar with certain terms, and it is difficult to see what meaningful conclusions can be drawn from the evaluation. On a minor note, the use of the term cooperative in this paper could be better defined. In game theory, cooperative games are those in which agents share rewards. Non-cooperative (game theory) games are those where agents have general reward signals (not necessarily cooperative or adversarial). Conventionally (yes, there is existing reward design/shaping literature for MARL), people have used the same terms in MARL. Perhaps the authors could define their approach as weakly cooperative, or emergent cooperation. The related work could be better described. There are existing papers on MARL and the issues with cooperation among independent learners, and these could be referenced. This includes reward shaping and reward potential. I would also have expected to see a brief mention of empowerment in this section too (the agent favouring states where it has the power to control outcomes in an information theoretic sense), as an underlying principle for intrinsic reward. However, more importantly, the authors really needed to do more to synthesize this into an overall picture of what principles are at play and what ideas/methods exist that have tried to exploit some of these principles.
Detailed comments:
• [p2] the authors say "We set the meta reward signals as 1 - max(U_l).", before they define what U_l is.
• [p2] we have "As many applications in the real world can be modeled using similar methods, we expect that other fields can also benefit from this work." This statement is too vague, and the authors could do more to identify which application areas might benefit.
• [p3, first para] "However, the reward design studies for MARL is so limited." Drop the word 'so'. Also, I would argue that there have been quite a few (non-deep) discussions about reward design in MARL, in cooperative, non-cooperative and competitive domains.
• [p3, sec 2.2] "This makes the diligent agents confuse about..." should be "confused", and I would advise against anthropomorphism at least when the meaning is obscured.
• [p3, sec 3] "After having considered several other options, we finally choose the Packet Routing Domain as our experimental environments." Not sure what useful information is being conveyed here.
• [sec 3] The domain could be better described with intuition and formal descriptions (e.g. link utilization ratio, etc.) before these terms are used.
• [p6] "Importantly, the proposed blR seems to have similar capacity with dlR," The discussion here is all in terms of the reward acronyms with very little call on intuition or other such assistance to the reader. • [p7] "We firstly try gR without any thinking" The language could be better here.
iclr_2018_Hk2MHt-3-
We investigate in this paper the architecture of deep convolutional networks. Building on existing state of the art models, we propose a reconfiguration of the model parameters into several parallel branches at the global network level, with each branch being a standalone CNN. We show that this arrangement is an efficient way to significantly reduce the number of parameters while at the same time improving the performance. The use of branches brings an additional form of regularization. In addition to splitting the parameters into parallel branches, we propose a tighter coupling of these branches by averaging their log-probabilities. The tighter coupling favours the learning of better representations, even at the level of the individual branches, as compared to when each branch is trained independently. We refer to this branched architecture as "coupled ensembles". The approach is generic and can be applied to almost any neural network architecture. With coupled ensembles of DenseNet-BC and parameter budget of 25M, we obtain error rates of 2.92%, 15.68% and 1.50% on CIFAR-10, CIFAR-100 and SVHN respectively. For the same parameter budget, DenseNet-BC has an error rate of 3.46%, 17.18%, and 1.8% respectively. With ensembles of coupled ensembles, of DenseNet-BC networks, with 50M total parameters, we obtain error rates of 2.72%, 15.13% and 1.42% respectively on these tasks.
This paper presents a deep network architecture which processes data using multiple parallel branches and combines the posteriors from these branches to compute the final scores; the network is trained end-to-end, thus training the parallel branches jointly. Existing literature with branching architectures either employs a 2-stage training approach, training branches independently and then training the fusion network, or restricts the branching to local regions (a set of contiguous layers). In effect, this paper extends the existing literature by suggesting end-to-end branching. While the technical novelty, as described in the paper, is relatively limited, the thorough experimentation together with detailed comparisons between intuitive ways to combine the output of the parallel branches is certainly valuable to the research community.
+ The paper is well written and easy to follow.
+ The proposed branching architecture clearly outperforms the baseline network (same number of parameters with a single branch) and thus offers yet another interesting choice when creating a network architecture for a problem.
+ Detailed experiments to study and analyze the effect of various parameters, including the number of branches as well as various architectures to combine the output of the parallel branches.
+ [Ease of implementation] The suggested architecture can be easily implemented using existing deep learning frameworks.
- Although joint end-to-end training of branches certainly brings value compared to independent training, the increased resource requirements may limit the applicability to large benchmarks such as ImageNet. While the authors suggest a way to circumvent such limitations by training branches on separate GPUs, this would still impose limits on the number of branches as well as on the ease of implementation.
- Adding an overview figure of the architecture in the main paper (instead of the supplementary) would be helpful.
- A branched architecture serves as a regularization by distributing the gradients across different branches; however this also suggests that the early layers of the network across branches would be independent. It would be helpful if the authors considered an alternative architecture where early layers are shared across branches, suggesting delayed branching, with fusion at the final layer.
- One of the benefits of architectures such as DenseNet is their usefulness as a feature extractor (output of lower layers) which generalizes even to domains other than the dataset; the branched architecture could potentially diminish this benefit.
Minor edits: Page 1. 'significantly match and improve' => 'either match or improve'
Additional notes:
- It would be interesting to compare this approach with a conditional training pipeline that sequentially adds branches, keeping the previous branches fixed. This may offer a trade-off between the benefits of joint training of branches vs being able to train deep models with several branches.
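The fusion rule at the heart of the coupled-ensemble idea (average the branches' log-probabilities and train everything jointly) fits in a few lines of PyTorch. The branch architecture below is a trivial placeholder rather than a DenseNet-BC, so the sketch only illustrates the coupling, not the reported results.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CoupledEnsemble(nn.Module):
        def __init__(self, make_branch, n_branches=4):
            super().__init__()
            self.branches = nn.ModuleList([make_branch() for _ in range(n_branches)])

        def forward(self, x):
            log_probs = [F.log_softmax(b(x), dim=1) for b in self.branches]
            return torch.stack(log_probs).mean(dim=0)   # coupled log-probabilities

    make_branch = lambda: nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model = CoupledEnsemble(make_branch)
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    loss = F.nll_loss(model(x), y)   # one end-to-end loss over all branches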
iclr_2018_H15odZ-C-
Published as a conference paper at ICLR 2018 SEMANTIC INTERPOLATION IN IMPLICIT MODELS In implicit models, one often interpolates between sampled points in latent space. As we show in this paper, care needs to be taken to match-up the distributional assumptions on code vectors with the geometry of the interpolating paths. Otherwise, typical assumptions about the quality and semantics of in-between points may not be justified. Based on our analysis we propose to modify the prior code distribution to put significantly more probability mass closer to the origin. As a result, linear interpolation paths are not only shortest paths, but they are also guaranteed to pass through high-density regions, irrespective of the dimensionality of the latent space. Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths.
The paper concerns distributions used for the code space in implicit models, e.g. VAEs and GANs. The authors analyze the relation between the latent space dimension and the normal distribution which is commonly used for the latent distribution. The well-known fact that probability mass concentrates in a shell of hyperspheres as the dimensionality grows is used to argue for the normal distribution being sub-optimal when interpolating between points in the latent space with straight lines. To correct this, the authors propose to use a Gamma-distribution for the norm of the latent space (and uniform angle distribution). This results in more mass closer to the origin, and the authors show both that the midpoint distribution is natural in terms of the KL divergence to the data points, and experimentally that the method gives visually appealing interpolations. While the contribution of using a standard family of distributions in a standard implicit model setup is limited, the paper does make interesting observations, analyses and an attempt to correct the interpolation issue. The paper is clearly written and presents the theory and experimental results nicely. I find that the paper can be accepted but the incremental nature of the contribution prevents a higher score.
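For readers who want to see the proposed prior in action, here is a small Python sketch: latent codes are drawn with a uniformly random direction and a Gamma-distributed norm, and straight-line interpolation is then done directly in that space. The shape and scale parameters are illustrative values, not the ones used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_latent(n, dim, shape=2.0, scale=1.0):
        direction = rng.normal(size=(n, dim))
        direction /= np.linalg.norm(direction, axis=1, keepdims=True)  # uniform angle
        radius = rng.gamma(shape, scale, size=(n, 1))                  # Gamma norm
        return radius * direction

    def interpolate(z0, z1, steps=8):
        t = np.linspace(0.0, 1.0, steps)[:, None]
        return (1 - t) * z0 + t * z1   # linear path stays in a high-density region

    z0, z1 = sample_latent(2, 100)
    path = interpolate(z0, z1)         # these codes would be fed to the generator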
iclr_2018_ryZ283gAZ
Deep neural networks have become the state-of-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LM-architecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LM-ResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress (> 50%) the original networks while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.
Summary - This paper draws an analogy between numerical differential equation solvers and popular residual-network-like deep learning architectures. It connects ResNet, FractalNet, DenseNet, and RevNet to different numerical solvers such as forward and backward Euler and Runge-Kutta. In addition, inspired by linear multi-step (LM) methods, the authors propose a novel LM-ResNet architecture in which the next residual block takes a linear combination of the previous two residual blocks' activations. They also propose a stochastic version of LM-ResNet that resembles Shake-shake regularization and stochastic depth. In both deterministic and stochastic cases, they show a positive improvement in classification accuracy on standard object classification benchmarks such as CIFAR-10/100 and ImageNet.
Pros
- The intuition connecting differential equations and ResNet-like architectures is good, and is also explored in some of the related work.
- Building upon this intuition, the authors propose a novel architecture based on a numerical ODE solver method.
- Consistent improvement in accuracy is observed in both deterministic and stochastic cases.
Cons
- The title is a little bit misleading. "Beyond Finite Layer Neural Networks" sounds like the paper proposes some infinite-layer neural networks, but the paper only studies a finite number of layers.
- One thing that needs to be clarified is that, if the network is not targeted at solving certain ODEs, then why does the intuition from ODEs matter? The paper does not motivate readers from this perspective.
- Given the widespread use of ResNet in the vision community, the incremental improvement of 1% on ImageNet is less likely to push vision research to switch to a completely different architecture. Therefore, the potential impact of this paper on the vision community is probably limited.
Conclusion - Based on the comments above, I think the paper is a good contribution which links ODEs with deep networks, and the derivation is convincing. The proposed new architecture can be considered in future architecture designs. Although the increase in performance is small, I think it is good enough to accept.
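The LM-architecture update mentioned in this review (the next state mixes the two previous states before adding the residual branch) can be sketched as follows in PyTorch. The residual branch, the initialization of the mixing coefficient k, and the two-step bootstrapping are my own simplifications; the paper's exact weighting scheme may differ.

    import torch
    import torch.nn as nn

    class LMBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.residual = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
            )
            self.k = nn.Parameter(torch.zeros(1))   # learnable multi-step weight

        def forward(self, x_prev, x_curr):
            # linear multi-step update: mix the two previous states, add residual
            x_next = (1 - self.k) * x_curr + self.k * x_prev + self.residual(x_curr)
            return x_curr, x_next                   # shift the two-step history

    block = LMBlock(16)
    x0 = x1 = torch.randn(2, 16, 8, 8)              # bootstrap with equal states
    x1, x2 = block(x0, x1)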
iclr_2018_S191YzbRZ
One of the fundamental tasks in understanding genomics is the problem of predicting Transcription Factor Binding Sites (TFBSs). With more than hundreds of Transcription Factors (TFs) as labels, genomic-sequence based TFBS prediction is a challenging multi-label classification task. There are two major biological mechanisms for TF binding: (1) sequence-specific binding patterns on genomes known as "motifs" and (2) interactions among TFs known as co-binding effects. In this paper, we propose a novel deep architecture, the Prototype Matching Network (PMN) to mimic the TF binding mechanisms. Our PMN model automatically extracts prototypes ("motif"-like features) for each TF through a novel prototypematching loss. Borrowing ideas from few-shot matching models, we use the notion of support set of prototypes and an LSTM to learn how TFs interact and bind to genomic sequences. On a reference TFBS dataset with 2.1 million genomic sequences, the PMN significantly outperforms baselines and validates our design choices empirically. To our knowledge, this is the first deep learning architecture that introduces prototype learning and considers TF-TF interactions for large scale TFBS prediction. Not only is the proposed architecture accurate, but it also models the underlying biology.
Summary This paper proposes a prototype matching network (PMN) to model transcription factor (TF) binding motifs and TF-TF interactions for the large-scale transcription factor binding site prediction task. They utilize the idea of having a support set of prototypes (motif-like features) and an LSTM from the few-shot learning framework to develop this prototype matching network. The input is genomic sequences from 14% of the human genome; each sequence in the dataset is bound by at least one TF. First, a Convolutional Neural Network with three convolutional layers is trained to predict single/multiple TF binding. The output of the last hidden layer before the sigmoid transformation is used as the LSTM input. A weighted sum of the prototype vectors, with weights given by similarity scores (a sigmoid of cosine similarity, similar to the attention mechanism of LSTMs), is used to update the read vector. The final output is a sigmoid of the final hidden state concatenated with the read vector. The loss function used is the difference of a cross-entropy loss function and a lambda-weighted prototype loss function. The latter is the mean square error between the output label and the similarity score. The authors compare the PMN with different lambda values against single-/multi-label CNNs and see marginal improvements in auROC, auPR and Recall at 50% FDR with the PMN. To test that the PMN finds biologically relevant TF interactions, the authors perform hierarchical clustering on the prototypes of 86 TFs, compare the clusters found to the known co-regulators from the TRRUST database, and find 6 significant clusters.
Pros:
1. The authors apply the idea of prototypes and few-shot learning to the task of TF binding and cooperation.
2. Attention LSTMs are used to model label interactions.
Just like a CNN can be related to discriminative training of a PSSM or PWM, the above points demonstrate nicely how ideas/concepts from recent developments in DL can be adapted to (and possibly improve on) similar generative modeling approaches used in the past for learning cooperative TF binding.
Cons:
1. The authors do not compare their model's performance to the previously published TF binding prediction algorithms (DeepBind, DeepSEA).
2. The authors miss important context and make some inaccurate statements: TFs do not just "control if a gene is expressed or not" (p.1). It is not true that previous DL works did not consider co-binding. Works such as DeepSEA combined many filters which can capture cooperative binding to define which sequence is "regulated". It is true that neither it nor DeepBind constructed a structure over those filters in the way an LSTM learns one. The authors do point out a model that does add an LSTM (Quang and Xie) but then do not compare to it and make a vague claim about it modeling interactions between features but not labels (p. 6 top). Comparing to it and directly to DeepSEA/DeepBind seems crucial to claim improvements on previous works. Furthermore, the authors acknowledge the existence of a vast literature on this specific problem but completely discard it as having a "loose connection to our TFBS formulation". In reality though, many works in that area are highly relevant and should be discussed in the context of what the authors are trying to achieve. For example, numerous works by Prof. Saurabh Sinha have focused specifically on joint TF modeling (e.g. Kazemian NAR 2011, He Plos One 2009, Ivan Gen Bio 2008, MORPH Plos Comp Bio 2007). In general, trying to lay claims about significant contributions to a problem, as stated here by the authors, while completely disregarding previous work simply because it is not in a DL framework (which the authors are clearly more familiar with) can easily alienate reviewers and readers alike.
3. The learning setup seems problematic:
3a. The model may overfit to the genomic sequences that contain TF binding sites, as it has never seen genomic sequences without TF binding sites (the genomic sequences that don't have ChIP peaks are discarded from the dataset). Performance for genome-wide scans should definitely include those to assess accuracy.
3b. The train/validation/test splits are defined by chromosome. There does not seem to be any screening for sequence similarity (e.g. repetitive sequences, paralogs). This may inflate performance, especially for more complicated models which may be able to "memorize" sequences better.
4. The paper claims to have 4 major contributions. The details of the second claim, that the prototype matching loss learns motif-like features, are not explained anywhere in the paper. If we look at the actual loss function, equation (12), it penalizes the difference between the label and the similarity score but the prototypes are not updated. The fourth claim about the biological relevance of the network is not sufficiently explored. The authors show that it learns co-bindings already known in the literature, which is a good sanity check but does not offer any new biological insight. The actual motifs, or the structure of their relations, are not shown or explored.
5. The PMN offers only marginal improvement over the CNN networks.
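To make the matching and loss components summarized in point 4 above easier to follow, here is a rough PyTorch sketch of a prototype-similarity score and a lambda-weighted prototype loss combined with a multi-label cross-entropy term. The shapes, the stand-in prediction head and the sign convention for combining the two terms are assumptions on my part, not the paper's equation (12).

    import torch
    import torch.nn.functional as F

    def pmn_losses(encoding, prototypes, labels, lam=0.1):
        # encoding: (B, D) CNN output; prototypes: (T, D); labels: (B, T) in {0, 1}
        sim = torch.sigmoid(F.cosine_similarity(
            encoding.unsqueeze(1), prototypes.unsqueeze(0), dim=-1))   # (B, T)
        proto_loss = F.mse_loss(sim, labels.float())     # similarity vs. label
        logits = encoding @ prototypes.t()               # stand-in prediction head
        bce = F.binary_cross_entropy_with_logits(logits, labels.float())
        return bce, proto_loss, bce + lam * proto_loss   # combined objective

    B, T, D = 4, 86, 128
    enc, protos = torch.randn(B, D), torch.randn(T, D)
    labels = torch.randint(0, 2, (B, T))
    bce, proto, total = pmn_losses(enc, protos, labels)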
iclr_2018_H1aIuk-RW
Published as a conference paper at ICLR 2018 ACTIVE LEARNING FOR CONVOLUTIONAL NEURAL NETWORKS: A CORE-SET APPROACH Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (i.e. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, i.e. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
After reading rebuttals from the authors: The authors have addressed all of my concerns. The additional experiments are a good addition. ************************ The authors provide an algorithm-agnostic active learning algorithm for multi-class classification. The core technique is to construct a coreset of points whose labels inform the labels of other points. The coreset construction requires one to construct a set of points which can cover the entire dataset. While this is an NP-hard problem in general, the greedy algorithm is 2-approximate. The authors use a variant of the greedy algorithm along with bisection search to solve a series of feasibility problems to obtain a good cover of the dataset each time. This cover tells us which points are to be queried. The reason why choosing the cover is a good idea is that, under a suitable Lipschitz continuity assumption, the generalization error can be controlled via an appropriate value of the covering radius in the data space. The authors use the coreset construction with a CNN to demonstrate an active learning algorithm for multi-class classification. The experimental results are convincing enough to show that it outperforms other active learning algorithms. However, I have a few major and minor comments. Major comments: 1. The proof of Lemma 1 is incomplete. We need the Lipschitz constant of the loss function. The loss function is a function of the CNN function and the true label. The proof of Lemma 1 only establishes the Lipschitz constant of the CNN function. Some extra work is needed to derive the Lipschitz constant of the loss function from the CNN function. 2. The statement of Prop 1 seems a bit confusing to me. The hypothesis says that the loss on the coreset = 0. But the equation in Proposition 1 also includes the loss on the coreset. Why is this term included? Is this term not equal to 0? 3. Some important works are missing, especially works related to pool-based active learning, and landmark results on the label complexity of agnostic active learning. UPAL: Unbiased Pool based active learning by Ganti & Gray. http://proceedings.mlr.press/v22/ganti12/ganti12.pdf Efficient active learning of half-spaces by Gonen et al. http://www.jmlr.org/papers/volume14/gonen13a/gonen13a.pdf A bound on the label complexity of agnostic active learning. http://www.machinelearning.org/proceedings/icml2007/papers/375.pdf 4. The authors use the L_2 loss as their objective function. This is a bit of a weird choice given that they are dealing with multi-class classification and the output layer is a sigmoid layer, making it a natural fit to work with something like a cross-entropy loss function. I guess the theoretical results do not extend to the cross-entropy loss, but the authors do not mention these points anywhere in the paper. For example, the ladder network, which is one of the networks used by the authors, is a network that uses cross-entropy for training. Minor comments: 1. The feasibility program in (6) is an MILP. However, the way it is written, it does not look like an MILP. It would have been great had the authors mentioned that u_j \in {0,1}. 2. The authors write on page 4, "Moreover, zero training error can be enforced by converting average loss into maximal loss". It is not clear to me what the authors mean here. For example, can I replace the average error in Proposition 1 by the maximal loss? Why can I do that? Why would that result in zero training error? On the whole this is interesting work and the results are very nice.
But the proof for Lemma 1 seems incomplete to me, and some choices (such as the choice of loss function) are unjustified. Also, important references in the active learning literature are missing.
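Since much of the discussion above turns on the greedy cover construction, a compact sketch may be useful. The Python snippet below implements the standard greedy 2-approximate k-center selection on embedding vectors, which is the basic routine behind the core-set query strategy; the bisection/MILP refinement described in the paper is omitted, and the embeddings and budget are synthetic.

    import numpy as np

    def greedy_k_center(X, labeled_idx, budget):
        selected = list(labeled_idx)
        dist = np.linalg.norm(X[:, None, :] - X[selected][None, :, :], axis=-1).min(1)
        for _ in range(budget):
            new = int(np.argmax(dist))        # farthest point from current centers
            selected.append(new)
            dist = np.minimum(dist, np.linalg.norm(X - X[new], axis=1))
        return selected[len(labeled_idx):]    # indices of the points to query next

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 64))           # e.g. activations of the current CNN
    queries = greedy_k_center(X, labeled_idx=[0, 1, 2], budget=10)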
iclr_2018_SyqShMZRb
Published as a conference paper at ICLR 2018 SYNTAX-DIRECTED VARIATIONAL AUTOENCODER FOR STRUCTURED DATA Deep generative models have been enjoying success in modeling continuous data. However it remains challenging to capture the representations for discrete structures with formal grammars and semantics, e.g., computer programs and molecular structures. How to generate both syntactically and semantically correct data still remains largely an open problem. Inspired by the theory of compiler where the syntax and semantics check is done via syntax-directed translation (SDT), we propose a novel syntax-directed variational autoencoder (SD-VAE) by introducing stochastic lazy attributes. This approach converts the offline SDT check into on-the-fly generated guidance for constraining the decoder. Comparing to the state-of-the-art methods, our approach enforces constraints on the output space so that the output will be not only syntactically valid, but also semantically reasonable. We evaluate the proposed model with applications in programming language and molecules, including reconstruction and program/molecule optimization. The results demonstrate the effectiveness in incorporating syntactic and semantic constraints in discrete generative models, which is significantly better than current state-of-the-art approaches.
NOTE: Would the authors kindly respond to the comment below regarding Kekulisation of the Zinc dataset? Fair comparison of the data is a serious concern. I have rated this paper as good for publication due to the novelty of the ideas presented, but the accusation of misrepresentation below is a serious one and I would like to know the authors' response. *Overview* This paper presents a method of generating both syntactically and semantically valid data from a variational autoencoder model using ideas inspired by compiler semantic checking. Instead of verifying the semantic correctness of a particular discrete structure offline, the authors propose "stochastic lazy attributes", which amounts to loading semantic constraints into a CFG and using a tailored latent-space decoder algorithm that guarantees both syntactic and semantic validity. Using Bayesian Optimization, search over this space can yield decodings with targeted properties. Many of the ideas presented are novel. The results presented are state-of-the-art. As noted in the paper, the generation of syntactically and semantically valid data is still an open problem. This paper presents an interesting and valuable solution, and as such constitutes a large advance in this nascent area of machine learning. *Remarks on methodology* By initializing a decoding by "guessing" a value, the decoder will focus on high-probability starting regions of the space of possible structures. It is not immediately clear to me how this will affect the output distribution. Since this process on average begins at a high-probability region and makes further decoding decisions from that starting point, the output distribution may be biased since it is the output of cuts through high-probability regions of the possible output space. Does this sacrifice exploration for exploitation in some quantifiable way? Some exploration of this issue or commentary would be valuable. *Nitpicks* I found the notion of stochastic predetermination somewhat opaque, and section 3 in general introduces much terminology, like lazy linking, that was new to me coming from a machine learning background. In my opinion, this section could benefit from a little more expansion and conceptual definition. The first 3 sections of the paper are very clearly written, but the remainder has many typos and grammatical errors (often word omission). The draft could use a few more passes before publication.
iclr_2018_SkNQeiRpb
This paper proposes a new model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the-art models on a time-split Netflix data set. Our model is based on a deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training. We empirically demonstrate that: a) deep autoencoder models generalize much better than the shallow ones, b) non-linear activation functions with negative parts are crucial for training deep models, and c) heavy use of regularization techniques such as dropout is necessary to prevent overfitting. We also propose a new training algorithm based on iterative output re-feeding to overcome the natural sparseness of collaborative filtering. The new algorithm significantly speeds up training and improves model performance.
This paper presents a deep autoencoder model for rating prediction. The autoencoder takes the user's ratings over all the items as input and tries to predict the observed ratings in the output with mean squared error. A few techniques are applied to make the training feasible without layer-wise pre-training: 1) SELU activation, 2) dropout with high probability, 3) dense output re-feeding. On the Netflix prize dataset, the proposed deep autoencoder outperforms other state-of-the-art approaches. Overall, the paper is easy to follow. However, I have three major concerns regarding the paper that make me decide to reject it. 1. Lack of novelty. The paper is essentially a deeper version of U-AutoRec (Sedhain et al. 2015) with a few recently emerged innovations in deep learning. The dense output re-feeding is not something particularly novel; it is more or less a data-imputation procedure with expectation-maximization. In fact, if the authors intend to seek an explanation for this output re-feeding technique, EM might be one of the interpretations. And a similar technique (more theoretically grounded) has been applied to image imputation for variational autoencoders (Rezende et al. 2014, Stochastic Backpropagation and Approximate Inference in Deep Generative Models). 2. The experimental setup is also worth questioning. Using a time-split dataset is of course more challenging. However, the underlying assumption of autoencoders (or more generally, latent factor models like matrix factorization) is that all the ratings are exchangeable (conditionally independent given the latent representations), i.e., autoencoders/MF are not capable of inferring the temporal information from the data. Thus it is not even a head-to-head comparison with a temporal model (e.g., the RNN in Wu et al. 2017). Of course you can still apply a static autoencoder to time-split data, but what ends up happening is that the model will use its capacity to try to explain the temporal signal in the data; a deeper model certainly has more extra capacity to do so. I would suggest the authors compare on a non-time-split dataset with other static models, like I(U)-AutoRec/MF/CF-NADE (Zheng et al. 2016)/etc. 3. Training deep models on recommender systems data is impressive. However, I would like to suggest we, as a community, start to step away from the task of rating prediction as much as we can, especially in more machine-learning-oriented venues (NIPS, ICML, ICLR, etc.) where the reviewers might be less aware of the shift in recommender systems research. (The task of rating prediction was made popular mostly due to the Netflix prize, yet even Netflix itself has already moved on from ratings.) Training (and evaluating) with RMSE on the observed ratings assumes all the missing ratings are missing at random, which is clearly far from realistic for recommender systems (see Marlin et al. 2007, Collaborative Filtering and the Missing at Random Assumption). In fact, understanding why some of the ratings are missing presents a unique challenge for recommender systems. See, e.g., Steck 2010, Training and testing of recommender systems on data missing not at random; Liang et al. 2016, Modeling user exposure in recommendation; Schnabel et al. 2016, Recommendations as Treatments: Debiasing Learning and Evaluation. A model with good RMSE in a lot of cases does not translate to good recommendations (Cremonesi et al. 2010, Performance of recommender algorithms on top-n recommendation tasks).
As a first step, at least start to use all the 0’s in the form of implicit feedback and focus on ranking-based metrics other than RMSE.
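For readers unfamiliar with the "dense output re-feeding" step discussed in the first point above, the following PyTorch sketch shows one way it can be implemented: a masked reconstruction update on the observed ratings, followed by a second update that treats the model's own dense output as a fully observed example. The architecture, hyperparameters and synthetic data are placeholders, and the exact re-feeding schedule in the paper may differ.

    import torch
    import torch.nn as nn

    def masked_mse(pred, target, mask):
        return ((pred - target) ** 2 * mask).sum() / mask.sum().clamp(min=1)

    def train_step(model, opt, ratings, mask):
        opt.zero_grad()
        recon = model(ratings)
        loss = masked_mse(recon, ratings, mask)   # loss only on observed entries
        loss.backward()
        opt.step()

        # Re-feeding: the dense reconstruction becomes input and target once.
        dense = recon.detach()
        opt.zero_grad()
        ((model(dense) - dense) ** 2).mean().backward()
        opt.step()
        return loss.item()

    model = nn.Sequential(nn.Linear(500, 128), nn.SELU(), nn.Dropout(0.8),
                          nn.Linear(128, 500))
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    ratings = 5 * torch.rand(32, 500) * (torch.rand(32, 500) < 0.05).float()
    mask = (ratings > 0).float()
    train_step(model, opt, ratings, mask)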
iclr_2018_SJmAXkgCb
In this paper, we introduce a method to compress intermediate feature maps of deep neural networks (DNNs) to decrease memory storage and bandwidth requirements during inference. Unlike previous works, the proposed method is based on converting fixed-point activations into vectors over the smallest GF(2) finite field followed by nonlinear dimensionality reduction (NDR) layers embedded into a DNN. Such an end-to-end learned representation finds more compact feature maps by exploiting quantization redundancies within the fixed-point activations along the channel or spatial dimensions. We apply the proposed network architecture to the tasks of ImageNet classification and PASCAL VOC object detection. Compared to prior approaches, the conducted experiments show a factor of 2 decrease in memory requirements with minor degradation in accuracy while adding only bitwise computations.
The method of this paper minimizes the memory usage of the activation maps of a CNN. It starts from a representation where activations are compressed with a uniform scalar quantizer and fused to reduce intermediate memory usage. This loses some accuracy, so the contribution of the paper is to add a pair of convolution layers in the binary domain (GF(2)) that are trained to restore the lost precision. Overall, this paper seems to be a nice addition to the body of work on network compression.
+ : interesting approach and effective results.
+ : well related to the state of the art and good comparison with other works.
- : somewhat incremental. Most of the claimed 100x compression is due to previous work.
- : impact on runtime is not reported. Since there is a Caffe implementation, it would be interesting to have an additional column with the comparative execution speeds, even if only on CPU. I would expect the FP32 timings to be hard to beat, despite the claims that it uses only binary operations.
- : the paper is sometimes difficult to understand (see below)
Detailed comments: Equations (3)-(4) are difficult to understand. If I understand correctly, b just decomposes \hat{x} \in {0..2^B-1} into its B bits \tilde{x} \in {0,1}^B, which can then be considered as an additional dimension in the activation map where \hat{x} comes from. It is not stated clearly whether P^l and R^l have binary weights. My understanding is that P^l does but R^l does not. 4.1 --> a discussion of the large mini-batch size (1024) could be useful. My understanding is that large mini-batches are required to use averaged gradients and get smooth updates. end of 4.1 --> unclear what "equivalent bits" means
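As a small aid to the discussion of equations (3)-(4), here is a NumPy sketch of the bit decomposition itself: a B-bit fixed-point activation tensor of shape (C, H, W) is lifted to a binary tensor of shape (B*C, H, W) with one plane per bit, and can be reassembled exactly. The learned projection layers P^l and R^l are deliberately omitted, and the shapes are illustrative.

    import numpy as np

    def to_bits(acts, bits=8):
        planes = [(acts >> b) & 1 for b in range(bits)]        # LSB first
        return np.concatenate(planes, axis=0).astype(np.uint8)

    def from_bits(bit_planes, bits=8):
        chunks = np.split(bit_planes, bits, axis=0)
        return sum(chunk.astype(np.uint32) << b for b, chunk in enumerate(chunks))

    rng = np.random.default_rng(0)
    acts = rng.integers(0, 256, size=(16, 8, 8), dtype=np.uint16)  # 8-bit activations
    assert np.array_equal(from_bits(to_bits(acts)), acts)          # lossless round trip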
iclr_2018_rJe7FW-Cb
We propose a novel attention mechanism to enhance Convolutional Neural Networks for fine-grained recognition. The proposed mechanism reuses CNN feature activations to find the most informative parts of the image at different depths with the help of gating mechanisms and without part annotations. Thus, it can be used to augment any layer of a CNN to extract low- and high-level local information to be more discriminative. Differently from other approaches, the mechanism we propose just needs a single pass through the input and it can be trained end-to-end through SGD. As a consequence, the proposed mechanism is modular, architecture-independent, easy to implement, and faster than iterative approaches. Experiments show that, when augmented with our approach, Wide Residual Networks systematically achieve superior performance on each of five different fine-grained recognition datasets: the Adience age and gender recognition benchmark, Caltech-UCSD Birds-200-2011, Stanford Dogs, Stanford Cars, and UEC Food-100, obtaining competitive and state-of-the-art scores.
The manuscript describes a novel attentional mechanism applied to fine-grained recognition. On the positive side, the approach seems to consistently improve the recognition accuracy of the baseline (a wide residual net). The approach is also consistently tested on the main fine-grained recognition datasets (the Adience age and gender recognition benchmark, Caltech-UCSD Birds-200-2011, Stanford Dogs, Stanford Cars, and UEC Food-100). On the negative side, the paper could be better written and motivated. First, some claims are made about how the proposed approach "enhances most of the desirable properties from previous approaches" (see pp 1-2) but these claims are never backed up. More generally, since the paper focuses on attention, other attentional approaches should be used as benchmarks beyond the WRN baseline. If the authors want to claim that the proposed approach is "more robust to deformation and clutter" then they should design an experiment that shows that this is the case. Beyond that, the approach seems a little ad hoc. No real rationale is provided for the different mechanisms, including the gating, etc., and certainly no experimental validation is provided to demonstrate the need for these mechanisms. More generally, it is not clear from reading the paper specifically what computational limitation of the CNN is being solved by the proposed attentional mechanism. Some of the masks shown in Fig 3 seem rather suspicious and prompt this referee to think that the networks are seriously overfitting to the data. For instance, why would attending to a right ear help in gender recognition? The proposed extension adds several hyperparameters (for instance the number K of attention heads). Apologies if I missed it, but I am not clear on how this was optimized for the experiments reported. In general, the paper could be clearer. For instance, it is not clear from either the text or Fig 2 how H goes from XxYxK for the attention head to XxYxN for the output head. As a final point, I would say that while some of the criticisms could be addressed in a revision, the improvements seem relatively modest. Given that the focus of the paper is already limited to fine-grained recognition, it seems that the paper would be better suited for a computer vision conference. Minor point: "we incorporate the advantages of visual and biological attention mechanisms" -- not sure this statement makes much sense. Seems like visual and biological are distinct attributes, but visual attention can be biological (or not, I guess) and it is not clear how biological the proposed approach is. Certainly no attempt is made by the authors to connect to biology. "top-down feed-forward attention mechanism" -> it should be just feed-forward attention. Not clear what "top-down feed-forward" attention could be...
iclr_2018_SygwwGbRW
SEMI-PARAMETRIC TOPOLOGICAL MEMORY FOR NAVIGATION We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals. The proposed semiparametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network capable of retrieving nodes from the graph based on observations. The graph stores no metric information, only connectivity of locations corresponding to the nodes. We use SPTM as a planning module in a navigation system. Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals. The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three.
*** Revision: based on the author's work, we have switched the score to accept (7) *** Clever ideas but not end-to-end navigation. This paper presents a hybrid architecture that mixes parametric (neural) and non-parametric (Dijkstra's path planning on a graph of image embeddings) elements and applies it to navigation in unseen 3D environments (Doom). The path planning in unseen environments is done in the following way: first a human operator traverses the entire environment by controlling the agent and collecting a long episode of 10k frames that are put into a chain graph. Then loop closures are automatically detected using image similarity in feature space, using a localization feed-forward ResNet (trained using a DrLIM-like triplet loss on time-similar images), resulting in a topological graph where edges correspond to similar viewpoints or similar time points. For a given target position image and agent start position image, a nearest neighbor search-powered Dijkstra path planning is done on the graph to create a list of waypoint images. The pairs of (current image, next waypoint) images are then fed to a feed-forward locomotion (policy) network, trained in a supervised manner. The paper does not discuss at all the problems arising when the images are ambiguous: since the localization network is feed-forward, surely there must be images that are ambiguously mapped to different graph areas and are closing loops erroneously? The problem is mitigated by the fact that a human operator controls the agent, making sure that the agent's viewpoint is clear, but the method will probably fail if the agent is learning to explore the maze autonomously, bumping into walls and facing walls. The screenshots in Figure 3 suggest that the walls have a large variety of textures and decorations, making each viewpoint potentially unique, unlike the environments in (Mirowski et al, 2017), (Jaderberg et al, 2017) and (Mnih et al, 2016). Most importantly, the navigation is not based on RL at all, and ignores the problem of exploration altogether. A human operator labels 10k frames by playing the game and controlling the agent, to show it what the maze looks like and what paths are to be taken. As a consequence, comparison to end-to-end RL navigation methods is unclear, and should be stressed in the manuscript. This is NOT a proper navigation agent. Additional baselines should be evaluated: 1) a fully Dijkstra-based baseline where the direction of motion of the agent along the edge is retrieved and used to guide the agent (i.e., the policy becomes a lookup table on image pairs) and 2) the same but with the localization network replaced by image similarities in pixel space or some image descriptor space (e.g., SURF, ORB, etc…). It seems to me that those baselines would be very strong (a sketch of the non-parametric machinery involved is given below). Another baseline is missing: (Oh et al, 2016, "Control of Memory, Active Perception, and Action in Minecraft"). The paper is not without merit: the idea of storing experiences in a graph and of using landmark similarity rather than metric embeddings is interesting. Unfortunately, that episodic memory is not learned (e.g., Neural Turing Machines or Memory Networks). In summary, just like the early paper released in 2010 about Kinect-based RGBD SLAM: lots of excitement but potential disappointment when the method is applied to an actual mobile robot, navigating in normal environments with visual ambiguity and white walls.
The paper should ultimately be accepted to this conference to provide a baseline for the community (once the claims are revised), but I stress that the claims of learning to navigate in unseen environments are unsubstantiated, as the method is neither end-to-end learned (as it relies on human input and heuristic path planning) nor capable of exploring unseen environments with visual ambiguity.
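For concreteness, the non-parametric part of the pipeline (and of the Dijkstra-based baseline suggested above) boils down to something like the following sketch; the similarity threshold, graph construction details, and function names are my guesses, not the authors' code:

```python
import numpy as np
import networkx as nx

def build_topological_graph(embeddings, sim_threshold=0.95, temporal_window=1):
    """Nodes are frames of the exploration sequence; edges connect temporal
    neighbors and pairs whose embeddings are similar (visual loop closures)."""
    n = len(embeddings)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n - temporal_window):
        g.add_edge(i, i + temporal_window)              # temporal edges
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    for i in range(n):
        for j in range(i + temporal_window + 1, n):
            if sims[i, j] > sim_threshold:
                g.add_edge(i, j)                        # loop-closure edges
    return g

def plan_waypoints(g, embeddings, start_emb, goal_emb):
    """Localize start/goal by nearest neighbor in embedding space, then run
    shortest-path planning to get a list of waypoint node indices."""
    def nearest(e):
        return int(np.argmin(np.linalg.norm(embeddings - e, axis=1)))
    return nx.shortest_path(g, nearest(start_emb), nearest(goal_emb))
```

With this in place, the only learned components are the embedding used for loop closure/localization and the low-level locomotion policy, which is exactly why I hesitate to call the system end-to-end learned navigation.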
iclr_2018_B13EC5u6W
Interpretability and small labelled datasets are key issues in the practical application of deep learning, particularly in areas such as medicine. In this paper, we present a semi-supervised technique that addresses both these issues simultaneously. We learn dense representations from large unlabelled image datasets, then use those representations to both learn classifiers from small labeled sets and generate visual rationales explaining the predictions. Using chest radiography diagnosis as a motivating application, we show our method has good generalization ability by learning to represent our chest radiography dataset while training a classifier on a separate set from a different institution. Our method identifies heart failure and other thoracic diseases. For each prediction, we generate visual rationales for positive classifications by optimizing a latent representation to minimize the probability of disease while constrained by a similarity measure in image space. Decoding the resultant latent representation produces an image without apparent disease. The difference between the original and the altered image forms an interpretable visual rationale for the algorithm's prediction. Our method simultaneously produces visual rationales that compare favourably to previous techniques and a classifier that outperforms the current state-of-the-art.
The authors address two important issues: semi-supervised learning from relatively few labelled training examples in the presence of many unlabelled examples, and visual rationale generation: explaining the outputs of the classifier by overlaying a visual rationale on the original image. The focus is mainly on medical image classification but the approach could potentially be useful in many more areas. The main idea is to train a GAN on the unlabeled examples to create a mapping from a lower-dimensional space in which the input features are approximately Gaussian, to the space of images, and then to train an encoder to map the original images into this space minimizing reconstruction error with the GAN weights fixed. The encoder is then used as a feature extractor for classification and regression of targets (e.g. heart disease). The visual rationales are generated by optimizing the encoded representation to simultaneously reconstruct an image close to the original and to minimize the probability of the target class. This gives an image that is similar to the original but with features that caused the classification of the disease removed. The resulting image can be subtracted from the original to highlight problematic areas. (I sketch my reading of this procedure after the comments below.) The approach is evaluated on an in-house dataset and a public NIH dataset, demonstrating good performance, and illustrative visual rationales are also given for MNIST. The idea in the paper is, to my knowledge, novel, and represents a good step toward the important task of generating interpretable visual rationales. There are a few limitations, e.g. the difficulty of evaluating the rationales, and the fact that the resolution is fixed to 128x128 (which means discarding many pixels collected via ionizing radiation), but these are readily acknowledged by the authors in the conclusion. Comments: 1) There are a few details missing, like the batch sizes used for training (it is difficult to relate epochs to iterations without this). Also, the number of hidden units in the 2-layer MLP from para 5 in Sec 2. 2) It would be good to include PSNR/MSE figures for the reconstruction task (fig 2) to have an objective measure of error. 3) Sec 2 para 4: "the reconstruction loss on the validation set was similar to the reconstruction loss on the validation set" -- perhaps you could be a little more precise here. E.g. learning curves would be useful. 4) Sec 2 para 5: "paired with a BNP blood test that is correlated with heart failure" I suspect many readers of ICLR, like myself, will not be well versed in this test, correlation with HF, diagnostic capacity, etc., so a little further explanation would be helpful here. The term "correlated" is a bit too broad, and it is difficult for a non-expert to know exactly how correlated this is. It is also a little confusing that you begin this paragraph saying that you are doing a classification task, but then it seems like a regression task which may be postprocessed to give a classification. Anyway, a clearer explanation would be helpful. Also, if this test is diagnostic, why use X-rays for diagnosis in the first place? 5) I would have liked to have seen some indicative times on how long the optimization takes to generate a visual rationale, as this would have practical implications.
6) Sec 2 para 7: "L_target is a target objective which can be a negative class probability or in the case of heart failure, predicted BNP level" -- for predicted BNP level, are you treating this as a probability and using cross entropy here, or mean squared error? 7) As always, it would be illustrative if you could include some examples of failure cases, which would be helpful both in suggesting ways of improving the proposed technique, and in providing insight into where it may fail in practical situations.
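For reference, my understanding of the rationale-generation step described above is an optimization over the latent code roughly like the sketch below; the loss weighting, optimizer, and function names are placeholders of my own, not the authors' implementation:

```python
import torch

def generate_rationale(encoder, generator, classifier, x, steps=200, lam=1.0, lr=0.01):
    """Optimize a latent code so the decoded image stays close to x in image
    space but the classifier no longer predicts disease; the pixel-wise
    difference from the original is the visual rationale."""
    z = encoder(x).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_hat = generator(z)
        # L_target: the disease probability (or predicted BNP level) to drive down.
        target_loss = classifier(x_hat).mean()
        # Similarity constraint in image space keeps x_hat close to the original.
        sim_loss = torch.nn.functional.l1_loss(x_hat, x)
        loss = target_loss + lam * sim_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    rationale = (x - generator(z)).detach()   # highlighted "diseased" regions
    return rationale
```

Reporting how many such optimization steps (and how much wall-clock time) are needed per image would directly answer comment 5.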
iclr_2018_rkQu4Wb0Z
Performance of Deep Neural Network (DNN) heavily depends on the characteristics of hidden layer representations. Unlike the codewords of channel coding, however, the representations of learning cannot be directly designed or controlled. Therefore, we develop a family of penalty regularizers where each one aims to affect one of the representation's statistical properties such as sparsity, variance, or covariance. The regularizers are extended to perform class-wise regularization, and the extension is found to provide an outstanding shaping capability. A variety of statistical properties are investigated for ten different regularization strategies including dropout and batch normalization, and several interesting findings are reported. Using the family of regularizers, performance improvements are confirmed for MNIST, CIFAR-100, and CIFAR-10 classification problems. But more importantly, our results suggest that understanding how to manipulate statistical properties of representations can be an important step toward understanding DNN, and that the role and effect of DNN regularizers need to be reconsidered.
1. Summary The authors of the paper compare the learning of representations in DNNs with Shannon's channel coding theory, which deals with reliably sending information through channels. In channel coding theory the statistical properties of the coding of the information can be designed to fit the task at hand. With DNNs the representations cannot be designed in the same way. But the representations, learned by DNNs, can be affected indirectly by applying regularization. Regularizers can be designed to affect statistical properties of the representations, such as sparsity, variance, or covariance. The paper extends the regularizers to perform per-class regularization. This makes sense, because, for example, forcing the variance of a representation to go towards zero is undesirable as it would state that the unit always has the same output no matter the input. On the other hand, having zero variance for a class is desirable as it means that the unit has a consistent activation for all samples of the same class (a minimal sketch of such a class-wise penalty is given at the end of this review). The paper compares different regularization techniques regarding their error performance. They find that applying representation regularization outperforms classical approaches such as L1 and L2 weight regularization. They also find that performing representation regularization on the last layer achieves the best performance. Class-wise methods generally outperform methods that apply regularization on all classes. 2. Remarks Shannon's channel coding theory was used by the authors to derive regularizers that manipulate certain statistical properties of representations learned by DNNs. In the reviewer's opinion, there is no theoretical connection between DNNs and channel theory. For one, DNNs are not channels in the sense that they transmit information. DNNs are rather pipes that transform information from one domain to another, where representations are learned as an intermediate model as the information is being transformed. Noise introduced in the process is not due to a faulty channel but due to the quality of the learned representations themselves. The paper falls short in explaining how DNNs and Shannon's channel coding theory fit together theoretically and how they used it to derive the proposed regularizers. Although the theoretical gap between the two was not properly bridged by the authors, channel coding theory is still a good metaphor for what they were trying to achieve. The authors recognize that there is similar research being done independently by Belharbi et al. (2017). The similarities and differences between the proposed work and Belharbi et al. should be discussed in more detail. The authors conclude that it is unclear which statistical properties of representations are generally helpful when being strengthened. It would be nice if they had derived at least a set of rules of thumb, especially because none of the regularizers described in the paper targets only one specific statistical property but several. One good example that was provided is that L1-rep consistently failed to train on CIFAR-100, because too much sparsity can hurt performance when there are many different classes (100 in this case). These kinds of conclusions will make it easier to transfer the presented theory into practice. 3. Conclusion The comparison between DNNs and Shannon's channel coding theory stands on shaky ground. The proposed regularizers are rather simple, but perform well in the experiments.
The effect of each regularizer on the statistical properties of the representation and the relations to previous work (especially Belharbi et al. (2017)) should be discussed in more detail.
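To be concrete about what the class-wise variant amounts to in practice, a minimal per-class variance penalty on the representation could look like the sketch below (my own code and naming, not the authors'; the choice of layer and weighting factor are assumptions):

```python
import torch

def classwise_variance_penalty(h, y, num_classes):
    """Penalize the variance of hidden representations h within each class:
    low within-class variance means a consistent per-class activation pattern,
    whereas penalizing variance over *all* samples would be undesirable."""
    penalty = h.new_zeros(())
    for c in range(num_classes):
        hc = h[y == c]
        if hc.shape[0] > 1:
            penalty = penalty + hc.var(dim=0, unbiased=False).mean()
    return penalty / num_classes

# Example usage on the last-layer representation h:
# total_loss = cross_entropy_loss + beta * classwise_variance_penalty(h, y, K)
```

Including pseudocode of this kind in the paper would make the per-class extension, and how it differs from the all-class version, immediately clear.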
iclr_2018_rJk51gJRb
Workshop track -ICLR 2018 ADVERSARIAL POLICY GRADIENT FOR ALTERNATING MARKOV GAMES Policy gradient reinforcement learning has been applied to two-player alternate-turn zero-sum games, e.g., in AlphaGo, self-play REINFORCE was used to improve the neural net model after supervised learning. In this paper, we emphasize that two-player zero-sum games with alternating turns, which have been previously formulated as Alternating Markov Games (AMGs), are different from standard MDPs because of their two-agent nature. We exploit the difference in associated Bellman equations, which leads to different policy iteration algorithms. As the policy gradient method is a kind of generalized policy iteration, we show how these differences in policy iteration are reflected in policy gradient for AMGs. We formulate an adversarial policy gradient and discuss possibilities for developing better policy gradient methods other than self-play REINFORCE. The core idea is to estimate the minimum rather than the mean for the "critic". Experimental results on the game of Hex show the modified Monte Carlo policy gradient methods are able to learn better pure neural net policies than the REINFORCE variants. To apply the learned neural weights to Hex on multiple board sizes, we describe a board-size-independent neural net architecture. We show that when combined with search, using a single neural net model, the resulting program consistently beats MoHex 2.0, the previous state-of-the-art computer Hex player, on board sizes from 9×9 to 13×13.
The paper makes the simple but important observation that (deep) reinforcement learning in alternating Markov games requires a min-max formulation of the Bellman equation as well as careful attention to the way in which one alternates solving for both players' policies in a policy iteration setting. While some of the core algorithmic insights regarding Algorithms 3 & 4 in the paper stem from previous work (Condon, 1990; Hoffman & Karp, 1966), I was not actually aware of these previous results until I reviewed this paper. A nice corollary of Algorithms 3 & 4 is that they make for a straightforward adaptation of policy gradient algorithms since when optimizing one policy, the other is fixed to the greedy policy. In general, it would be nice to have the algorithms specified as formal algorithms as opposed to text-based outlines. I found myself reading and re-reading descriptions to make sure I understood what math was being implied by the descriptions. Section 6 > Hex is simpler than Go in the sense that perfect play can > often be achieved whenever virtual connections are found > by H-Search It is not clear here what virtual connections are, what H-Search is, and how these imply perfect play, if perfect play as previously discussed is unknown. Overall, the results on Hex for AMCPG-A and AMCPG-B vs. standard REINFORCE variants currently used are very encouraging. That said, empirically it is always a question of whether these results are specific to Hex. Because this paper is not proposing the best Hex player (i.e., the winning rate against Wolve never exceeds 0.5), I think it is quite reasonable to request the authors to compare AMCPG-A and AMCPG-B to standard REINFORCE variants on other games (they do not need to be as difficult as Hex). Finally, assuming that the results do generalize to other games, I am left wondering about the significance of the contribution. On one hand, the authors have introduced me to literature I was not aware of, but on the other hand, their actual novel contribution is a rather straightforward adaptation of ideas in the literature to policy gradients (that could be formalized in a more technically precise way) with an evaluation on a single type of game. This is a useful contribution no doubt, but I am concerned with whether it meets the significance level that I am used to with accepted ICLR papers in previous years.
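For readers unfamiliar with the cited literature, the distinction the paper builds on can be summarized by the Bellman equations of an alternating Markov game, with a max player acting in states S_1 and a min player in states S_2 (my paraphrase of the standard formulation, not copied from the paper):

```latex
% Max player moves in states s \in S_1, min player in states s \in S_2:
V(s) = \max_{a} \sum_{s'} P(s' \mid s, a)\,\bigl[\, r(s, a, s') + \gamma V(s') \,\bigr], \qquad s \in S_1,
\qquad
V(s) = \min_{a} \sum_{s'} P(s' \mid s, a)\,\bigl[\, r(s, a, s') + \gamma V(s') \,\bigr], \qquad s \in S_2.
```

Writing Algorithms 3 & 4 out against these equations, rather than as text, would address my formalization concern directly.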
iclr_2018_HJnQJXbC-
New types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. In particular, models that exploit structured input via complex and instance-dependent control flow are difficult to accelerate using existing algorithms and hardware that typically rely on minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently, even for small minibatch sizes, resulting in shorter overall training times. Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.
This paper presents AMPNet, which addresses parallel training for dynamic networks. This is accomplished by building a static graph-like IR that can serve as a target for compilation for high-level libraries such as TensorFlow. In the IR each node of the computation graph is a parallel worker, and synchronization occurs when a sufficient number of gradients have been accumulated. The IR uses constructs such as concat, split, broadcast, etc., allowing dynamic, instance-dependent control flow decisions. The primary improvement in training performance is from reducing synchronization costs. Comments for the author: The paper proposes a solution to the important problem of model-parallel training, especially with dynamic batching, which is increasingly important as we see more complex models where batching is not straightforward. The proposed solution can be effective. However, this is not really evident from the evaluation. Furthermore, the paper can be a somewhat dense read for the ICLR audience. I have the following additional concerns: 1) The paper stresses new hardware throughout the paper. The paper also alludes to a "simulator" of a 1 TFLOPs FPGA in the conclusion. However, your entire evaluation is over CPU. The said simulator is a bunch of sleep() calls (unless some details are skipped). I would encourage the authors to remove these references since these new devices have very different hardware behavior. For example, on a real constrained device, you may not enjoy a large L2 cache which you are benefitting from by doing an entire evaluation over CPUs. Likewise, the vector instruction processing behavior is also very different since these devices have limited power budgets and may not be able to support AVX style instructions. Unless an actual simulator like GEM5 is used, a correct representation of what hardware environment is being used is necessary before making claims that this is ideal for emerging hardware. 2) To continue on the hardware front and the evaluation, I feel that for this paper to be accepted or appreciated, simulated hardware is not necessary. Personally, I found the evaluation with simulated sleep functions more confusing than helpful. An appropriate evaluation for this paper can be just benefits over CPUs or GPUs. For example, you have a 7 TFLOPS device (e.g. a GPU or a CPU). Existing algorithms extract X TFLOPs of processing power and using your IR/system one gets Y effective TFLOPs and Y>X. This is all that is required. Currently, looking at your evaluation riddled with hypothetical hardware, it is unclear to me if this is helpful for existing hardware. For example, in Table 1, are Tensorflow numbers only provided over the 1 TFLOPs device (they correspond to the 1 TFLOPs column for all workloads except for MNIST)? Do you use the parallelism at all in your Tensorflow baseline? Please clarify. 3) How do you compare for dynamic batching with dynamic IR platforms like PyTorch? Furthermore, more details about how dynamic batching is happening in the benchmarks mentioned in Table 1 would be nice to have. Finally, an emphasis on the novel contributions of the paper would also be appreciated. 4) Finally, the evaluation appears to be sensitive to the two hyper-parameters introduced. Are they dataset-specific? I feel tuning them would be rather cumbersome for every model given how sensitive they are (Figure 5).
iclr_2018_ryALZdAT-
Workshop track -ICLR 2018 FEATURE INCAY FOR REPRESENTATION REGULARIZATION Softmax-based loss is widely used in deep learning for multi-class classification, where each class is represented by a weight vector and each sample is represented as a feature vector. Inspired by the fact that weight decay is a common practice to regularize the weight vectors, we investigate how to regularize the feature vectors since representation is also tunable in deep learning. One main observation is that elongating the feature norm of both correctly-classified and mis-classified feature vectors improves learning: (1) increasing the feature norm of correctly-classified examples enlarges the probability margin among different classes and ensures better generalization. (2) increasing the feature norm of mis-classified examples can up-weight the contribution from hard examples. Accordingly, we propose feature incay to regularize feature vectors by encouraging larger feature norm. Extensive empirical results on MNIST, CIFAR10, CIFAR100 and LFW demonstrate the effectiveness of feature incay.
The manuscript proposes to increase the norm of the last hidden layer to promote better classification accuracy. However, the motivation is a bit less convincing. Here are a few motivations that are mentioned. (1) Increasing the feature norm of correctly classified examples helps cross entropy, which is of course correct. However, it only decreases the training loss. How do we know it will not lead to overfitting? (2) Increasing the feature norm of mis-classified examples will make the gradient larger for self-correction. The manuscript proves this in Property 2; however, the proof seems incomplete. In Eq (7), increasing the feature norm would also affect the value of the term in parentheses. As an example, if a negative example is already mis-classified as a positive, and its current probability is very close to 1, then further increasing the feature norm would make the probability even closer to 1, leading to saturation and a smaller gradient. (3) Figure 1 shows that examples with larger feature norm tend to be predicted well. However, it is not very convincing since it is only a correlation rather than causation. Let's use simple linear softmax regression as a sanity check, where the features fed to the softmax are the raw input features rather than hidden units. Increasing the feature norm seems to be against the best practice of feature normalization, in which each feature after normalization has variance 1. The manuscript states that the feature norm won't be infinitely increased since there is an upper bound. However, the proof of Property 3 seems to apply only to certain cases where K < 2D. In addition, alpha appears in the formula of the upper bound, but what is the upper bound of alpha? The manuscript presents comprehensive experiments to test the proposed method. The results are good, since the proposed method outperforms other baselines on most datasets. But the results are not impressively strong. Minor issues: (1) For the proof of Property 3, it seems that alpha and beta are used before being defined. Are they the radii of two circles?
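To spell out the saturation concern in (2): write the feature vector as f = r u with norm r and unit direction u; for a softmax classifier with class weight vectors w_k (my notation, not the manuscript's), the predicted probability of class j is

```latex
p_j(r) \;=\; \frac{\exp\!\bigl(r\, w_j^{\top} u\bigr)}{\sum_{k} \exp\!\bigl(r\, w_k^{\top} u\bigr)},
\qquad f = r\,u,\quad r = \lVert f \rVert .
```

If the example is already (mis-)classified as class j, i.e. w_j^T u > w_k^T u for all k != j, then p_j(r) increases monotonically toward 1 as r grows, so the cross-entropy gradient shrinks rather than grows; this is exactly the saturation scenario that Property 2 does not appear to cover.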
iclr_2018_rkA1f3NpZ
Deep learning has become the state of the art approach in many machine learning problems such as classification. It has recently been shown that deep learning is highly vulnerable to adversarial perturbations. Taking the camera systems of self-driving cars as an example, small adversarial perturbations can cause the system to make errors in important tasks, such as classifying traffic signs or detecting pedestrians. Hence, in order to use deep learning without safety concerns a proper defense strategy is required. We propose to use ensemble methods as a defense strategy against adversarial perturbations. We find that an attack leading one model to misclassify does not imply the same for other networks performing the same task. This makes ensemble methods an attractive defense strategy against adversarial attacks. We empirically show for the MNIST and the CIFAR-10 data sets that ensemble methods not only improve the accuracy of neural networks on test data but also increase their robustness against adversarial perturbations.
In this manuscript, the authors empirically investigated the robustness of several different deep neural network ensembles to two types of attacks, namely FGSM and BIM, on two popular datasets, MNIST and CIFAR10. The authors concluded that the ensembles are more accurate on both clean and adversarial samples than a single deep neural network. Therefore, the ensembles are more robust in terms of the ability to correctly classify adversarial examples. As the authors stated, an attack that is designed to fool one network does not necessarily fool the other networks in the same way. This is likely why ensembles appear more robust than single deep learners. However, the robustness of ensembles to white-box attacks that are generated from the ensemble is still low for FGS. Generally speaking, although FGS attacks generated from one network fool the whole ensemble less, FGS adversaries generated from a given ensemble are still able to effectively fool it. Therefore, if the attacker has access to the ensemble, or even knows the classification system based on that ensemble, then the ensemble-based system is still vulnerable to attacks generated specifically from it. Simple ensemble methods are not likely to confer significant robustness gains against adversaries. In contrast to the FGS results, surprisingly, BIM-Grad1 is able to fool the ensemble more than BIM-Grad2. Therefore, it seems that if the attacker makes BIM adversaries from only a single classifier, then she can simply and yet effectively mislead the whole ensemble. In comparison to BIM-Grad2, the BIM-Grad1 results show that BIM attacks from one network (BIM-Grad1) can more successfully fool the other different networks in the ensembles in a similar way! BIM-Grad2 is not able to fool the ensemble-based system nearly as much, even though this attack is generated from the ensemble (white-box attack). In order to confirm the robustness of the ensembles to BIM attacks, the authors could do more experiments by generating BIM-Grad2 attacks with a higher number of iterations. Indeed, the low number of iterations might cause the lower success rate of generating adversaries by BIM-Grad2. In fact, BIM adversaries from the ensembles might require more iterations to effectively fool the majority of the members in the ensembles. Therefore, increasing the number of iterations could increase the success rate of generating BIM-Average Grad2 adversaries. Note that in this case, it is recommended to compare the amount of distortion (perturbation) for different numbers of iterations in order to indicate the effectiveness of the ensembles against white-box BIM attacks. Despite averaging the output probabilities to compute the ensemble's final prediction, the authors generated the adversaries from the ensemble by computing the sum of the gradients of the classifiers' losses. A proper approach would have been to average these gradients (a sketch of what I have in mind is given below). The fact that the sum is not divided by the number of members (i.e., sum of gradients instead of average of gradients) increases the step size of the adversarial method proportionally to the ensemble size, raising questions about the validity of the comparison with the single-model adversarial generation. Overall, I found the paper to have several methodological flaws in the experimental part, and to be rather light in terms of novel ideas. As noticed in the introduction, the idea of using ensembles for enhancing robustness has already been proposed. Making a paper only to restate it is too light for acceptance.
Moreover, the experimental setup uses a lot of space for comparing results on standard datasets (i.e., MNIST and CIFAR10), even including a long presentation of these datasets. Several issues are raised in the current experiments and require adjustments. The experiments should also be more elaborate to make the case stronger, following at least some of the indications provided.
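Concretely, the comparison I would consider fair crafts the ensemble adversary from the average, rather than the sum, of the members' loss gradients, e.g. along these lines (a rough sketch with placeholder names, not the authors' code):

```python
import torch

def fgsm_from_ensemble(models, x, y, epsilon):
    """Craft an FGSM adversary from an ensemble using the *average* of the
    members' loss gradients, so the effective step size does not grow with
    the number of members (as it would with a plain sum of gradients)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.stack([
        torch.nn.functional.cross_entropy(m(x_adv), y) for m in models
    ]).mean()                      # mean over members == averaged gradients
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

By linearity, taking the mean of the losses is equivalent to averaging the gradients, so the effective step size no longer grows with the ensemble size and the comparison with single-model FGSM becomes meaningful.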
iclr_2018_HktK4BeCZ
Published as a conference paper at ICLR 2018 LEARNING DEEP MEAN FIELD GAMES FOR MODELING LARGE POPULATION BEHAVIOR We consider the problem of representing collective behavior of large populations and predicting the evolution of a population distribution over a discrete state space. A discrete time mean field game (MFG) is motivated as an interpretable model founded on game theory for understanding the aggregate effect of individual actions and predicting the temporal evolution of population distributions. We achieve a synthesis of MFG and Markov decision processes (MDP) by showing that a special MFG is reducible to an MDP. This enables us to broaden the scope of mean field game theory and infer MFG models of large real-world systems via deep inverse reinforcement learning. Our method learns both the reward function and forward dynamics of an MFG from real data, and we report the first empirical test of a mean field game model of a real-world social media population.
This paper attacks an important problem with an interesting and promising methodology. The authors deal with inference in models of collective behavior, specifically how to infer the parameters of a mean field game representation of collective behavior. The technique the authors innovate is to specify a mean field game as a model, and then use inverse reinforcement learning to learn the reward functions of agents in the mean field game. This work has many virtues, and could be an impactful piece. There is still minimal work at the intersection of machine learning and collective behavior, and this paper could help to stimulate the growth of that intersection. The application to collective behavior could be an interesting novel application to many in machine learning, and conversely the inference techniques that are innovated should be novel to many researchers in collective behavior. At the same time, the scientific content of the work has critical conceptual flaws. Most fundamentally, the authors appear to implicitly center their work around highly controversial claims about the ontological status of group optimization, without the careful justification necessary to make this kind of argument. In addition to that, the authors appear to implicitly assume that utility function inference can be used for causal inference. That is, there are two distinct mistakes the authors make in their scientific claims: 1) The authors write as if mean field games represent population optimization (Mean field games are not about what a _group_ optimizes; they are about what _individuals_ optimize, and this individual optimization leads to certain patterns in collective behaviors) 2) The authors write as if utility/reward function inference alone can provide causal understanding of collective or individual behavior 1 - I should say that I am highly sympathetic to the claim that many types of collective behavior can be viewed as optimizing some kind of objective function. However, this claim is far from mainstream, and is in fact highly contested. For instance, many prominent pieces of work in the study of collective behavior have highlighted its irrational aspects, from the madness of crowds to herding in financial markets. Since it is so fringe to attribute causal agency to groups, let alone optimal agency, in the remainder of my review I will give the authors the benefit of the doubt and assume when they say things like "population behavior may be optimal", they mean "the behavior of individuals within a population may be optimal". If the authors do mean to say this, they should be more careful about their language use in this regard (individuals are the actors, not populations). If the authors do indeed mean to attribute causal agency to groups (as suggested in their MDP representation), they will run into all the criticisms I would have about an individual-level analysis and more. Suffice it to say, mean field games themselves don't make claims about aggregate-level optimization. A Nash equilibrium achieves a balance between individual-level reward functions. These reward functions are only interpretable at the individual level. There is no objective function the group itself in aggregate is optimizing in mean field games. For instance, even though the mean field game model of the Mexican wave produces wave solutions, the model is premised on people having individual utility functions that lead to emergent wave behavior.
The model does not have the representational capacity to explain that people actually intend to create the emergent behavior of a wave (even though in this case they do). Furthermore, the fact that mean field games aggregate to a single-agent MDP does not imply that the group can rightfully be thought of as an agent optimizing the reward function, because there is an exact correspondence between the rewards of the individual agents in the MFG and of the aggregate agent in the MDP by construction. 2 - The authors also claim that their inference methods can help explain why people choose to talk about certain topics. As for the extent to which utility / reward function inference can provide causal explanations of individual (or collective) behavior, the argument that is invariably brought against a claim of optimization is that almost any behavior can be explained as optimal post-hoc with enough degrees of freedom in the utility function of the behavioral model. Since optimization frameworks are so flexible, they have little explanatory power and are hard to falsify. In fact, there is literally no way that the modeling framework of the authors even affords the possibility that individual/collective behavior is not optimal. Optimality is taken as an assumption that allows the authors to infer what reward function is being optimized. The authors state that the reward function they infer helps to interpret collective behavior because it reveals what people are optimizing. However, the reward function actually discovered is not interpretable at all. It is simply a summary of the statistical properties of changes in popularity of the topics of conversation in the Twitter data the authors study. To quote the authors' insights: "The learned reward function reveals that a real social media population favors states characterized by a highly non-uniform distribution with negative mass gradient in decreasing order of topic popularity, as well as transitions that increase this distribution imbalance." The authors might as well have simply visualized the topic popularities and changes in popularities to arrive at such an insight. To take the authors' claims literally, we would say that people have an intrinsic preference for everyone to arbitrarily be talking about the same thing, regardless of the content or relevance of that topic. To draw an analogy, this is like observing that on some days everybody on the street is carrying open umbrellas and on other days not, and inferring that the people on the street have a preference for everyone having their umbrellas open together (and the model would then predict that if one person opens an umbrella on a sunny day, everybody else will too). To the authors' credit, they do make a brief attempt to present empirical evidence for their optimization view, stating succinctly: "The high prediction accuracy of the learned policy provides evidence that real population behavior can be understood and modeled as the result of an emergent population-level optimization with respect to a reward function." Needless to say, this one-sentence argument for a highly controversial scientific claim falls flat on closer inspection. Setting aside the issues of correlation versus causation, predictive accuracy does not in and of itself provide scientific plausibility. When an n-gram model produces text that is in the style of a particular writer, we do not conclude that the writer must have been composing based on the n-gram's generative mechanism.
Predictive accuracy only provides evidence when combined in the first place with scientific plausibility through other avenues of evidence. The authors could attempt to address these issues by making what is called an "as-if" argument, but it's not even clear such an argument could work here in general. With all this in mind, it would be more instructive to show that the inference method the authors introduce could infer the correct utility functions used in standard mean field games, such as modeling traffic congestion and the Mexican wave. -- All that said, the general approach taken in the authors' work is highly promising, and there are many fruitful directions I would be excited to see this work taken --- e.g., combining endogenous and exogenous rewards or looking at more complex applications. As a technical contribution, the paper is wonderful, and I would enthusiastically support acceptance. The authors simply either need to be much more careful with the scientific claims about collective behavior they make, or limit the scope of the contribution of the paper to be modeling / inference in the area of collective behavior. Mean field games are an important class of models in collective behavior, and being able to infer their parameters is a nice step forward purely due to the importance of that class of games. Identifying where the authors' inference method could be applied to draw valid scientific conclusions about collective behavior could then be an avenue for future work. Examples of plausible scientific applications might include parameter inference in settings where mean field games are already typically applied in order to improve the fit of those models or to learn about trade-offs people make in their utility functions in those settings. -- Other minor comments: - (Introduction) It is not clear at all how the Arab Spring, Black Lives Matter, and fake news are similar --- i.e., whether a single model could provide insight into these highly heterogeneous events --- nor is it clear what end the authors hope to achieve by modeling them --- the ethics of modeling protests in a field crowded with powerful institutional actors is worth carefully considering. - If I understand correctly, the fact that the authors assume a factored reward function seems limiting. Isn't the major benefit of game theory its ability to accommodate utility functions that depend on the actions of others? - The authors state that one of their essential insights is that "solving the optimization problem of a single-agent MDP is equivalent to solving the inference problem of an MFG." This statement feels a bit too cute at the expense of clarity. The authors perform inference via inverse-RL, so it is clearer to say the authors are attempting to use statistical inference to figure out what is being optimized. - The relationship between MFGs and a single-agent MDP is nice and a fine observation, but not as surprising as the authors frame it. Any multiagent MDP can be naively represented as a single-agent MDP where the agent has control over the entire population, and we already know that stochastic games are closely related to MDPs. It's therefore hard to imagine that there wouldn't be some sort of correspondence.
iclr_2018_Bk346Ok0W
Recent work on encoder-decoder models for sequence-to-sequence mapping has shown that integrating both temporal and spatial attentional mechanisms into neural networks increases the performance of the system substantially. We report on a new modular network architecture that applies an attentional mechanism not on temporal and spatial regions of the input, but on sensor selection for multi-sensor setups. This network called the sensor transformation attention network (STAN) is evaluated in scenarios which include the presence of natural noise or synthetic dynamic noise. We demonstrate how the attentional signal responds dynamically to changing noise levels and sensor-specific noise, leading to reduced word error rates (WERs) on both audio and visual tasks using TIDIGITS and GRID; and also on CHiME-3, a multi-microphone real-world noisy dataset. The improvement grows as more channels are corrupted as demonstrated on the CHiME-3 dataset. Moreover, the proposed STAN architecture naturally introduces a number of advantages including ease of removing sensors from existing architectures, attentional interpretability, and increased robustness to a variety of noise environments.
Summary: The authors consider the use of attention for sensor, or channel, selection. The idea is tested on several speech recognition datasets, including TIDIGITS and CHiME3, where the attention is over audio channels, and GRID, where the attention is over video channels. Results on TIDIGITS and GRID show a clear benefit of attention (called STAN here) over concatenation of features. The results on CHiME3 show a gain over the CHiME3 baseline in channel-corrupted data. Review: The paper reads well, but as a standard application of attention lacks novelty. The authors mention that related work is generalized but fail to differentiate their work relative to even the cited references (Kim & Lane, 2016; Hori et al., 2017). Furthermore, while their approach is sold as a general sensor fusion technique, most of their experimentation is on microphone arrays with attention directly over magnitude-based input features, which cannot utilize the most important feature for signal separation using microphone arrays---signal phase. Their results on CHiME3 are terrible: the baseline CHiME3 system is very weak, and their system is only slightly better! The winning system has a WER of only 5.8% (vs. 33.4% for the baseline system), while more than half of the submissions to the challenge were able to cut the WER of the baseline system in half or better! http://spandh.dcs.shef.ac.uk/chime_challenge/chime2015/results.html. Their results wrt channel corruption on CHiME3, on the other hand, are reasonable, because the model matches the problem being addressed… Overall Assessment: In summary, the paper lacks novelty wrt technique, and as an "application-of-attention" paper fails to be even close to competitive with the state-of-the-art approaches on the problems being addressed. As such, I recommend that the paper be rejected. Additional comments: - The experiments in general lack sufficient detail: Were the attention masks trained supervised or unsupervised? Were the baselines with concatenated features optimized independently? Why is there no multi-channel baseline for the GRID results? - Issue with noise bursts plot (Input 1+2 attention does not sum to 1) - A concatenation-based model can handle a variable #inputs: it just needs to be trained/normalized properly during test (i.e. like dropout)…
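For context, the kind of module under discussion (a per-sensor transformation followed by attention scores, a softmax over sensors, and a weighted merge) would typically look something like the sketch below; this is my own reading of the described architecture, with assumed module names and dimensions, not the authors' code:

```python
import torch
import torch.nn as nn

class SensorAttentionMerge(nn.Module):
    """Score each sensor's transformed features, softmax over the sensor axis,
    and merge the sensors by a weighted sum of their feature vectors."""
    def __init__(self, feat_dim):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)

    def forward(self, sensor_feats):           # (batch, num_sensors, feat_dim)
        scores = self.scorer(sensor_feats)     # (batch, num_sensors, 1)
        weights = torch.softmax(scores, dim=1) # attention over sensors
        merged = (weights * sensor_feats).sum(dim=1)
        return merged, weights.squeeze(-1)
```

Whether a scorer of this kind sees any supervision beyond the recognition loss is exactly the detail I would like the authors to state explicitly.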
iclr_2018_rk07ZXZRb
Published as a conference paper at ICLR 2018 LEARNING AN EMBEDDING SPACE FOR TRANSFERABLE ROBOT SKILLS We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropy-regularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.
The paper presents a new approach for hierarchical reinforcement learning which aims at learning a versatile set of skills. The paper uses a variational bound for entropy regularized RL to learn a versatile latent space which represents the skill to execute. The variational bound is used to diversify the learned skills as well as to make the skills identifiable from their state trajectories. The algorithm is tested on a simple point mass task and on simulated robot manipulation tasks. This is a very interesting paper which is also very well written. I like the presented approach of learning the skill embeddings using the variational lower bound. It represents one of the most principled approaches for hierarchical RL. Pros: - Interesting new approach for hierarchical reinforcement learning that focuses on skill versatility - The variational lower bound is one of the most principled formulations for hierarchical RL that I have seen so far - The results are convincing Cons: - More comparisons against other DRL algorithms such as TRPO and PPO would be useful Summary: This is an interesting deep reinforcement learning paper that introduces a new principled framework for learning versatile skills. This is a good paper. More comments: - There are several papers that focus on learning versatile skills in the context of movement primitive libraries, see [1],[2],[3]. These papers should be discussed. [1] Daniel, C.; Neumann, G.; Kroemer, O.; Peters, J. (2016). Hierarchical Relative Entropy Policy Search, Journal of Machine Learning Research (JMLR). [2] End, F.; Akrour, R.; Peters, J.; Neumann, G. (2017). Layered Direct Policy Search for Learning Hierarchical Skills, Proceedings of the International Conference on Robotics and Automation (ICRA). [3] Gabriel, A.; Akrour, R.; Peters, J.; Neumann, G. (2017). Empowered Skills, Proceedings of the International Conference on Robotics and Automation (ICRA).
iclr_2018_H1OQukZ0-
We propose an efficient online hyperparameter optimization method which uses a joint dynamical system to evaluate the gradient with respect to the hyperparameters. While similar methods are usually limited to hyperparameters with a smooth impact on the model, we show how to apply it to the probability of dropout in neural networks. Finally, we show its effectiveness on two distinct tasks.
Summary of paper: This work proposes an extension to an existing method (Franceschi 2017) to optimize regularization hyperparameters. Their method claims increased stability in contrast to the existing one. Summary of review: This is an incremental change of an existing method. This is acceptable as long as the incremental change significantly improves results or the paper presents some convincing theoretical arguments. I did not find either to be the case. The theoretical arguments are interesting but lacking in rigor. The proposed method introduces hyper-hyperparameters which may be hard to tune. The experiments are small scale and it is unclear how much the method improves over random or grid search. For these reasons, I cannot recommend this paper for acceptance. Comments: 1. Paper should cite Domke 2012 in related work section. 2. Should state and verify the conditions for application of the implicit function theorem on page 2. 3. Fix notation on page 3. Dot is used on the right hand side to indicate an argument but not on the left hand side for the equation after "with respect to \lambda". 4. I would like to see more explanation for the figure in Appendix A. What specific optimization is being depicted? This figure could be moved into the paper's main body with some additional clarification. 5. I did not understand the paragraph beginning with "This poor estimation". Is this just a restatement of the previous paragraph, which concluded convergence will be slow if \eta is too small? 6. I do not understand the notation used in equation (8) on page 4. Are <, > meant to denote less than/greater than or something else? 7. Discussion of weight decay on page 5 seems tangential to the main point of the paper. Could be reduced to a sentence or two. 8. I would like to see some experimental verification that the proposed method significantly reduces the dropout gradient variance (page 6), if the authors claim that tuning dropout probabilities is an area where they succeed and others don't. 9. Experiments are unconvincing. First, only one hyperparameter is being optimized and random search/grid search are sufficient for this. Second, it is unclear how close the proposed method is to finding the optimal regularization parameter \lambda. All one can conclude is that it performs slightly better than grid search with a small number of runs. I would have preferred to see an extensive grid search done to find the best possible \lambda, then seen how well the proposed method does compared to this. 10. I would have liked to see a plot of how the value of lambda changes throughout optimization. If one can initialize lambda arbitrarily and have this method find the optimal lambda, that is more impressive than a method that works simply because of a fortunate initialization. Typos: 1. Optimization -> optimize (bottom of page 2) 2. Should be a period after sentence starting "Several algorithms" on page 2. 3. In algorithm box on page 5, enable_projection is never used. Seems like warmup_time should also be an input to the algorithm.
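For completeness, comment 2 refers to the standard hypergradient derivation: at a minimizer w*(lambda) of the training objective, the implicit function theorem (which requires the Hessian below to be invertible, the condition I would like stated) gives, in my notation (cf. Bengio 2000; Domke 2012):

```latex
% Stationarity of the training loss at w^*(\lambda):
\nabla_{w} L_{\mathrm{train}}\bigl(w^{*}(\lambda), \lambda\bigr) = 0
\quad\Longrightarrow\quad
\frac{d w^{*}}{d \lambda} = -\bigl[\nabla^{2}_{w} L_{\mathrm{train}}\bigr]^{-1}\,\nabla_{\lambda}\nabla_{w} L_{\mathrm{train}},
% Hypergradient of the validation loss:
\frac{d L_{\mathrm{val}}}{d \lambda} = \nabla_{\lambda} L_{\mathrm{val}} + \Bigl(\frac{d w^{*}}{d \lambda}\Bigr)^{\!\top} \nabla_{w} L_{\mathrm{val}} .
```

Stating which of these terms the proposed joint dynamical system is meant to approximate, and under what assumptions, would make the theoretical argument considerably more rigorous.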
iclr_2018_H18WqugAb
Humans can understand and produce new utterances effortlessly, thanks to their systematic compositional skills. Once a person learns the meaning of a new verb "dax," he or she can immediately understand the meaning of "dax twice" or "sing and dax." In this paper, we introduce the SCAN domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences. We then test the zero-shot generalization capabilities of a variety of recurrent neural networks (RNNs) trained on SCAN with sequence-to-sequence methods. We find that RNNs can generalize well when the differences between training and test commands are small, so that they can apply "mix-and-match" strategies to solve the task. However, when generalization requires systematic compositional skills (as in the "dax" example above), RNNs fail spectacularly. We conclude with a proof-of-concept experiment in neural machine translation, supporting the conjecture that lack of systematicity is an important factor explaining why neural networks need very large training sets.
This paper focuses on the zero-shot learning compositional capabilities of modern sequence-to-sequence RNNs. Through a series of experiments and a newly defined dataset, it exposes the shortcomings of current seq2seq RNN architectures. The proposed dataset, called the SCAN dataset, is a selected subset of the CommonAI navigation tasks data set. This subset is chosen such that each command sequence corresponds to exactly one target action sequence, making it possible to apply standard seq2seq methods. Existing methods are then compared based on how accurately they can produce the target action sequence based on the command input sequence. The introduction covers relevant literature and nicely describes the motivation for later experiments. Description of the model architecture is largely done in the appendix; this puts the focus of the paper on the experimental section. This choice seems to be appropriate, since standard methods are used. Figure 2 is sufficient to illustrate the model to readers familiar with the literature. The experimental part establishes a baseline using standard seq2seq models on the new dataset, by exploring large variations of model architectures and a large part of the hyper-parameter space. This paper's experimentation section sets a positive example by exploring a comparatively large space of standard model architectures on the problem it proposes. This search enables the authors to come to convincing conclusions regarding the shortcomings of current models. The paper explores in particular: 1.) Model generalization to unknown data similar to the training set. 2.) Model generalization to data-sequences longer than the training set. 3.) Generalization to composite commands, where a part of the command is never observed in sequence in the training set. 4.) A recreation of a similar problem in the machine translation context. These experiments show that modern sequence to sequence models do not solve the systematicity problem, while making clear, by application to machine translation, why such a solution would be desirable. The SCAN data-set has the potential to become an interesting test-case for future research in this direction. The experimental results shown in this paper are clearly compelling in exposing the weaknesses of current seq2seq RNN models. However, where the paper falls a bit short is in the discussion / outlook in terms of suggestions of how one can go about tackling these shortcomings.
iclr_2018_H1Y8hhg0b
Published as a conference paper at ICLR 2018 LEARNING SPARSE NEURAL NETWORKS THROUGH L0 REGULARIZATION We propose a practical method for L0 norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of L0 regularization. However, since the L0 norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected L0 regularized objective is differentiable with respect to the distribution parameters. We further propose the hard concrete distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.
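As a rough illustration of the gating mechanism described above, the NumPy sketch below samples a hard concrete gate by stretching a binary concrete sample and clipping it with a hard-sigmoid, and computes the expected L0 penalty. The stretch limits gamma, zeta and the temperature beta are commonly used defaults assumed here, not values quoted from this abstract.

import numpy as np

def sample_hard_concrete(log_alpha, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1, rng=np.random):
    """Sample gates z in [0, 1] that have point masses at exactly 0 and 1."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=np.shape(log_alpha))
    # Binary concrete sample in (0, 1).
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1.0 - u) + log_alpha) / beta))
    # Stretch to (gamma, zeta), then clip (hard-sigmoid).
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

def expected_l0(log_alpha, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
    """Expected number of non-zero gates (the differentiable L0 surrogate)."""
    return np.sum(1.0 / (1.0 + np.exp(-(log_alpha - beta * np.log(-gamma / zeta)))))

log_alpha = np.zeros(5)              # one trainable gate parameter per weight (or group)
z = sample_hard_concrete(log_alpha)  # multiply the corresponding weights by z
penalty = expected_l0(log_alpha)     # add lambda * penalty to the training objective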
Learning sparse neural networks through L0 regularisation Summary: The authors introduce a gradient-based approach to minimise an objective function with an L0 sparse penalty. The problem is relaxed onto a continuous optimisation by changing an expectation over discrete variables (representing whether a variable is present or not) to an expectation over continuous variables, inspired by earlier work from Maddison et al (ICLR 2017) where a similar transformation was used to learn over discrete variable prediction tasks with neural networks. Here the application is to learn sparse feedforward networks in standard classification tasks, although the framework described is quite general and could be used to impose L0 sparsity on any objective function in principle. The method provides equivalent accuracy and sparsity to published state-of-the-art results on these datasets but it is argued that learning sparsity during the training process will lead to significant speed-ups - this is demonstrated by comparing to a theoretical benchmark (standard training with dropout) rather than through empirical testing against other implementations. Pros: The paper is well written and the derivation of the method is easy to follow with a good explanation of the underlying theory. Optimisation under L0 regularisation is a difficult and generally important topic and certainly has advantages over other sparse inference objective functions that impose shrinkage on non-sparse parameters. The work is put in context and related to some previous relaxation approaches to sparsity. The method allows for sparsity to be learned during training rather than after training (as in standard dropout approaches) and this allows the algorithm to obtain significant per-iteration speed-ups, which improves through training. Cons: The method is applied to standard neural network architectures and performance in terms of accuracy and final achieved sparsity is comparable to the state-of-the-art methods. Therefore the main advance is in terms of learning speed to obtain this similar performance. However, the learning speed-up is presented against a theoretical FLOPs estimate per iteration for a similar network with dropout. It would be useful to know whether the number of iterations to achieve a particular performance is equivalent for all the different architectures considered, e.g. does the proposed sparse learning method converge at the same rate as the others? I felt a more thorough experimental section would have greatly improved the work, focussing on this learning speed aspect. It was unclear how much tuning of the lambda hyper-parameter, which tunes the sparsity, would be required in a practical application since tuning this parameter would increase computation time. It might be useful to provide a full Bayesian treatment so that the optimal sparsity can be chosen through hyper-parameter learning. Minor point: it wasn’t completely clear to me why the fact that (3) is a variational approximation to a spike-and-slab is important (Appendix). I don’t see why the spike-and-slab is any more fundamental than the L0 norm prior in (2), it is just more convenient in Bayesian inference because it is an iid prior and potentially allows an informative prior over each parameter. In the context here this didn’t seem a particularly relevant addition to the paper.
iclr_2018_HJGXzmspb
Published as a conference paper at ICLR 2018 TRAINING AND INFERENCE WITH INTEGERS IN DEEP NEURAL NETWORKS Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as "WAGE" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.
The authors describe a method called WAGE, which quantizes all operands and operators in a neural network, specifically, the weights (W), activations (A), gradients (G), and errors (E). The idea is to use quantizers with clipping (denoted in the paper with Q(x,k)) and some additional operators like shift (denoted with shift(x)) and stochastic rounding. The main motivation of the authors in this work is to reduce the number of bits for representation in a network for all the WAGE operations and operands which influences the power consumption and silicon area in hardware implementations. After introducing the idea and related work, the authors in Section 3 give details about how to perform the quantization. They introduce the additional operators needed for training in such a network. Since quantization may lose some information, the authors need to quantize the signals in the network around the dynamic range in order not to "kill" the signal. The authors describe how to do that. Afterward, as in other techniques for quantization, they describe how to initialize the network values. Also, they argue that batch normalization in this network is replaced with the shift-quantize operations, and what matters in this case is (1) the relative values (“orientations”) and not the absolute values and (2) small values in errors are negligible. Afterward, the authors conduct experiments on MNIST, SVHN, CIFAR10, and ILSVRC12 datasets, where they show promising results compared to the errors provided by previous works. The WAGE parameters (i.e., the quantized no. of bits used) are 2-8-8-8, respectively. To understand WAGE better, the authors compare the test error rate on CIFAR10 with a vanilla CNN and show a small loss in using their network. The authors investigate mainly the bitwidth of errors and gradients. Overall, this paper is an accept since it shows good performance on standard problems and invents some nice tricks to implement NNs in hardware, for *both* training and inference. For inference only, other works have more to offer but this is a promising technique for learning. The things that are still missing in this work are some power reduction estimates as well as area reduction estimates. This will give the hardware community a clear vision of how such methods may be implemented both in data centers as well as on end portable devices.
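For readers unfamiliar with the operators mentioned above, the following NumPy sketch shows one way the k-bit quantizer Q(x, k), the power-of-two shift, and stochastic rounding could look. The exact definitions in the paper may differ, so treat this as an assumption-laden illustration rather than the authors' implementation.

import numpy as np

def Q(x, k):
    """Linear k-bit quantization onto a uniform grid inside (-1, 1)."""
    sigma = 2.0 ** (1 - k)                       # smallest positive step
    return np.clip(sigma * np.round(x / sigma), -1 + sigma, 1 - sigma)

def shift(x):
    """Round a positive scaling factor to the nearest power of two."""
    return 2.0 ** np.round(np.log2(x))

def stochastic_round(x, k):
    """Round to the k-bit grid, up or down with probability given by the remainder."""
    sigma = 2.0 ** (1 - k)
    scaled = x / sigma
    floor = np.floor(scaled)
    rounded = floor + (np.random.rand(*np.shape(x)) < (scaled - floor))
    return np.clip(sigma * rounded, -1 + sigma, 1 - sigma)

w = np.random.uniform(-1, 1, size=(4, 4))
print(Q(w, k=2))                                 # ternary-like weights, as in the 2-8-8-8 setting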
iclr_2018_ByUEelW0-
Long Short-Term Memory (LSTM) units have the ability to memorise and use long-term dependencies between inputs to generate predictions on time series data. We introduce the concept of modifying the cell state (memory) of LSTMs using rotation matrices parametrised by a new set of trainable weights. This addition shows significant increases of performance on some of the tasks from the bAbI dataset.
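The abstract does not spell out how the rotation matrices are parametrised. One plausible reading, sketched below purely as an assumption, applies learned planar (Givens) rotations to pairs of cell-state coordinates; such a transform is orthogonal by construction and therefore preserves the norm of the memory.

import numpy as np

def rotate_cell_state(c, thetas):
    """Apply a learned 2x2 rotation to each consecutive pair of cell-state units.

    c      : cell state, shape (hidden_size,), hidden_size even
    thetas : trainable angles, shape (hidden_size // 2,)
    """
    pairs = c.reshape(-1, 2)
    cos, sin = np.cos(thetas), np.sin(thetas)
    rotated = np.stack([cos * pairs[:, 0] - sin * pairs[:, 1],
                        sin * pairs[:, 0] + cos * pairs[:, 1]], axis=1)
    return rotated.reshape(-1)

c = np.random.randn(8)
thetas = 0.1 * np.random.randn(4)
c_rot = rotate_cell_state(c, thetas)
print(np.linalg.norm(c), np.linalg.norm(c_rot))   # norms match: rotation is norm-preserving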
The paper proposes to add a rotation operation in long short-term memory (LSTM) cells. It performs experiments on bAbI tasks and shows that the results are better than the simple baselines with original LSTM cells. There are a few problems with the paper. Firstly, the title and abstract discuss "modifying memories", but the content is only about a rotation operation. Perhaps the title should be "Rotation Operation in Long Short-Term Memory"? Secondly, the motivation of adding the rotation operation is not properly justified. What does it do that a usual LSTM cell could not learn? Does it reduce the excess representational power compared to the LSTM cell that could result in better models? Or does it increase its representational capacity so that some pattern is modeled in the new cell structure that was not possible before? This is not clear at all after reading the paper. Besides, the idea of using a rotation operation in recurrent networks has been explored before [3]. Finally, the task (bAbI) and baseline models (LSTM from a Keras tutorial) are too weak. There have been recent works that nearly solved the bAbI tasks to perfection (e.g., [1][2][4][5], and many others). The paper presented a solution that is weak compared to these recent results. In summary, the main idea of adding rotation to LSTM cells is not properly justified in the paper, and the results presented are quite weak for publication in ICLR 2018. [1] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus. End-to-end memory networks, NIPS 2015 [2] Caiming Xiong, Stephen Merity, Richard Socher. Dynamic Memory Networks for Visual and Textual Question Answering, ICML 2016 [3] Mikael Henaff, Arthur Szlam, Yann LeCun, Recurrent Orthogonal Networks and Long-Memory Tasks, ICML 2016 [4] Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio, Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes, ICLR 2017 [5] Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun, Tracking the World State with Recurrent Entity Networks, ICLR 2017
iclr_2018_SyZipzbCb
Published as a conference paper at ICLR 2018 DISTRIBUTED DISTRIBUTIONAL DETERMINISTIC POLICY GRADIENTS This work adopts the very successful distributional perspective on reinforcement learning and adapts it to the continuous control setting. We combine this within a distributed framework for off-policy learning in order to develop what we call the Distributed Distributional Deep Deterministic Policy Gradient algorithm, D4PG. We also combine this technique with a number of additional, simple improvements such as the use of N -step returns and prioritized experience replay. Experimentally we examine the contribution of each of these individual components, and show how they interact, as well as their combined contributions. Our results show that across a wide variety of simple control tasks, difficult manipulation tasks, and a set of hard obstacle-based locomotion tasks the D4PG algorithm achieves state of the art performance.
A DeepRL algorithm is presented that represents distributions over Q values, as applied to DDPG, and in conjunction with distributed evaluation across multiple actors, prioritized experience replay, and N-step look-aheads. The algorithm is called Distributed Distributional Deep Deterministic Policy Gradient algorithm, D4PG. SOTA results are generated for a number of challenging continuous domain learning problems, as compared to benchmarks that include DDPG and PPO, in terms of wall-clock time, and also (most often) in terms of sample efficiency. pros/cons + the paper provides a thorough investigation of the distributional approach, as applied to difficult continuous action problems, and in conjunction with a set of other improvements (with ablation tests) - the story is a bit mixed in terms of the benefits, as compared to the non-distributional approach, D3PG - it is not clear which of the baselines are covered in detail in the cited paper: "Anonymous. Distributed prioritized experience replay. In submission, 2017.", i.e., should readers assume that D3PG already exists and is attributable to this other submission? Overall, I believe that the community will find this to be interesting work. Is a video of the results available? It seems that the distributional model often does not make much of a difference, as compared to D3PG non-prioritized. However, sometimes it does make a big difference, i.e., 3D parkour; acrobot. Do the examples where it yields the largest payoff share a particular characteristic? The benefit of the distributional models is quite different between the 1-step and 5-step versions. Any ideas why? Occasionally, D4PG with N=1 fails very badly, e.g., fish, manipulator (bring ball), swimmer. Why would that be? Shouldn't it do at least as well as D3PG in general? How many atoms are used for the categorical representation? As many as [Bellemare et al.], i.e., 51? How much "resolution" is necessary here in order to gain most of the benefits of the distributional representation? As far as I understand, V_min and V_max are not the global values, but are specific to the current distribution. Hence the need for the projection. Is that correct? Would increasing the exploration noise result in a larger benefit for the distributional approach? Figure 2: DDPG performs surprisingly poorly in most examples. Any comments on this, or is DDPG best avoided in normal circumstances for continuous problems? :-) Is the humanoid stand so easy because of large (or unlimited) torque limits? The wall-clock times are for a cluster with K=32 cores for Figure 1? "we utilize a network architecture as specified in Figure 1 which processes the terrain info in order to reduce its dimensionality" Figure 1 provides no information about the reduced dimensionality of the terrain representation, unless I am somehow failing to see this. "the full critic architecture is completed by attaching a critic head as defined in Section A" I could find no further documentation in the paper with regard to the "head" or a separate critic for the "head". It is not clear to me why multiple critics are needed. Do you have an intuition as to why prioritized replay might be reducing performance in many cases?
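For context on the projection question above: in the categorical (C51-style) parametrisation referenced in the review, the value support is a fixed grid of atoms between V_min and V_max, and the N-step Bellman target has to be projected back onto that grid. Below is a hedged NumPy sketch of that standard projection; the atom count, value range, and discount are illustrative choices, not values taken from the paper.

import numpy as np

def project_categorical(probs, reward, discount, v_min=-150.0, v_max=150.0, n_atoms=51):
    """Project the distribution of reward + discount * Z onto the fixed atom support."""
    z = np.linspace(v_min, v_max, n_atoms)                 # fixed support
    dz = (v_max - v_min) / (n_atoms - 1)
    tz = np.clip(reward + discount * z, v_min, v_max)      # transformed atoms
    b = (tz - v_min) / dz                                  # fractional atom indices
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    projected = np.zeros(n_atoms)
    for j in range(n_atoms):
        if lower[j] == upper[j]:                           # landed exactly on an atom
            projected[lower[j]] += probs[j]
        else:                                              # split mass between the two neighbours
            projected[lower[j]] += probs[j] * (upper[j] - b[j])
            projected[upper[j]] += probs[j] * (b[j] - lower[j])
    return projected

probs = np.full(51, 1.0 / 51)                              # critic output for the next state-action
target = project_categorical(probs, reward=1.0, discount=0.99 ** 5)   # N-step target, N=5
print(target.sum())                                        # ~1.0: still a valid distribution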
iclr_2018_S1DWPP1A-
UNSUPERVISED LEARNING OF GOAL SPACES FOR INTRINSICALLY MOTIVATED GOAL EXPLORATION Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose to use deep representation learning algorithms to learn an adequate goal space. This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space. We present experiments where a simulated robot arm interacts with an object, and we show that exploration algorithms using such learned representations can match the performance obtained using engineered representations.
[Edit: After revisions, the authors have made a good-faith effort to improve the clarity and presentation of their paper: figures have been revised, key descriptions have been added, and (perhaps most critically) a couple of small sections outlining the contributions and significance of this work have been written. In light of these changes, I've updated my score.] Summary: The authors aim to overcome one of the central limitations of intrinsically motivated goal exploration algorithms by learning a representation without relying on a "designer" to manually specify the space of possible goals. This work is significant as it would allow one to learn a policy in complex environments even in the absence of a such a designer or even a clear notion of what would constitute a "good" distribution of goal states. However, even after multiple reads, much of the remainder of the paper remains unclear. Many important details, including the metrics by which the authors evaluate performance of their work, can only be found in the appendix; this makes the paper very difficult to follow. There are too many metrics and too few conclusions for this paper. The authors introduce a handful of metrics for evaluating the performance of their approach; I am unfamiliar with a couple of these metrics and there is not much exposition justifying their significance and inclusion in the paper. Furthermore, there are myriad plots showing the performance of the different algorithms, but very little explanation of the importance of the results. For instance, in the middle of page 9, it is noted that some of the techniques "yield almost as low performance as" the randomized baseline, yet no attempt is made to explain why this might be the case or what implications it has for the authors' approach. This problem pervades the paper: many metrics are introduced for how we might want to evaluate these techniques, yet there is no provided reason to prefer one over another (or even why we might want to prefer them over the classical techniques). Other comments: - There remain open questions about the quality of the MSE numbers; there are a number of instances in which the authors cite that the "Meta-Policy MSE is not a simple to interpret" (The remainder of this sentence is incomplete in the paper), yet little is done to further justify why it was used here, or why many of the deep representation techniques do not perform very well. - The authors do not list how many observations they are given before the deep representations are learned. Why is this? Additionally, is it possible that not enough data was provided? - The authors assert that 10 dimensions was chosen arbitrarily for the size of the latent space, but this seems like a hugely important choice of parameter. What would happen if a dimension of 2 were chosen? Would the performance of the deep representation models improve? Would their performance rival that of RGE-FI? - The authors should motivate the algorithm on page 6 in words before simply inserting it into the body of the text. It would improve the clarity of the paper. - The authors need to be clearer about their notation in a number of places. For instance, they use \gamma to represent the distribution of goals, yet it does not appear on page 7, in the experimental setup. - It is never explicitly mentioned exactly how the deep representation learning methods will be used. 
It is pretty clear to those who are familiar with the techniques that the latent space is what will be used, but a few equations would be instructive (and would make the paper more self-contained). In short, the paper has some interesting ideas, yet lacks a clear takeaway message. Instead, it contains a large number of metrics and computes them for a host of different possible variations of the proposed techniques, and does not include significant explanation for the results. Even given my lack of expertise in this subject, the paper has some clear flaws that need addressing. Pros: - A clear, well-written abstract and introduction - While I am not experienced enough in the field to really comment on the originality, it does seem that the approach the authors have taken is original, and applies deep learning techniques to avoid having to custom-design a "feature space" for their particular family of problems. Cons: - The figure captions are all very "matter-of-fact" and, while they explain what each figure shows, provide no explanation of the results. The figure captions should be as self-contained as possible (I should be able to understand the figures and the implications of the results from the captions alone). - There is not much significance in the current form of the paper, owing to the lack of clear message. While the overarching problem is potentially interesting, the authors seem to make very little effort to draw conclusions from their results. I.e. it is difficult for me to easily visualize all of the "moving parts" of this work: a figure showing the relationship bet - Too many individual ideas are presented in the paper, hurting clarity. As a result, the paper feels scattered. The authors do not have a clear message that neatly ties the results together.
iclr_2018_SkFAWax0-
VOICELOOP: VOICE FITTING AND SYNTHESIS VIA A PHONOLOGICAL LOOP We present a new neural text to speech (TTS) method that is able to transform text to speech in voices that are sampled in the wild. Unlike other systems, our solution is able to deal with unconstrained voice samples and without requiring aligned phonemes or linguistic features. The network architecture is simpler than those in the existing literature and is based on a novel shifting buffer working memory. The same buffer is used for estimating the attention, computing the output audio, and for updating the buffer itself. The input sentence is encoded using a context-free lookup table that contains one entry per character or phoneme. The speakers are similarly represented by a short vector that can also be fitted to new identities, even with only a few samples. Variability in the generated speech is achieved by priming the buffer prior to generating the audio. Experimental results on several datasets demonstrate convincing capabilities, making TTS accessible to a wider range of applications. In order to promote reproducibility, we release our source code and models.
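A hedged sketch of the shifting-buffer idea: at every output step a new vector, computed from the attention context, the speaker embedding, and the current buffer contents, is pushed into the newest slot while the oldest slot is dropped. The tanh update network below is a stand-in; the actual shallow networks and the attention/output heads that read the same buffer are described in the paper.

import numpy as np

def buffer_step(S, context, speaker, W_u):
    """One update of a shifting working-memory buffer.

    S       : buffer, shape (d, k) -- k slots of dimension d, newest slot first
    context : attention-weighted sum of the encoded input sequence, shape (d_c,)
    speaker : fixed-size speaker embedding, shape (d_s,)
    W_u     : weights of a shallow update network (stand-in)
    """
    inp = np.concatenate([context, speaker, S.reshape(-1)])
    u = np.tanh(W_u @ inp)                                  # new buffer vector, shape (d,)
    return np.concatenate([u[:, None], S[:, :-1]], axis=1)  # shift: drop the oldest slot

d, k, d_c, d_s = 16, 10, 8, 4
S = np.zeros((d, k))
W_u = 0.05 * np.random.randn(d, d_c + d_s + d * k)
S = buffer_step(S, context=np.random.randn(d_c), speaker=np.random.randn(d_s), W_u=W_u)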
This paper presents the application of the memory buffer concept to speech synthesis, and additionally learns a "speaker vector" that makes the system adaptive and work reasonably well on "in-the-wild" speech data. This is a relevant problem, and a novel solution, but synthesis is a wicked problem to evaluate, so I am not sure if ICLR is the best venue for this paper. I see two competing goals: - If the focus is on showing that the presented approach outperforms other approaches under given conditions, a different task would be better (for example recognition, or some sort of trajectory reconstruction) - If the focus is on showing that the system outperforms other synthesis systems, then a speech oriented venue might be best (and it is unfortunate that optimized hyper-parameters for the other systems are not available for a fair comparison) - If fair comparisons with the other approaches cannot be made, my sense is that the multi-speaker (post-training fitting) option is really the most interesting and novel contribution here, which could be discussed in more detail. Still, the approach is creative and interesting and deserves to be presented. I have a few questions/suggestions: Introduction - The link to Baddeley's "phonological loop" concept seems weak at best. There is nothing phonological about the features that this model stores and retrieves, and no evidence that the model behaves in a way consistent with "phonological" (or articulatory) assumptions or models - maybe best to avoid distracting the reader with this concept and strengthen the speaker adaptation aspect? - The memory model is not an RNN, but it is a recurrently called structure (as the name "phonological loop" also implies) - so I would also not highlight this point much - Why would the four properties of the proposed method (mid of p. 2, end of introduction: memory buffer, shared memory, shallow fully connected networks, and simple reader mechanism) lead to better robustness and improve performance on noisy and limited training data? Maybe the proposed approach works better for any speech synthesis task? Why specifically for "in-the-wild" data? The results in Table 2 show that the proposed system outperforms other systems on Blizzard 2013, but not Blizzard 2011 - does this support the previous argument? - Why not also evaluate MCD scores? This should be a quick and automatic way to diagnose what the system is doing? Or is this not meaningful with the noisy training data? Previous work - Please introduce abbreviations the first time they are used ("CBHG" for example) - There is other work on using "in-the-wild" speech as well: Pallavi Baljekar and Alan W Black. Utterance Selection Techniques for TTS Systems using Found Speech, SSW 2016, Sunnyvale, USA Sept 2016 The architecture - Please explain the "GMM" (Gaussian Mixture Model?) attention mechanism in a bit more detail, how does back-propagation work in this case? - Why was this approach chosen? Does it promise to be robust or good for low data situations specifically? - The fonts in Figure 2 are very small, please make them bigger, and the Figure may not print well in b/w. Why does the mean of the absolute weights go up for high buffer positions? Is there some "leaking" from even longer contexts? - I don't understand "However, human speech is not deterministic and one cannot expect [...] truth". You are saying that the model cannot be expected to reproduce the input exactly? 
Or does this apply only to the temporal distribution of the sequence (but not the spectral characteristics)? The previous sentence implies that it does. And how does teacher-forcing help in this case? - what type of speed is "x5"? Five times slower or faster than real-time? Experiments - Table 2: maybe mention how these results were computed, i.e. which systems use optimized hyper parameters, and which don't? How do these results support the interpretation of the results in the introduction regarding in-the-wild data and found data? - I am not sure how to read Figure 4. Maybe it would be easier to plot the different phone sequences against each other and show how the timings are off, i.e. plot the time of the center of panel one vs the time of the center of panel 2 for the corresponding phone, and show how this is different from a straight line. Or maybe plot phones as rectangles that get deformed from square shape as durations get learned? - Figure 5: maybe provide spectrograms and add pitch contours to better show the effect of the different intonations? - Figure 4 uses a lot of space, could be reduced, if needed Discussion - I think the first claim is a bit too broad - nowhere is it shown that the method is inherently more robust to clapping and laughs, and variable prosody. The authors will know the relevant data-sets better than I do, maybe they can simply extend the discussion to show that this is what happens. - Efficiency: I think Wavenet has also gotten much faster and runs in less than real-time now - can you expand that discussion a bit, or maybe give estimates in terms of FLOPS required, rather than anecdotal evidence for systems that may or may not be comparable? Conclusion - Now the advantage of the proposed model is with the number of parameters, rather than the computation required. Can you clarify? Are your models smaller than competing models?
iclr_2018_rkZB1XbRZ
SCALABLE PRIVATE LEARNING WITH PATE The rapid adoption of machine learning has increased concerns about the privacy implications of machine learning models trained on sensitive data, such as medical records or other personal information. To address those concerns, one promising approach is Private Aggregation of Teacher Ensembles, or PATE, which transfers to a "student" model the knowledge of an ensemble of "teacher" models, with intuitive privacy provided by training teachers on disjoint data and strong privacy guaranteed by noisy aggregation of teachers' answers. However, PATE has so far been evaluated only on simple classification tasks like MNIST, leaving unclear its utility when applied to larger-scale learning tasks and real-world datasets. In this work, we show how PATE can scale to learning tasks with large numbers of output classes and uncurated, imbalanced training data with errors. For this, we introduce new noisy aggregation mechanisms for teacher ensembles that are more selective and add less noise, and prove their tighter differential-privacy guarantees. Our new mechanisms build on two insights: the chance of teacher consensus is increased by using more concentrated noise and, lacking consensus, no answer need be given to a student. The consensus answers used are more likely to be correct, offer better intuitive privacy, and incur lower-differential privacy cost. Our evaluation shows our mechanisms improve on the original PATE on all measures, and scale to larger tasks with both high utility and very strong privacy (ε < 1.0).
This paper considers the problem of private learning and uses the PATE framework to achieve differential privacy. The dataset is partitioned and multiple learning algorithms produce so-called teacher classifiers. The labels produced by the teachers are aggregated in a differentially private manner and the aggregated labels are then used to train a student classifier, which forms the final output. The novelty of this work is a refined aggregation process, which is improved in three ways: a) Gaussian instead of Laplace noise is used to achieve differential privacy. b) Queries to the aggregator are "filtered" so that the limited privacy budget is only expended on queries where the teachers are confident and the student is uncertain or wrong. c) A data-dependent privacy analysis is used to attain sharper bounds on the privacy loss with each query. I think this is a nice modular framework for private learning, with significant refinements relative to previous work that make the algorithm more practical. On this basis, I think the paper should be accepted. However, I think some clarification is needed with regard to item c above: Theorem 2 gives a data-dependent privacy guarantee. That is, if there is one label backed by a clear majority of teachers, then the privacy loss (as measured by Renyi divergence) is low. This data-dependent privacy guarantee is likely to be much tighter than the data-independent guarantee. However, since the privacy guarantee now depends on the data, it is itself sensitive information. How is this issue resolved? If the final privacy guarantee is data-dependent, then this is very different to the way differential privacy is usually applied. This would resemble the "privacy odometer" setting of Rogers-Roth-Ullman-Vadhan [ https://arxiv.org/abs/1605.08294 ]. Another way to resolve this would be to have an output-dependent privacy guarantee. That is, the privacy guarantee would depend only on public information, rather than the private data. The widely-used "sparse vector" technique [ http://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf#page=59 ] does this. In any case, this is an important issue that needs to be clarified, as it is not clear to me how this is resolved. The algorithm in this work is similar to the so-called median mechanism [ https://www.cis.upenn.edu/~aaroth/Papers/onlineprivacy.pdf ] and private multiplicative weights [ http://mrtz.org/papers/HR10mult.pdf ]. These works also involve a "student" being trained using sensitive data with queries being answered in a differentially private manner. And, in particular, these works also filter out uninformative queries using the sparse vector technique. It would be helpful to add a comparison.
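For concreteness, item b) above combined with Gaussian noise can be sketched as follows. The threshold and the two noise scales are illustrative assumptions, not values from the paper, and the sketch omits the check of the student's current prediction.

import numpy as np

def confident_gaussian_aggregate(votes, threshold=180.0, sigma_select=50.0, sigma_answer=20.0,
                                 rng=np.random):
    """Noisily answer a query only when the teachers largely agree.

    votes : histogram of teacher votes over classes, shape (n_classes,)
    Returns a class index, or None when the query is rejected.
    """
    # Selection step: noisy check that the top vote count clears a threshold.
    if votes.max() + rng.normal(0.0, sigma_select) < threshold:
        return None
    # Answer step: report the argmax of the noised vote histogram.
    return int(np.argmax(votes + rng.normal(0.0, sigma_answer, size=votes.shape)))

votes = np.zeros(10)
votes[3], votes[5] = 230, 20                  # 250 teachers with strong consensus on class 3
print(confident_gaussian_aggregate(votes))    # usually 3; occasionally None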
iclr_2018_SknC0bW0-
While Bayesian optimization (BO) has achieved great success in optimizing expensive-to-evaluate black-box functions, especially tuning hyperparameters of neural networks, methods such as random search (Li et al., 2016) and multifidelity BO (e.g. Klein et al. (2017)) that exploit cheap approximations, e.g. training on a smaller training data or with fewer iterations, can outperform standard BO approaches that use only full-fidelity observations. In this paper, we propose a novel Bayesian optimization algorithm, the continuous-fidelity knowledge gradient (cfKG) method, that can be used when fidelity is controlled by one or more continuous settings such as training data size and the number of training iterations. cfKG characterizes the value of the information gained by sampling a point at a given fidelity, choosing to sample at the point and fidelity with the largest value per unit cost. Furthermore, cfKG can be generalized, following Wu et al. (2017), to settings where derivatives are available in the optimization process, e.g. large-scale kernel learning, and where more than one point can be evaluated simultaneously. Numerical experiments show that cfKG outperforms state-of-art algorithms when optimizing synthetic functions, tuning convolutional neural networks (CNNs) on CIFAR-10 and SVHN, and in large-scale kernel learning.
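Written out, the selection rule described above maximises the expected gain in the full-fidelity optimum per unit of evaluation cost. A hedged LaTeX sketch of this acquisition, using our own notation (posterior mean \mu_n after n evaluations, full fidelity denoted \mathbf{1}_m), which may differ from the paper's:

\mathrm{cfKG}_n(x, s) =
  \frac{\min_{x'} \mu_n(x', \mathbf{1}_m)
        - \mathbb{E}_n\!\left[\min_{x'} \mu_{n+1}(x', \mathbf{1}_m) \,\middle|\, x^{n+1} = x,\ s^{n+1} = s\right]}
       {\mathrm{cost}(x, s)},
\qquad
(x^{n+1}, s^{n+1}) \in \operatorname*{arg\,max}_{x,\, s} \ \mathrm{cfKG}_n(x, s).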
Many black-box optimization problems are "multi-fidelity", in which it is possible to acquire data with different levels of cost and associated uncertainty. The training of machine learning models is a common example, in which more data and/or more training may lead to more precise measurements of the quality of a hyperparameter configuration. This has previously been referred to as a special case of "multi-task" Bayesian optimization, in which the tasks can be constructed to reflect different fidelities. The present paper examines this construction with three twists: using the knowledge gradient acquisition function, using batched function evaluations, and incorporating derivative observations. Broadly speaking, the idea is to allow fidelity to be represented as a point in a hypercube and then include this hypercube as a covariate in the Gaussian process. The knowledge gradient acquisition function then becomes "knowledge gradient per unit cost" the KG equivalent to the "expected improvement per unit cost" discussed in Snoek et al (2012), although that paper did not consider treating fidelity separately. I don't understand the claim that this is "the first multi-fidelity algorithm that can leverage gradients". Can't any Gaussian process model use gradient observations trivially, as discussed in the Rasmussen and Williams book? Why can't any EI or entropy search method also use gradient observations? This doesn't usually come up in hyperparameter optimization, but it seems like a grandiose claim. Similarly, although I don't know of a paper that explicitly does "A + B" for multi-fidelity BO and parallel BO, it is an incremental contribution to combine them, not least because no other parallel BO methods get evaluated as baselines. Figure 1 does not make sense to me. How can the batched algorithm outperform the sequential algorithm on total cost? The sequential cfKG algorithm should always be able to make better decisions with its remaining budget than 8-cfKG. Is the answer that "cost" here means "wall-clock time when parallelism is available"? If that's the case, then it is necessary to include plots of parallelized EI, entropy search, and KG. The same is true for Figure 2; other parallel BO algorithms need to appear.
iclr_2018_HyRVBzap-
CASCADE ADVERSARIAL MACHINE LEARNING REGULARIZED WITH A UNIFIED EMBEDDING Injecting adversarial examples during training, known as adversarial training, can improve robustness against one-step attacks, but not for unknown iterative attacks. To address this challenge, we first show iteratively generated adversarial images easily transfer between networks trained with the same strategy. Inspired by this observation, we propose cascade adversarial training, which transfers the knowledge of the end results of adversarial training. We train a network from scratch by injecting iteratively generated adversarial images crafted from already defended networks in addition to one-step adversarial images from the network being trained. We also propose to utilize embedding space for both classification and low-level (pixel-level) similarity learning to ignore unknown pixel level perturbation. During training, we inject adversarial images without replacing their corresponding clean images and penalize the distance between the two embeddings (clean and adversarial). Experimental results show that cascade adversarial training together with our proposed low-level similarity learning efficiently enhances the robustness against iterative attacks, but at the expense of decreased robustness against one-step attacks. We show that combining those two techniques can also improve robustness under the worst case black box attack scenario.
The paper presents a novel adversarial training setup, based on a distance-based loss on the feature embedding. + novel loss + good experimental evaluation + better performance - way too long - structure could be improved - pivot loss seems hacky The distance based loss is novel, and significantly different from prior work. It seems to perform well in practice as shown in the experimental section. The experimental section is extensive, and offers new insights into both the presented algorithm and baselines. Judging the content of the paper alone, it should be accepted. However, the exposition needs significant improvements to warrant acceptance. First, the paper is way too long and unfocused. The recommended length is 8 pages + 1 page for citations. This paper is 12+1 pages long, plus a 5 page supplement. I'd highly recommend the authors cut a third of their text; it would help focus the paper on the actual message: pushing their new algorithm. Try to remove any sentence or word that doesn't serve a purpose (help sell the algorithm). The structure of the paper could also be improved. For example the cascade adversarial training is buried deep inside the experimental section. Considering that it is part of the title, I would have expected a proper exposition of the idea in the technical section (before any results are presented). While condensing the paper, consider presenting all technical material before evaluation. Finally, the pivot "loss" seems a bit hacky. First, the pivot objective and bidirectional loss are exactly the same thing. While the bidirectional loss is a proper loss and optimized as such (by optimizing both E^adv and E), the pivot objective is not a loss function, as it does not correspond to any function any optimization algorithm could minimize. I'd recommend just removing the pivot objective, or at least not calling it a loss. In summary, the results and presented method are good, and eventually deserve publication. However the exposition needs to significantly improve for the paper to be ready for ICLR.
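To make the loss under discussion concrete, here is a minimal NumPy sketch of the kind of objective the review refers to: classification terms on the clean and adversarial inputs plus a penalty on the distance between their embeddings (the bidirectional variant, where gradients flow into both embeddings). The weighting and the squared Euclidean distance are assumptions for illustration.

import numpy as np

def softmax_cross_entropy(logits, label):
    logits = logits - logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]

def cascade_embedding_loss(logits_clean, logits_adv, emb_clean, emb_adv, label, lam=0.01):
    """Classification loss on clean + adversarial inputs, plus an embedding-distance term."""
    ce = softmax_cross_entropy(logits_clean, label) + softmax_cross_entropy(logits_adv, label)
    sim = np.sum((emb_clean - emb_adv) ** 2)   # bidirectional: both embeddings are pulled together
    return ce + lam * sim

logits_clean, logits_adv = np.random.randn(10), np.random.randn(10)
emb_clean, emb_adv = np.random.randn(64), np.random.randn(64)
print(cascade_embedding_loss(logits_clean, logits_adv, emb_clean, emb_adv, label=3))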
iclr_2018_H1U_af-0-
We consider the problem of improving kernel approximation via randomized feature maps. These maps arise as Monte Carlo approximation to integral representations of kernel functions and scale up kernel methods for larger datasets. We propose to use more efficient numerical integration technique to obtain better estimates of the integrals compared to the state-of-the-art methods. Our approach allows the use of information about the integrand to enhance approximation and facilitates fast computations. We derive the convergence behavior and conduct an extensive empirical study that supports our hypothesis.
The authors offer a novel version of the random feature map approach to approximately solving large-scale kernel problems: each feature map evaluates the "Fourier feature" corresponding to the kernel at a set of randomly sampled quadrature points. This gives an unbiased kernel estimator; they prove a bound on its variance and provide experimental evidence that for Gaussian and arc-cos kernels, their suggested quadrature rule outperforms previous random feature maps in terms of kernel approximation error and in terms of downstream classification and regression tasks. The idea is straightforward, the analysis seems correct, and the experiments suggest the method has superior accuracy compared to prior RFMs for shift-invariant kernels. The work is original, but I would say incremental, and the relevant literature is cited. The method seems to give significantly lower kernel approximation errors, but the significance of the performance difference in downstream ML tasks is unclear --- the confidence intervals of the different methods overlap sufficiently to make it questionable whether the relative complexity of this method is worth the effort. Since good performance on downstream tasks is the crucial feature that we want RFMs to have, it is not clear that this method represents a true improvement over the state-of-the-art. The exposition of the quadrature method is difficult to follow, and the connection between the quadrature rules and the random feature map is never explicitly stated: e.g. equation 6 says how the kernel function is approximated as an integral, but does not give the feature map that an ML practitioner should use to get that approximate integral. It would have been a good idea to include figures showing the time-accuracy tradeoff of the various methods, which is more important in large-scale ML applications than the kernel approximation error. It is not clear that the method is *not* more expensive in practice than previous methods (Table 1 gives superior asymptotic runtimes, but I would like to see actual run times, as evaluating the feature maps sounds relatively complicated compared to other RFMs). On a related note, I would also like to have seen this method applied to kernels where the probability density in the Bochner integral was not the Gaussian density (e.g., the Laplacian kernel): the authors suggested that their method works there as well when one uses a Gaussian approximation of the density (which is not clear to me) --- and it may be the case that sampling from their quadrature distribution is faster than sampling from the original non-Gaussian density.
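Since the review notes that the explicit feature map is never written down, here is the generic form such a map takes for a shift-invariant kernel: points w_i with non-negative weights a_i define paired, weighted cosine and sine features whose inner product reproduces the quadrature (or Monte Carlo) estimate of the Bochner integral. The NumPy sketch below uses plain Monte Carlo points and equal weights for the Gaussian kernel; an actual quadrature rule would supply the points and weights differently.

import numpy as np

def fourier_feature_map(X, W, a):
    """Weighted Fourier features: k(x, y) ~= phi(x) @ phi(y).

    X : data, shape (n, d);  W : integration points, shape (D, d);  a : weights, shape (D,)
    """
    proj = X @ W.T                                   # shape (n, D)
    scale = np.sqrt(a)[None, :]
    return np.hstack([scale * np.cos(proj), scale * np.sin(proj)])

n, d, D = 100, 5, 2000
X = np.random.randn(n, d)
W = np.random.randn(D, d)                            # samples from the Gaussian spectral density
a = np.full(D, 1.0 / D)                              # equal weights = plain Monte Carlo
Phi = fourier_feature_map(X, W, a)
K_approx = Phi @ Phi.T
K_exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
print(np.abs(K_approx - K_exact).max())              # shrinks as D grows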
iclr_2018_rk4Fz2e0b
We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs. GPNNs alternate between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. To efficiently partition graphs, we experiment with spectral partitioning and also propose a modified multi-seed flood fill for fast processing of large scale graphs. We extensively test our model on a variety of semi-supervised node classification tasks. Experimental results indicate that GPNNs are either superior or comparable to state-of-the-art methods on a wide variety of datasets for graph-based semi-supervised classification. We also show that GPNNs can achieve similar performance as standard GNNs with fewer propagation steps.
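For reference, the alternating schedule described above can be sketched as follows, with a simple neighbour-averaging rule standing in for the learned propagation step (the actual GPNN uses learned recurrent node updates):

import numpy as np

def propagate(h, edges, steps, edge_filter):
    """Toy propagation restricted to a subset of edges; the averaging update is a placeholder."""
    n = h.shape[0]
    A = np.zeros((n, n))
    for u, v in edges:
        if edge_filter(u, v):
            A[u, v] = A[v, u] = 1.0
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    for _ in range(steps):
        h = 0.5 * h + 0.5 * (A @ h) / deg
    return h

def partition_schedule(h, edges, part, rounds=3, intra_steps=5, inter_steps=1):
    for _ in range(rounds):
        h = propagate(h, edges, intra_steps, lambda u, v: part[u] == part[v])  # within subgraphs
        h = propagate(h, edges, inter_steps, lambda u, v: part[u] != part[v])  # across the cut
    return h

h = np.random.randn(6, 4)                            # 6 nodes, 4-dimensional states
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]     # edge (2, 3) crosses the partition cut
part = np.array([0, 0, 0, 1, 1, 1])
h = partition_schedule(h, edges, part)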
The authors investigate different message passing schedules for GNN learning. Their proposed approach is to partition the graph into disjoint subregions, pass many messages on the sub regions and pass fewer messages between regions (an approach that is already considered in related literature, e.g., the BP literature), with the goal of minimizing the number of messages that need to be passed to convey information between all pairs of nodes in the network. Experimentally, the proposed approach seems to perform comparably to existing methods (or slightly worse on average in some settings). The paper is well-written and easy to read. My primary concern is with novelty. Many similar ideas have been floating around in a variety of different message-passing communities. With no theoretical reason to prefer the proposed approach, it seems like it may be of limited interest to the community if speed is its only benefit (see detailed comments below). Specific comments: 1) "When information from any one node has reached all other nodes in the graph for the first time, this problem is considered as solved." Perhaps it is my misunderstanding of the way in which GNNs work, but isn't the objective actually to reach a set of fixed point equations. If so, then simply propagating information from one side of the graph may not be sufficient. 2) The experimental results in Section 4.4 are almost impossible to interpret. Perhaps it is better to plot number of edges updated versus accuracy? This at least would put them on equal footing. In addition, the experiments that use randomness should be repeated and plotted on average (just in case you happened to pick a bad schedule). 3) More generally, why not consider random schedules (i.e., just pick a random edge, update, repeat) or random partitions? I'm not certain that a fixed set will perform best independent of the types of updates being considered, and random schedules, like the fully synchronous case for an important baseline (especially if update speed is all you care about). Typos: -pg. 6, "Thm. 2" -> "Table 2"
iclr_2018_Byj54-bAW
Several state of the art convolutional networks rely on inter-connecting different layers to ease the flow of information and gradient between their input and output layers. These techniques have enabled practitioners to successfully train deep convolutional networks with hundreds of layers. Particularly, a novel way of interconnecting layers was introduced as the Dense Convolutional Network (DenseNet) and has achieved state of the art performance on relevant image recognition tasks. Despite their notable empirical success, their theoretical understanding is still limited. In this work, we address this problem by analyzing the effect of layer interconnection on the overall expressive power of a convolutional network. In particular, the connections used in DenseNet are compared with other types of inter-layer connectivity. We carry out a tensor analysis on the expressive power inter-connections on convolutional arithmetic circuits (ConvACs) and relate our results to standard convolutional networks. The analysis leads to performance bounds and practical guidelines for design of ConvACs. The generalization of these results are discussed for other kinds of convolutional networks via generalized tensor decompositions.
SUMMARY Traditional convolutional neural networks consist of a sequence of information processing layers. However, one can relax this sequential design constraint so that higher layers receive inputs from one, some, or all preceding layers. This modification allows information to travel more freely throughout the network and has been shown to improve performance, e.g., in image recognition tasks. However, it is not clear whether this change in architecture truly increases representational capacity or it merely facilitates network training. In this paper, the authors present a theoretical analysis of the gain in representational capacity induced by additional inter-layer connections. The authors restrict their analysis to convolutional arithmetic circuits (ConvACs), a class of networks whose representational capacity has been studied previously. An important property of ConvACs is that the network mapping can be recast as a homogeneous polynomial over the input, with coefficients stored in a "grid tensor" $\mathcal{A}^y$. The grid tensor itself is a function of the hidden weight vectors $\mathbf{a}^{z,i}$. The authors first extend ConvACs to accommodate "dense" inter-layer connections and describe how adding dense connections affects the grid tensor. This analysis gives a potentially useful perspective for understanding the mappings that densely connected ConvACs compute. The authors' main results (Theorems 5.1-5.3) analyze the "dense gain" of a densely connected ConvAC. This quantity roughly captures how much wider a standard ConvAC would need to be in order to represent the network mapping of a generic densely connected ConvAC. This is in a way a measure of the added representational power obtained from dense connections. The authors give upper bounds on this quantity, but also produce a case in which the upper bound is achieved. Importantly, the upper bounds are inversely proportional to a parameter $\lambda \leq 1$ controlling the rate at which hidden layer widths decay with increasing depth. The implication is that indeed densely connected ConvACs can have greater representational capacity, however the gain is limited to the case where hidden layers shrink exponentially with increasing depth. These results are partly unsurprising, since densely connected ConvACs contain more trainable parameters than standard ConvACs. In Proposition 3, the authors give some criteria for evaluating when it is nonetheless worthwhile to add dense connections to a ConvAC. COMMENTS (1.) The authors address an interesting and important problem: explaining the empirical success of densely connected CNNs such as ResNets & DenseNets, relative to standard CNNs. The tensor algebra machinery built around ConvACs is impressive and seems to generate sound insights. However, I feel the current presentation fails to provide adequate intuition and interpretation of the results. Moreover, there is no overarching narrative linking the formal results together. This makes it difficult for the reader to grasp the main ideas and significance of the work without diving into all the details. For example: - In Proposition 1, the authors comment that including a dense connection increases the rank of the grid tensor for a shallow densely connected convAC. However, the significance of grid tensor rank is not discussed. - In Proposition 2, the authors do not explain why it is important that the added term $g(\mathbf{X})$ contains only polynomial terms of strictly smaller degree. 
It is not clear how Propositions 1 & 2 relate to the main Theorems 5.1-5.3. Is the characterization of the grid tensor in Proposition 1 used to obtain the bounds in the later Theorems? - In Section 5, the authors introduce a parameter $\lambda \leq 1$ controlling how the widths of the hidden layers decay with increasing depth. This parameter seems central to the following bounds on dense gain, yet the authors do not motivate it, and there is no discussion of decaying hidden layer widths in previous sections. - The practical significance of Proposition 3 is not sufficiently well explained. First, it is not clear how to use this result if all we have is an upper bound for $G_w$, as given by Theorems 5.1-5.2. It seems we would need a lower bound to be able to conclude that the ratio $\Delta P_{stand}/ \Delta P_{dense}$ is large. Second, it would be helpful if the authors commented on the implication for the special case $k=1$ and $r \leq (1/1+\lambda) \sqrt{M}$, where the dense gain is known. (2.) Moreover, because the authors choose not to sketch the main proof ideas, it is difficult to identify the key novel insights, and how the special structure of densely connected ConvACs factors into the analysis. After studying the proofs in some detail, I have some specific concerns outlined below, which diminish the significance of the results and raise some doubts about soundness. - In Theorem 5.1, the authors upper bound the dense gain by showing that arbitrary $(L, r, \lambda, k)$ dense ConvACs can be represented as standard $(L, r^\prime, \lambda, 0)$ ConvACs of sufficient width $r^\prime \geq G_w r$. The mechanism of the proof is to relate the grid tensor ranks of dense and standard ConvACs. However, a worst case bound on the grid tensor rank of a dense ConvAC is used, which does not seem to rely on the formulation of dense ConvACs. Thus, this result does not tell us anything in particular about dense ConvACs, but rather is a general result relating the expressive capacity of arbitrary depth-$L$ ConvACs and $(L, r^\prime, \lambda, 0)$ ConvACs with decaying widths. - Central to Theorem 5.2 is the observation that a densely connected ConvAC can be viewed as a standard ConvAC, only with "virtually enlarged" hidden layers (of width $\tilde{r}_\ell = (1 + 1/\lambda)r_\ell$ for $k=1$, using the notation of the paper), and blocks of weights fixed to represent the identity mapping. This is a relatively simple idea, and one that seems to hold for general architectures. Thus, I believe Theorem 5.2 can be shown more simply and in greater generality, and without use of the tensor algebra machinery. - There is some intuitive inconsistency in Theorem 5.3 which I would like some help resolving. We have seen that dense ConvACs can be viewed as standard ConvACs with larger hidden layers and some weights fixed. Effectively, the proof of Theorem 5.3 argues for a regime on $r, \lambda, M$ where this induced ConvAC uses its full representational capacity. This is surprising to me however, as I would have guessed that having some weights fixed makes this impossible. It would be very helpful if the authors could weigh in on this confusion. Perhaps there is an issue with the application of Lemmas 2 & 3 in the proof of Theorem 5.3. In Lemmas 2 & 3, we assume the tensors $\mathcal{A}$ and $\mathcal{B}$ are random. These Lemmas are applied in the proof of Theorem 5.3 to tensors $\phi^{\alpha, j, \gamma}$ appearing in the construction of the dense ConvAC grid tensor. 
However, the $\phi^{\alpha, j, \gamma}$ tensors do not seem completely random, as there are blocks of fixed weights. Can the authors please clarify how the randomness assumption is satisfied? (3.) Lastly, I am concerned that the authors do not at least sketch how to generalize these results to architectures of more practical interest. As the authors point out, there is previous work generalizing theoretical results for ConvACs to convolutional rectifier networks. The authors should discuss whether a similar strategy might apply in this case.
iclr_2018_BkoXnkWAb
We propose a simple extension to the ReLU-family of activation functions that allows them to shift the mean activation across a layer towards zero. Combined with proper weight initialization, this alleviates the need for normalization layers. We explore the training of deep vanilla recurrent neural networks (RNNs) with up to 144 layers, and show that bipolar activation functions help learning in this setting. On the Penn Treebank and Text8 language modeling tasks we obtain competitive results, improving on the best reported results for non-gated networks. In experiments with convolutional neural networks without batch normalization, we find that bipolar activations produce a faster drop in training error, and result in a lower test error on the CIFAR-10 classification task.
This paper proposes a self-normalizing bipolar extension of the ReLU activation family. For every second neuron, the authors propose to preserve the negative inputs. Such an activation function shifts the mean of i.i.d. variables to zero in the case of ReLU, or to a given saturation value in the case of ELU. Combined with a variance-preserving initialization scheme, the authors empirically observe that bipolar ReLU better preserves the mean and variance of the activations through training than regular ReLU does for a deep stacked RNN. The authors evaluate their bipolar activation on PTB and Text8 using a deep stacked RNN. They show that bipolar activations allow training deeper RNNs (up to some limit) and lead to better generalization performance compared to the ReLU/ELU activation functions. They also show that they can train a deep residual network architecture on CIFAR without the use of BN. Questions: - Which layer's mean and variance are reported in Figure 2? What is the difference between the left and right plots? - In Table 1, we observe that ReLU-RNN (and BELU-RNN for very deep stacked RNNs) leads to worse validation performance. It would be nice to report the training loss to see if this is an optimization or a generalization problem. - How does bipolar activation compare to a model trained with BN on CIFAR-10? - Did you try bipolar activation functions for gated recurrent neural networks such as LSTM or GRU? - As stated in the text, BELU-RNN outperforms BN-LSTM on PTB. However, BN-LSTM outperforms BELU-RNN on Text8. Do you know why the trend is not consistent across datasets? - Clarity/Quality: The paper is well written and pleasant to read. - Originality: Self-normalizing functions have also been explored in scaled ELU; however, the application of self-normalizing functions to RNNs seems novel. - Significance: Activation functions are still a very active research topic, and self-normalizing functions could potentially be impactful for RNNs given that normalization approaches (batch norm, layer norm) add a significant computational cost. In this paper, bipolar activations are used to train very deep stacked RNNs. However, the stacked RNNs with bipolar activations are not competitive with other recurrent architectures. It is not clear what the advantages of deep stacked RNNs are in that context.
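To check my understanding of the bipolar construction, here is a minimal NumPy sketch of how I read it: apply the ordinary activation to half the units and the mirrored activation to the other half. The even/odd interleaving is my own convention, not necessarily the authors' implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def bipolar(f, x):
    """Apply f to even-indexed units and the mirrored -f(-x) to odd-indexed units.

    x is assumed to have shape (batch, units); the indexing convention is a guess.
    """
    out = np.empty_like(x)
    out[:, 0::2] = f(x[:, 0::2])      # standard activation on half the units
    out[:, 1::2] = -f(-x[:, 1::2])    # mirrored activation preserves negative mass
    return out

# Toy check: for zero-mean inputs, the bipolar ReLU output mean stays near zero,
# while the plain ReLU output mean is pushed positive.
x = np.random.randn(10000, 64)
print(relu(x).mean(), bipolar(relu, x).mean())
```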
iclr_2018_r1tJKuyRZ
We propose the set autoencoder, a model for unsupervised representation learning for sets of elements. It is closely related to sequence-to-sequence models, which learn fixed-sized latent representations for sequences, and have been applied to a number of challenging supervised sequence tasks such as machine translation, as well as unsupervised representation learning for sequences. In contrast to sequences, sets are permutation invariant. The proposed set autoencoder considers this fact, both with respect to the input as well as the output of the model. On the input side, we adapt a recently-introduced recurrent neural architecture using a content-based attention mechanism. On the output side, we use a stable marriage algorithm to align predictions to labels in the learning phase. We train the model on synthetic data sets of point clouds and show that the learned representations change smoothly with translations in the inputs, preserve distances in the inputs, and that the set size is represented directly. We apply the model to supervised tasks on the point clouds using the fixed-size latent representation. For a number of difficult classification problems, the results are better than those of a model that does not consider the permutation invariance. Especially for small training sets, the set-aware model benefits from unsupervised pretraining.
Summary This paper proposes an autoencoder for sets. An input set is encoded into a fixed-length representation using an attention mechanism (previously proposed by [1]). The decoder generates the output sequentially and the generated sequence is matched to the best-matching ordering of the target output set. Experiments are done on synthetic datasets to demonstrate properties of the learned representation. Pros - Experiments show that the autoencoder helps improve classification accuracy for small training set sizes on the shape classification task. - The analysis of how the decoder generates data is insightful. Cons - The experiments are on toy datasets only. Given the availability of point cloud datasets, for example KITTI, which has a widely used benchmark for point-cloud-based object detection, it would make the paper stronger if this model were benchmarked against published baselines. - The autoencoder does not seem to help much on the regression tasks, where even for the smaller training set size setting, directly using the encoder to solve the task often works best. Even finetuning is unable to recover from the pretrained weights. Therefore, it seems that the decoder (which is the novel aspect of this work) is perhaps not working well, or is not well suited to the regression tasks being considered. - The classification task, for which the learned representations work well empirically, seems to be geared towards representing object shape. It doesn't really require remembering each point. On the other hand, the regression tasks that could require remembering the points don't seem to benefit much from the autoencoder pretraining. This suggests that while the model is able to represent overall shape, it has a hard time remembering individual elements of the set. This seems like a drawback, since a general "set auto-encoder" should be able to perform a wide variety of tasks on the input set which could require remembering the set's elements. Quality This paper describes the proposed model quite well and provides encouraging preliminary results. Clarity The paper is easy to understand. Originality The novelty in the model is using a matching algorithm to find the best ordering of the target output set to match with the sequentially generated decoder output. However, the paper chooses one ranking-based matching scheme and does not compare to other alternatives. Significance This paper proposes a way of learning representations of sets which will be of broad interest across the machine learning community. These models are likely to become more relevant with the increasing prevalence of point cloud data. References [1] Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391.
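To make the alignment step concrete, the following sketch shows one way to reorder the target set so that it matches the decoder's output order before computing a loss. The paper uses a stable-marriage alignment; a Hungarian (minimum-cost) assignment is used here purely as a stand-in, which is my own substitution.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_predictions_to_targets(pred, target):
    """Align decoder outputs to target set elements before computing the loss.

    pred, target: arrays of shape (n, d). Hungarian assignment is a stand-in for
    the paper's stable-marriage alignment (an assumption on my part).
    """
    cost = np.linalg.norm(pred[:, None, :] - target[None, :, :], axis=-1)  # pairwise distances
    row, col = linear_sum_assignment(cost)
    return target[col]  # targets reordered to match the prediction order

pred = np.random.randn(5, 2)
target = np.random.permutation(pred + 0.01 * np.random.randn(5, 2))
aligned = align_predictions_to_targets(pred, target)
print(np.mean((pred - aligned) ** 2))  # reconstruction loss after alignment
```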
iclr_2018_H1vCXOe0b
In this paper, we propose a novel approach to interpret a well-trained classification model through systematically investigating effects of its hidden units on prediction making. We search for the core hidden units responsible for predicting inputs as the class of interest under the generative Bayesian inference framework. We model such a process of unit selection as an Indian Buffet Process, and derive a simplified objective function via the MAP asymptotic technique. The induced binary optimization problem is efficiently solved with a continuous relaxation method by attaching a Switch Gate layer to the hidden layers of interest. The resulting interpreter model is thus end-to-end optimized via standard gradient back-propagation. Experiments are conducted with two popular deep convolutional classifiers, respectively well-trained on the MNIST dataset and the CIFAR-10 dataset. The results demonstrate that the proposed interpreter successfully finds the core hidden units most responsible for prediction making. The modified model, only with the selected units activated, can hold correct predictions at a high rate. Besides, this interpreter model is also able to extract the most informative pixels in the images by connecting a Switch Gate layer to the input layer.
Pros - The paper proposes a novel formulation of the problem of finding hidden units that are crucial in making a neural network come up with a certain output. - The method seems to work well in terms of isolating a few hidden units that need to be kept while preserving classification accuracy. Cons - Sections 3.1 and 3.2 are hard to understand. There seem to be inconsistencies in the notation. For example, (1) It would help to clarify whether y^b_n is the prediction score or its transformation into [0, 1]. The usage is inconsistent. (2) It is not clear how "y^b_n can be expressed as \sum_{k=1}^K z_{nk}f_k(x_n)" in general. This is only true for the penultimate layer, and when y^b_n denotes the input to the output non-linearity. However, this analysis seems to be applied for any hidden layer and y^b_n is the output of the non-linearity unit ("The new prediction scores are transformed into a scalar ranging from 0 to 1, denoted as y^b_n.") (3) Section 3.1 denotes the DNN classifier as F(.), but section 3.2 denotes the same classifier as f(.). (4) Why is r_n called the "center"? I could not understand in what sense this is the center, and of what. It seems that the max value has been subtracted from all the logits fed into a softmax (which is a fairly standard operation). - The analysis seems to be about finding neurons that contribute evidence for a particular class. This does not address the issue of understanding why the network makes a certain prediction for a particular input. Therefore this approach will be of limited use. - The paper should include more analysis of how this method helps interpret the actions of the neural net, once the core units have been identified. Currently, the focus seems to be on demonstrating that the classifier performance is maintained as a significant fraction of hidden units are masked. However, there is not enough analysis on showing whether and how the identified hidden units help "interpret" the model. Quality The idea explored in the paper is interesting and the experiments are described in enough detail. However, the writing still needs to be polished. Clarity The problem formulation and objective function (Section 3.1) were hard to follow. Originality This approach to finding important hidden units is novel. Significance The paper addresses an important problem of trying to have more interpretable neural networks. However, it only identifies hidden units that are important for a class, not those that are important for any particular input. Moreover, the main thesis of the paper is to describe a method that helps interpret neural network classifiers. However, the experiments only focus on identifying important hidden units and fall short of actually providing an interpretation using these hidden units.
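To illustrate the gating mechanism being discussed, here is a minimal sketch of a continuous relaxation of a binary unit-selection mask. The exact parameterisation of the Switch Gate layer in the paper may differ; this only shows the idea of multiplying hidden activations by learnable gates in (0, 1).

```python
import numpy as np

def switch_gate(h, logits, temperature=1.0):
    """Continuous relaxation of a binary unit-selection mask.

    h: hidden activations of shape (batch, units); logits: one learnable scalar
    per unit. The sigmoid relaxation stands in for the paper's exact scheme.
    """
    gate = 1.0 / (1.0 + np.exp(-logits / temperature))  # relaxed binary switch per unit
    return h * gate, gate

h = np.random.randn(4, 8)
logits = np.random.randn(8)
gated, gate = switch_gate(h, logits)
print(np.round(gate, 2))  # near-zero gates correspond to masked (non-core) units
```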
iclr_2018_BJDH5M-AW
Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems. Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations. Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems. We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints. We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation. Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world. Our results show that adversarial examples are a practical concern for real-world systems.
The authors present a method to enable robust generation of adversarial visual inputs for image classification. They develop on the theme that 'real-world' transformations typically provide a countermeasure against adversarial attacks in the visual domain, to show that contextualising the adversarial exemplar generation by those very transformations can still enable effective adversarial example generation. They adapt an existing method for deriving adversarial examples to act under a projection space (effectively a latent-variable model) which is defined through a transformations distribution. They demonstrate the effectiveness of their approach in the 2D and 3D (simulated and real) domains. The paper is clear to follow and the objective employed appears to be sound. I like the idea of using 3D generation, and particularly, 3D printing, as a means of generating adversarial examples -- there is definite novelty in that particular exploration for adversarial examples. I did however have some concerns: 1. What precisely is the distribution of transformations used for each experiment? Is it a PCFG? Are the different components quantised such that they are discrete rvs, or are there still continuous rvs? (For example, is lighting discretised to particular locations or taken to be (say) a 3D Gaussian?) And on a related note, how were the number of sampled transformations chosen? Knowing the distribution (and the extent of it's support) can help situate the effectiveness of the number of samples taken to derive the adversarial input. 2. While choosing the distance metric in transformed space, LAB is used, but for the experimental results, l_2 is measured in RGB space -- showing the RGB distance is perhaps not all that useful given it's not actually being used in the objective. I would perhaps suggest showing LAB, maybe in addition to RGB if required. 3. Quantitative analysis: I would suggest reporting confidence intervals; perhaps just the 1st standard deviation over the accuracies for the true and 'adversarial' labels -- the min and max don't help too much in understanding what effect the monte-carlo approximation of the objective has on things. Moreover, the min and max are only reported for the 2D and rendered 3D experiments -- it's missing for the 3D printing experiment. 4. Experiment power: While the experimental setup seems well thought out and structured, the sample size (i.e, the number of entities considered) seems a bit too small to draw any real conclusions from. There are 5 exemplar objects for the 3D rendering experiment and only 2 for the 3D printing one. While I understand that 3D printing is perhaps not all that scalable to be able to rattle off many models, the 3D rendering experiment surely can be extended to include more models? Were the turtle and baseball models chosen randomly, or chosen for some particular reason? Similar questions for the 5 models in the 3D rendering experiment. 5. 3D printing experiment transformations: While the 2D and 3D rendering experiments explicitly state that the sampled transformations were random, the 3D printing one says "over a variety of viewpoints". Were these viewpoints chosen randomly? Most of these concerns are potentially quirks in the exposition rather than any issues with the experiments conducted themselves. For now, I think the submission is good for a weak accept –- if the authors address my concerns, and/or correct my potential misunderstanding of the issues, I'd be happy to upgrade my review to an accept.
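To make the expectation-over-transformation idea discussed above concrete, here is a toy sketch of the optimisation loop: the perturbation is updated against the average gradient over sampled transformations. The "classifier" is a linear model with logistic loss, the transformations are random scalings plus noise, and the constraint is a plain L2-style box; all of these are simplifications I introduce for illustration, not the paper's renderer, network, or LAB-space constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)  # toy linear classifier standing in for the network

def loss_and_grad(x, target):
    """Logistic loss of the toy classifier for class `target` in {-1, +1}."""
    margin = target * (w @ x)
    loss = np.log1p(np.exp(-margin))
    grad = -target * w / (1.0 + np.exp(margin))
    return loss, grad

def eot_attack(x0, target, steps=200, lr=0.1, eps=0.5, n_samples=16):
    """Expectation-over-Transformation sketch: average gradients over sampled transforms."""
    x = x0.copy()
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(n_samples):
            scale = rng.uniform(0.8, 1.2)
            noise = rng.normal(scale=0.05, size=x.shape)
            _, grad = loss_and_grad(scale * x + noise, target)
            g += scale * grad / n_samples        # chain rule through the transformation
        x -= lr * g                              # push toward the adversarial target class
        x = x0 + np.clip(x - x0, -eps, eps)      # stay close to the original input
    return x

x0 = rng.normal(size=20)
x_adv = eot_attack(x0, target=+1)
print(loss_and_grad(x0, +1)[0], loss_and_grad(x_adv, +1)[0])  # loss before vs. after
```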
iclr_2018_SkffVjUaW
Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of high amounts of features. These networks typically rely on a variety of regularization and pruning techniques to converge to less redundant states. We introduce a novel bottom-up approach to expand representations in fixed-depth architectures. These architectures start from just a single feature per layer and greedily increase width of individual layers to attain effective representational capacities needed for a specific task. While network growth can rely on a family of metrics, we propose a computationally efficient version based on feature time evolution and demonstrate its potency in determining feature importance and a networks' effective capacity. We demonstrate how automatically expanded architectures converge to similar topologies that benefit from lesser amount of parameters or improved accuracy and exhibit systematic correspondence in representational complexity with the specified task. In contrast to conventional design patterns with a typical monotonic increase in the amount of features with increased depth, we observe that CNNs perform better when there is more learnable parameters in intermediate, with falloffs to earlier and later layers.
This paper introduces a simple correlation-based metric to measure whether filters in neural networks are being used effectively, as a proxy for effective capacity. The authors then introduce a greedy algorithm that expands the different layers in a neural network until the metric indicates that additional features will end up not being used effectively. The application of this algorithm is shown to lead to architectures that differ substantially from hand-designed models with the same number of layers: most of the parameters end up in intermediate layers, with fewer parameters in earlier and later layers. This indicates that common heuristics to divide capacity over the layers of a network are suboptimal, as they tend to put most parameters in later layers. It's also nice that simpler tasks yield smaller models (e.g. MNIST vs. CIFAR in figure 3). The experimental section is comprehensive and the results are convincing. I especially appreciate the detailed analysis of the results (figure 3 is great). Although most experiments were conducted on the classic benchmark datasets of MNIST, CIFAR-10 and CIFAR-100, the paper also includes some promising preliminary results on ImageNet, which nicely demonstrates that the technique scales to more practical problems as well. That said, it would be nice to demonstrate that the algorithm also works for other tasks than image classification. I also like the alternative perspective compared to pruning approaches, which most research seems to have been focused on in the past. The observation that the cross-correlation of a weight vector with its initial values is a good measure for effective filter use seems obvious in retrospect, but hindsight is 20/20 and the fact is that apparently this hasn't been tried before. It is definitely surprising that a simple method like this ends up working this well. The fact that all parameters are reinitialised whenever any layer width changes seems odd at first, but I think it is sufficiently justified. It would be nice to see some comparison experiments as well though, as the intuitive thing to do would be to just keep the existing weights as they are. Other remarks: Formula (2) seems needlessly complicated because of all the additional indices. Maybe removing some of those would make things easier to parse. It would also help to mention that it is basically just a normalised cross-correlation. This is mentioned two paragraphs down, but should probably be mentioned right before the formula is given instead. page 6, section 3.1: "it requires convergent training of a huge architecture with lots of regularization before complexity can be introduced", I guess this should be "reduced" instead of "introduced".
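To spell out the metric under discussion, here is a small sketch of a per-filter normalised cross-correlation between current and initial weights. The exact normalisation used in the paper may differ slightly; this is the obvious zero-mean, unit-norm version.

```python
import numpy as np

def self_resemblance(w_now, w_init, eps=1e-8):
    """Per-filter normalised cross-correlation between current and initial weights.

    w_now, w_init: arrays of shape (num_filters, fan_in). A value near 1 means a
    filter has barely moved from its initialisation, which is read as a sign the
    extra capacity is not being used effectively.
    """
    a = w_now - w_now.mean(axis=1, keepdims=True)
    b = w_init - w_init.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    return num / den

w_init = np.random.randn(32, 75)                 # e.g. 32 filters of shape 3x5x5, flattened
w_now = w_init + 0.5 * np.random.randn(32, 75)   # pretend these are the trained weights
print(self_resemblance(w_now, w_init).round(2))
```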
iclr_2018_ryH20GbRW
Published as a conference paper at ICLR 2018 RELATIONAL NEURAL EXPECTATION MAXIMIZATION: UNSUPERVISED DISCOVERY OF OBJECTS AND THEIR INTERACTIONS Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real-world. For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved. In order to match real-world conditions this causal knowledge must be learned without access to supervised data. To address this problem we present a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion. It incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and learn efficiently. On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge. We demonstrate its ability to handle occlusion and show that it can extrapolate learned knowledge to scenes with different numbers of objects.
Summary --- This work applies a representation learning technique that segments entities to learn simple 2d intuitive physics without per-entity supervision. It adds a relational mechanism to Neural Expectation Maximization and shows that this mechanism provides a better simulation of bouncing balls in a synthetic environment. Neural Expectation Maximization (NEM) decomposes an image into K latent variables (vectors of reals) theta_k. A decoder network reconstructs K images from each of these latent variables and these K images are combined into a single reconstruction using pixel-wise mixture components that place more weight on pixels that match the ground truth. An encoder network f_enc() then updates the latent variables to better explain the reconstructions they produced. The neural nets are learned so that the latent variables reconstruct the image well when used by the mixture model and match a prior otherwise. Previously NEM has been shown to learn variables which represent individual objects (simple shapes) in a compositional manner, using one variable per object. Other recent neural models can learn to simulate simple 2d physics environments (balls bouncing around in a 2d plane). That work supervises the representation for each entity (ball) explicitly using states (e.g. position and velocity of balls) which are known from the physics simulator used to generate the training data. The key feature of these models is the use of a pairwise embedding of an object and its neighbors (message passing) to predict the object's next state in the simulation. This paper combines the two methods to create Relational Neural Expectation Maximization (R-NEM), allowing direct interaction at inference time between the latent variables that encode a scene. The encoder network from NEM can be seen as a recurrent network which takes one latent variable theta_k at time t and some input x to produce the next latent variable theta_k at time t+1. R-NEM adds a relational module which computes an embedding used as a third input to the recurrent encoder. Like previous relational models, this one uses a pairwise embedding of the object being updated (object k) and its neighbors. Unlike previous neural physics models, R-NEM uses a soft attention mechanism to determine which objects are neighbors and which are not. Also unlike previous neural models, this method does not require per-object supervision. Experiments show that R-NEM learns compositional representations that support intuitive physics more effectively than ablative baselines. These experiments show: 1) R-NEM reconstructs images more accurately than baselines (RNN/LSTM) and NEM (without object interaction). 2) R-NEM is trained with 4 objects per image. It does a bit worse at reconstructing images with 6-8 objects per image, but still performs better than baselines. 3) A version of R-NEM without neighborhood attention in the relation module matches the performance of R-NEM using 4 objects and performs worse than R-NEM at 6-8 objects. 4) R-NEM learns representations which factorize into one latent variable per object as measured by the Adjusted Rand Index, which compares NEM's pixel clustering to a ground truth clustering with one cluster per object. 5) Qualitative and quantitative results show that R-NEM can simulate 2d ball physics for many time steps more effectively than an RNN, while only suffering gradual divergence from the ground truth simulation.
Qualitative results show that the attentional mechanism attends to objects which are close to the context object together, acting like the heuristic neighborhood mechanism from previous work. Follow up experiments extend the basic setup significantly. One experiment shows that R-NEM demonstrates object permanence by correctly tracking a collision when one of the objects is completely occluded. Another experiment applies the method to the Space Invaders Atari game, showing that it treats columns of aliens as entities. This representation aligns with the game's goal. Strengths --- The paper presents a clear, convincing, and well illustrated story. Weaknesses --- * RNN-EM BCE results are missing from the simulation plot (right of figure 4). Minor comments/concerns: * 2nd paragraph in section 4: Are parameters shared between these 3 MLPs (enc,emb,eff)? I guess not, but this is ambiguous. * When R-NEM is tested against 6-8 balls is K set to the number of balls plus 1? How does performance vary with the number of objects? * Previous methods report performance across simulations of a variety of physical phenomena (e.g., see "Visual Interaction Networks"). It seems that supervision isn't needed for bouncing ball physics, but I wonder if this is the case for other kinds of phenomena (e.g., springs in the VIN paper). Can this method eliminate the need for per-entity supervision in this domain? * A follow up to the previous comment: Could a supervised baseline that uses per-entity state supervision and neural message passsing (like the NPE from Chang et. al.) be included? * It's a bit hard to qualitatively judge the quality of the simulations without videos to look at. Could videos of simulations be uploaded (e.g., via anonymous google drive folder as in "Visual Interaction Networks")? * This uses a neural message passing mechanism like those of Chang et. al. and Battaglia et. al. It would be nice to see a citation to neural message passing outside of the physics simulation domain (e.g. to "Neural Message Passing for Quantum Chemistry" by Gilmer et. al. in ICML17). * Some work uses neighborhood attention coefficients for neural message passing. It would be nice to see a citation included. * See "Neighborhood Attention" in "One-Shot Imitation Learning" by Duan et. al. in NIPS17 * Also see "Programmable Agents" by Denil et. al. Final Evaluation --- This paper clearly advances the body of work on neural intuitive physics by incorporating NEM entity representation to allow for less supervision. Alternatively, it adds a message passing mechanism to the NEM entity representation technique. These are moderately novel contributions and there are only minor weaknesses, so this is a clear accept.
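For concreteness, here is a toy sketch of the relational step as I understand it: for each focus object, pairwise embeddings with the other objects are scored by attention weights and the weighted effects are summed into a single message. Single weight matrices stand in for the paper's small MLPs, which is a simplification of mine.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def relational_update(theta, W_emb, W_eff, w_att):
    """One relational step for K latent object variables theta of shape (K, d)."""
    K, d = theta.shape
    messages = np.zeros_like(theta)
    for k in range(K):
        pairs = np.concatenate(
            [np.repeat(theta[k][None], K - 1, axis=0),
             np.delete(theta, k, axis=0)], axis=1)      # (K-1, 2d) focus/context pairs
        emb = np.tanh(pairs @ W_emb)                    # pairwise embeddings
        att = softmax(emb @ w_att)                      # neighborhood attention weights
        eff = np.tanh(emb @ W_eff)                      # per-pair effect on the focus object
        messages[k] = (att[:, None] * eff).sum(axis=0)  # aggregated message for object k
    return messages

K, d, h = 4, 8, 16
rng = np.random.default_rng(0)
theta = rng.normal(size=(K, d))
msg = relational_update(theta, rng.normal(size=(2 * d, h)),
                        rng.normal(size=(h, d)), rng.normal(size=h))
print(msg.shape)  # one message per latent object, fed as extra input to the encoder
```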
iclr_2018_B1twdMCab
Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite background knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit background knowledge in NLU models. A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing background knowledge in the form of free-text statements, together with the taskspecific inputs. Strong performance on the tasks of document question answering (DQA) and recognizing textual entailment (RTE) demonstrate the effectiveness and flexibility of our approach. Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way.
The main emphasis of this paper is how to add background knowledge so as to improve the performance of NLU (specifically QA and NLI) systems. They adopt the sensible perspective that background knowledge might most easily be added by providing it in text format. However, in this paper, the way it is added is simply by updating word representations based on this extra text. This seems too simple to really be the right way to add background knowledge. In practice, the biggest win of this paper turns out to be that you can get quite a lot of value by sharing contextualized word representations between all words with the same lemma (done by linguistic preprocessing; the paper never says exactly how, not even if you read the supplementary material). This seems a useful observation which it would be easy to apply everywhere and which shows fairly large utility from a bit of linguistically sensitive matching! As the paper notes, this type of sharing is the main delta in this paper from simply using a standard deep LSTM (which the paper claims to not work on these data sets, though I'm not quite sure couldn't be made to work with more tuning). pp. 6-7: The main thing of note seems to be that sharing of representations between words with the same lemma (which the tables refer to as "reading" is worth a lot (3.5-6.0%), in every case rather more than use of background knowledge (typically 0.3-1.5%). A note on the QA results: The QA results are certainly good enough to be in the range of "good systems", but none of the results really push the SOTA. The best SQuAD (devset) results are shown as several percent below the SOTA. In the table the TriviaQA results are shown as beating the SOTA, and that's fair wrt published work at the time of submission, but other submissions show that all of these results are below what you get by running the DrQA (Chen et al. 2017) system off-the-shelf on TriviaQA, so the real picture is perhaps similar to SQuAD, especially since DrQA is itself now considerably below the SOTA on SQUAD. Similar remarks perhaps apply to the NLI results. p.7 In the additional NLI results, it is interesting and valuable to note that the lemmatization and knowledge help much more when amounts of data (and the covarying dimensionality of the word vectors) is much smaller, but the fact that the ideas of this paper have quite little (or even negative) effects when run on the full data with full word vectors on top of the ESIM model again draws into question whether enough value is being achieved from the world knowledge. Biggest question: - Are word embeddings powerful enough as a form of memory to store the kind of relational facts that you are accessing as background knowledge? Minor notes: - The paper was very well written/edited. The only real copyediting I noticed was in the conclusion: and be used ➔ and can be used; that rely on ➔ that relies on. - Should reference to (Manning et al. 1999) better be to (Manning et al. 2008) since the context here appears to be IR systems? - On p.3 above sec 3.1: What is u? Was that meant to be z? - On p.8, I'm a bit suspicious of the "Is additional knowledge used?" experiment which trains with knowledge and then tests without knowledge. It's not surprising that this mismatch might hurt performance, even if the knowledge provided no incremental value over what could be gained from standard word vectors alone. - In the supplementary material the paper notes that the numbers are from the best result from 3 runs. 
This seems to me somewhat weaker experimental practice than reporting an average over k runs, preferably for k somewhat larger than 3.
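To make the lemma-sharing idea the review calls "reading" concrete, here is one simple way to realise it: replace each contextual token vector by the mean over all tokens sharing its lemma. The lemmas are assumed to come from an external lemmatiser, and the pooling choice is my own guess rather than the paper's exact mechanism.

```python
import numpy as np

def share_by_lemma(token_vecs, lemmas):
    """Pool contextual token vectors across occurrences of the same lemma.

    token_vecs: (n_tokens, d) array; lemmas: list of n_tokens lemma strings.
    """
    pooled = token_vecs.copy()
    for lemma in set(lemmas):
        idx = [i for i, l in enumerate(lemmas) if l == lemma]
        pooled[idx] = token_vecs[idx].mean(axis=0)  # same representation for same lemma
    return pooled

vecs = np.random.randn(5, 4)
lemmas = ["run", "dog", "run", "be", "dog"]
print(share_by_lemma(vecs, lemmas))
```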
iclr_2018_HJUOHGWRb
We introduce contextual explanation networks (CENs)-a class of models that learn to predict by generating and leveraging intermediate explanations. CENs are deep networks that generate parameters for context-specific probabilistic graphical models which are further used for prediction and play the role of explanations. Contrary to the existing post-hoc model-explanation tools, CENs learn to predict and to explain jointly. Our approach offers two major advantages: (i) for each prediction, valid instance-specific explanations are generated with no computational overhead and (ii) prediction via explanation acts as a regularization and boosts performance in low-resource settings. We prove that local approximations to the decision boundary of our networks are consistent with the generated explanations. Our results on image and text classification and survival analysis tasks demonstrate that CENs are competitive with the state-of-the-art while offering additional insights behind each prediction, valuable for decision support.
The paper is clearly written; it works on a popular idea of combining graphical models and neural nets. This work could benefit from differentiating itself more from previous literature. One key component is interpretability, which comes from the use of graphical models. The authors claim that the previous art directly integrates neural networks into the graphical models as components, which renders the models uninterpretable. However, it is unclear, following the same logic, why the proposed method has interpretability. After all, how one goes from the context to the parameters of the graphical models is still uninterpretable. Specifically, it would be helpful to pinpoint what is special in this model that makes it interpretable, compared to works like Gao, Y., Archer, E. W., Paninski, L., & Cunningham, J. P. (2016), NIPS, or Johnson, M., Duvenaud, D. K., Wiltschko, A., Adams, R. P., & Datta, S. R. (2016), NIPS. Also, is there any methodological advancement essential to CENs? The other idea is to go context-specific. This idea has been present in language modeling, for example in amortized embedding models like M. Rudolph, F. Ruiz, S. Athey, and D. Blei (2017), NIPS, and L. Liu, F. Ruiz, S. Athey, and D. Blei (2017), NIPS. The application to medical data is interesting, but it would be helpful for readers to understand whether the idea in this work is fundamentally different from these previous ideas from amortized inference. A final thing: a common challenge with composing graphical models and neural networks (in interpretable or uninterpretable ways) is that the neural networks will usually eat up all the representational power, and the variance captured by the graphical models becomes negligible. To this end, the power of graphical models for interpretability is limited. Interpretability in this case is not much different from fitting only a neural network: one could take the penultimate layer as "context-specific features" and claim that we are composing a linear model with a neural network, with the linear model being interpretable. It would be interesting to be clear about how the authors get around this issue.
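For reference, here is a minimal sketch of the prediction-through-explanation pattern being debated: a context encoder emits the weights of a per-instance linear model over interpretable features, and the prediction is made through that linear model. The single matrix standing in for the deep encoder is my simplification.

```python
import numpy as np

def cen_predict(context, features, W_enc):
    """Contextual-explanation sketch.

    context: (d_c,) e.g. an image embedding; features: (d_x,) interpretable
    attributes; W_enc: (d_c, d_x) stand-in for the deep context encoder.
    """
    theta = np.tanh(context @ W_enc)   # parameters of the instance-specific linear model
    logit = theta @ features           # prediction made *through* the explanation
    return 1.0 / (1.0 + np.exp(-logit)), theta

rng = np.random.default_rng(0)
p, theta = cen_predict(rng.normal(size=32), rng.normal(size=10),
                       rng.normal(size=(32, 10)))
print(round(float(p), 3), theta.round(2))  # theta is the per-instance explanation
```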
iclr_2018_ry1arUgCW
Published as a conference paper at ICLR 2018 DORA THE EXPLORER: DIRECTED OUTREACHING REINFORCEMENT ACTION-SELECTION Exploration is a fundamental aspect of Reinforcement Learning, typically implemented using stochastic action-selection. Exploration, however, can be more efficient if directed toward gaining new world knowledge. Visit-counters have been proven useful both in practice and in theory for directed exploration. However, a major limitation of counters is their locality. While there are a few model-based solutions to this shortcoming, a model-free approach is still missing. We propose E-values, a generalization of counters that can be used to evaluate the propagating exploratory value over state-action trajectories. We compare our approach to commonly used RL techniques, and show that using E-values improves learning and performance over traditional counters. We also show how our method can be implemented with function approximation to efficiently learn continuous MDPs. We demonstrate this by showing that our approach surpasses state of the art performance in the Freeway Atari 2600 game.
The paper proposes an approach to exploration based on initializing a value function to 1 everywhere, then letting the value decay back toward zero as the state space is explored. I like the idea a lot. I don't really like the paper, though. I'd really like to see a strong theoretical and/or empirical justification for it, and both are lacking. On the theoretical side, can a bound be proven for this approach, even in the tabular case? On the empirical side, there are more (and more recent!) testbeds that have come to define the field---the mountain car problem is just not sufficient to convincingly argue that the method scales and generalizes. My intuition is that such an approach ought to be effective, but I really want to see additional evidence. Given the availability of so many RL testbeds, I worry that it had been tried but failed. Detailed comments: "Where γ is" -> ", <newline> where γ is". "The other alternative" -> "The alternative"? "without learning a model (Mongillo et al., 2014).": Seems like an odd choice for a citation for model-free RL. Perhaps select the paper that first used the term? Or an RL survey? Right before Section 1.1, put a period after the Q-learning update equation. "new states may" -> "new states, may". "such approaches leads" -> "such approaches lead". "they still fails" -> "they still fail". "evaluated with respect only to its immediate outcome": Not so. Several of the cited papers use counters to determine which states are "known" and then solve an MDP to direct exploration past immediate outcomes. " exploration bonus(Little & Sommer, 2014)" -> " exploration bonus (Little & Sommer, 2014)". "n a model-free settings." -> "n model-free settings.". " Therefore, a satisfying approach for propagating directed exploration in model-free reinforcement learning is still missing. ": I think you should cite http://research.cs.rutgers.edu/~nouri/papers/nips08mre.pdf , which also combines a kind of counter idea with function approximation to improve exploration. "initializing E-values to 1": I like this idea. I wonder if one could prove bounds similar to the delayed Q-learning algorithm with this approach. It is reminiscent of https://arxiv.org/pdf/1205.2606.pdf , which also drives exploration by beginning with an overly optimistic estimate and letting the data (in a function approximation setting) decay the influence of this initialization. "So after visited n times" -> "So after being visited n times". "figure 1a" -> "Figure 1a". (And, in other places.) "An important property of E-values is that it decreases over repetition" -> "An important property of E-values is that they decrease over repetition". "t utilize counters, can" -> "t utilize counters can". " hence we were interested a convergence measure": Multiple problems in this sentence, please fix. Figure 2: How many states are in this environment? Some description is needed. Figure 3: The labels in this figure (and all the figures) are absurdly small and, hence, unreadable. "now turn to show that by using counters," -> "now turn to showing that, by using counters,". Theorem 3.1: I'm not quite getting why we want to take a stochastic rule and make it deterministic. Note that standard PAC-MDP algorithms choose deterministically. It's not clear why we'd want to start from a stochastic rule. " models(Bellemare" -> " models (Bellemare". "Efficient memory-based learning for robot control": This reference is incomplete. (I'm skeptical that it represents the first use of this problem, but I can't check it.) 
"Softmax exploration fail" -> "Softmax exploration fails". "whom also analyzed" -> "who also analyzed". "non-Markovity" -> "non-Markovianness"?
iclr_2018_r1hsJCe0Z
We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code. The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated. In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program. Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs. Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete. Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) then normalizes these scores together so they can compete against one another in comparable probability space. We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs. Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model.
This paper introduces a neural network architecture for fixing semantic bugs in code. Focusing on four specific types of bugs, the proposed two-stage approach first generates a set of candidate repairs and then scores the repair candidates using a neural network trained on synthetically introduced bug/repair examples. Compared to a prior sequence-to-sequence approach, the proposed approach achieved markedly better accuracy on both synthetic and real bug datasets. On a real bug dataset constructed from GitHub commits, it was shown to outperform humans. I find the application of neural networks to the problem of code repair to be highly interesting. The proposed approach is highly specialized for the specific four types of bugs considered here and appears to be effective for fixing these specific bug types, especially in comparison to the sequence-to-sequence model based approach. However, I was wondering whether limiting the output choices (based on the bug type) is going a long way toward improving the performance compared to seq-2-seq, which does not utilize such output constraints. What if we introduce the same type of constraints for the seq-2-seq model? For example, one can simply modify the decoding process such that for locations that are not in the candidate set, the network simply makes no change, and for candidate-repair locations, the output space is limited to the specific choices provided in the candidate set. This would provide a fairer comparison between the different models. Right now it is not clear how much of the observed performance gain is due to the use of these constraints on the output space. Is there any control mechanism used to ensure that the real bug test set does not overlap with the training set? This is not clear to me. I find the comparison result to human performance to be interesting and somewhat surprising. This seems quite impressive. The presented example where a human makes a mistake but the algorithm is correct is informative and provides some potential explanation for this. But it also raises a question. The specific example snippet could be considered to be correct when placed in a different context. Bugs are context-sensitive artifacts. The setup of considering each function independently without any context seems like an inherent limitation in the types of bugs that this method could potentially address. Some discussion on the limitations of the proposed method seems to be warranted. Pro: Interesting application. Impressive results on a difficult task. Nice discussion of results and informative examples. Clear presentation, easy to read. Con: The comparison to the baseline seq-2-seq does not seem quite fair. The method appears to be highly specialized to the four bug types. It is not clear how generalizable it will be to more complex bugs, and to real application scenarios where we are dealing with open-world classification and there is no fixed set of possible bugs.
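To illustrate the "compete" stage the abstract describes, here is a small sketch in which raw scores for candidate repairs (across locations and repair types) are normalised into a single probability distribution so they can be compared directly. The candidate strings and scores below are hypothetical placeholders, not examples from the paper.

```python
import numpy as np

def compete(candidate_scores):
    """Normalise raw candidate scores into one probability distribution (softmax)."""
    scores = np.asarray(candidate_scores, dtype=float)
    scores -= scores.max()                       # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs

# Hypothetical candidates: (location, proposed fix) pairs with model scores.
candidates = [("line 3: ==  ->  !=", 2.1),
              ("line 7: i  ->  j", 0.4),
              ("line 9: and  ->  or", -1.0)]
probs = compete([score for _, score in candidates])
best = max(zip(candidates, probs), key=lambda t: t[1])
print(best)  # the single highest-probability repair
```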
iclr_2018_S1sqHMZCb
Published as a conference paper at ICLR 2018 NERVENET: LEARNING STRUCTURED POLICY WITH GRAPH NEURAL NETWORKS We address the problem of learning structured policies for continuous control. In traditional reinforcement learning, policies of agents are learned by multi-layer perceptrons (MLPs) which take the concatenation of all observations from the environment as input for predicting actions. In this work, we propose NerveNet to explicitly model the structure of an agent, which naturally takes the form of a graph. Specifically, serving as the agent's policy network, NerveNet first propagates information over the structure of the agent and then predict actions for different parts of the agent. In the experiments, we first show that our NerveNet is comparable to state-of-the-art methods on standard MuJoCo environments. We further propose our customized reinforcement learning environments for benchmarking two types of structure transfer learning tasks, i.e., size and disability transfer, as well as multi-task learning. We demonstrate that policies learned by NerveNet are significantly more transferable and generalizable than policies learned by other models and are able to transfer even in a zero-shot setting.
The authors present an interesting application of Graph Neural Networks to learning policies for controlling "centipede" robots of different lengths. They leverage the non-parametric nature of graph neural networks to show that their approach is capable of transferring policies to different robots more quickly than other approaches. The significance of this work is in its application of GNNs to a potentially practical problem in the robotics domain. The paper suffers from some clarity/presentation issues that will need to be improved. Ultimately, the contribution of this paper is rather specific, yet the authors show the clear advantage of their technique for improved performance and transfer learning on some agent types within this domain. Some comments: - Significant: A brief statement of the paper's "contributions" is also needed; it is unclear at first glance what portions of the work are the authors' own contributions versus prior work, particularly in the section describing the GNN theory. - Abstract: I take issue with the phrase "are significantly better than policies learned by other models", since this is not universally true. While there is a clear benefit to their technique for the centipede and snake models, the performance on the other agents is mostly comparable, rather than "significantly better"; this should be reflected in the abstract. - Figure 1 is instructive, but another figure is needed to better illustrate the algorithm (including how the state of the world is mapped to the graph state h, how these "message" are passed between nodes, and how the final graph states are used to develop a policy). This would greatly help clarity, particularly for those who have not seen GNNs before, and would make the paper more self-contained and easier to follow. The figure could also include some annotated examples of the input spaces of the different joints, etc. Relatedly, Sec. 2.2.2 is rather difficult to follow because of the lack of a figure or concrete example (an example might help the reader understand the procedure without having to develop an intuition for GNNs). - There is almost certainly a typo in Eq. (4), since it does not contain the aggregated message \bar{m}_u^t. Smaller issues / typos: - Abstract: please spell out spell out multi-layer perceptrons (MLP). - Sec 2.2: "servers" should be "serves" - "performance By" on page 4 is missing a "." Pros: - The paper presents an interesting application of GNNs to the space of reinforcement learning and clearly show the benefits of their approach for the specific task of transfer learning. - To the best of my knowledge, the paper presents an original result and presents a good-faith effort to compare to existing, alternative systems (showing that they outperform on the tasks of interest). Cons: - The contributions of the paper should be more clearly stated (see comment above). - The section describing their approach is not "self contained" and is difficult for an unlearned reader to follow. - The problem the authors have chosen to tackle is perhaps a bit "specific", since the performance of their approach is only really shown to exceed the performance on agents, like centipedes or snakes, which have this "modular" quality. I certainly hope the authors improve the quality of the theory section; the poor presentation here brings down the rest of the paper, which is otherwise an easy read.
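Since the review asks for a figure making the propagation concrete, here is my own toy sketch of one message-passing step over the agent graph: shared weight matrices stand in for the paper's message, update, and output networks, which is what makes the policy size-agnostic. The dense implementation and the 3-joint chain are illustrative assumptions.

```python
import numpy as np

def nervenet_step(h, edges, W_msg, W_upd, W_out):
    """One propagation step over the agent graph.

    h: (n_nodes, d) node states built from each joint's observations;
    edges: list of (sender, receiver) pairs following the robot's structure.
    """
    agg = np.zeros_like(h)
    for s, r in edges:
        agg[r] += np.tanh(h[s] @ W_msg)   # message from a neighbouring joint
    h_new = np.tanh(h @ W_upd + agg)      # node state update
    actions = h_new @ W_out               # one action (e.g. torque) per joint
    return h_new, actions

rng = np.random.default_rng(0)
d = 8
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]  # a 3-joint chain, edges in both directions
h = rng.normal(size=(3, d))
h, actions = nervenet_step(h, edges, rng.normal(size=(d, d)),
                           rng.normal(size=(d, d)), rng.normal(size=(d, 1)))
print(actions.ravel().round(2))
```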
iclr_2018_S1uxsye0Z
ADAPTIVE DROPOUT WITH RADEMACHER COMPLEXITY REGULARIZATION We propose a novel framework to adaptively adjust the dropout rates for the deep neural network based on a Rademacher complexity bound. The state-of-the-art deep learning algorithms impose a dropout strategy to prevent feature co-adaptation. However, choosing the dropout rates remains an art of heuristics or relies on empirical grid-search over some hyperparameter space. In this work, we show the network Rademacher complexity is bounded by a function related to the dropout rate vectors and the weight coefficient matrices. Subsequently, we impose this bound as a regularizer and provide a theoretically justified way to trade off between model complexity and representation power. Therefore, the dropout rates and the empirical loss are unified into the same objective function, which is then optimized using the block coordinate descent algorithm. We discover that the adaptively adjusted dropout rates converge to some interesting distributions that reveal meaningful patterns. Experiments on the task of image and document classification also show our method achieves better performance compared to the state-of-the-art dropout algorithms.
==Main comments The authors connect dropout parameters to a bound of the Rademacher complexity (Rad) of the network. While it is great to see deep learning techniques inspired by learning theory, I think the paper makes too many leaps and the Rad story is ultimately unconvincing. Perhaps it is better to start with the resulting regularizer, and the interesting direct optimization of dropout parameters. In its current form, the following leaps are problematic and were not addressed in the paper: 1) Why is adding Rad as a regularizer reasonable? Rad is usually hard to compute, and most useful for bounding the generalization error. It would be interesting if it also turns out to be a good regularizer, but the authors do not say why nor cite anything. Like the VC dimension, Rad itself depends on the model class, and cannot be directly optimized. Even if you can somehow optimize over the model class, these quantities give very loose bounds, and do not equal the generalization error. For example, I feel even just adding the actual generalization error bound is more natural. Would it make sense to just add Rad to the objective in this way for a linear model? 2) Why is it reasonable to go from a regularizer based on Rad to a loose bound of Rad? The actual resulting regularizer turns out to be a weight penalty, but this seems to be a rather loose bound that might not have too much to do with Rad anymore. There should be some analysis on how loose this bound is, and whether this looseness matters at all. The empirical results themselves seem reasonable, but since the results are not actually better than simpler methods on the corresponding tasks, the interpretation is less convincing. After all, it seems that the proposed method had several parameters that were tuned, where the analogous parameters are not present in the competing methods. And the per-unit dropout rates are themselves additional parameters, but are they actually a good use of parameters? ==Minor comments The optimization is perhaps also not quite right, since this requires taking the gradient of the dropout parameter in the original objective. The authors point out that one can use the mean, but that is more problematic for the gradient than for normal forward predictions. The gradient used for regular learning is not based on the mean prediction, but rather the samples. The tiny columns surrounding figures are ugly and hard to read.
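To make the discussion of the regularizer concrete, here is an illustrative sketch of how dropout retain rates and weight norms could enter the same penalty; Rademacher-type bounds for feed-forward nets typically involve a product of per-layer norms. The specific form below (retain rate times an L1-type layer norm) is my own stand-in, not the paper's actual bound.

```python
import numpy as np

def rad_style_regularizer(weights, keep_probs):
    """Illustrative penalty coupling dropout retain rates with per-layer weight norms.

    weights: list of layer weight matrices; keep_probs: one retain probability per
    layer. The point is only that the dropout rates enter the same objective as the
    weights, so both can be optimised jointly (e.g. by block coordinate descent).
    """
    reg = 1.0
    for W, p in zip(weights, keep_probs):
        reg *= p * np.abs(W).sum(axis=0).max()   # retain rate times a layer norm
    return reg

weights = [np.random.randn(20, 50), np.random.randn(50, 10)]
keep = np.array([0.9, 0.7])
print(rad_style_regularizer(weights, keep))
```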
iclr_2018_SJiHXGWAZ
DIFFUSION CONVOLUTIONAL RECURRENT NEURAL NETWORK: DATA-DRIVEN TRAFFIC FORECASTING Spatiotemporal forecasting has various applications in neuroscience, climate and transportation domain. Traffic forecasting is one canonical example of such learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (3) inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large scale road network traffic datasets and observe consistent improvement of 12% -15% over state-of-the-art baselines.
The paper proposes to build a graph where the edge weight is defined using the road network distance, which is shown to be more realistic than the Euclidean distance. The defined diffusion convolution operation is essentially conducting random walks over the road segment graph. To avoid the expensive matrix operation for the random walk, it empirically shows that K = 3 hops of the random walk can give good performance. The outputs of the graph convolution operation are then fed into a sequence-to-sequence architecture with the GRU cell to model the temporal dependency. Experiments show that the proposed architecture can achieve good performance compared to classic time series baselines and several simplified variants of the proposed model. Although the paper argues that several existing deep-learning based approaches may not be directly applied in the current setting, either due to using Euclidean distance or undirected graph structure, the comparisons are not persuasive. For example, the approach in the paper "DeepTransport: Learning Spatial-Temporal Dependency for Traffic Condition Forecasting" also considers directed graphs and a diffusion effect from 2 or 3 hops away in the neighboring subgraph of a target road segment. Furthermore, the paper proposes to use two convolution components in Equation 2, each of which corresponds to the out-degree or in-degree direction, respectively. This effectively increases the number of model parameters to learn. Compared to the existing spectral graph convolution approach, it is still not clear how it would perform with the same number of parameters. The experiments would be improved by comparing with "Spatio-temporal graph convolutional neural network: A deep learning framework for traffic forecasting" using roughly the same number of parameters.
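For reference, here is a small sketch of the bidirectional K-hop diffusion convolution under discussion: the forward and reverse random-walk transition matrices of the directed road graph are applied K times with separate filter coefficients. Real implementations would use sparse matrix-vector products rather than the dense version shown here, and the random graph is only a placeholder.

```python
import numpy as np

def diffusion_conv(X, W_adj, theta_out, theta_in, K=3):
    """Bidirectional diffusion convolution on a directed graph.

    X: (n_nodes, d) node features; W_adj: (n_nodes, n_nodes) weighted adjacency;
    theta_out, theta_in: (K,) filter coefficients for the out- and in-degree walks.
    """
    P_fwd = W_adj / W_adj.sum(axis=1, keepdims=True)       # out-degree normalised walk
    P_bwd = W_adj.T / W_adj.T.sum(axis=1, keepdims=True)   # in-degree normalised walk
    out = np.zeros_like(X)
    H_f, H_b = X.copy(), X.copy()
    for k in range(K):
        out += theta_out[k] * H_f + theta_in[k] * H_b
        H_f, H_b = P_fwd @ H_f, P_bwd @ H_b                # one more hop of diffusion
    return out

rng = np.random.default_rng(0)
A = rng.random((6, 6)) * (rng.random((6, 6)) < 0.5) + np.eye(6)  # toy directed graph
X = rng.normal(size=(6, 2))
print(diffusion_conv(X, A, rng.normal(size=3), rng.normal(size=3)).shape)
```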
iclr_2018_SJn0sLgRb
Data augmentation is a widely used technique in many machine learning tasks, such as image classification, to virtually enlarge the training dataset size and avoid overfitting. Traditional data augmentation techniques for image classification tasks create new samples from the original training data by, for example, flipping, distorting, adding a small amount of noise to, or cropping a patch from an original image. In this paper, we introduce a simple but surprisingly effective data augmentation technique for image classification tasks. With our technique, named SamplePairing, we synthesize a new sample from one image by overlaying another image randomly chosen from the training data (i.e., taking an average of two images for each pixel). By using two images randomly selected from the training set, we can generate N^2 new samples from N training samples. This simple data augmentation technique significantly improved classification accuracy for all the tested datasets; for example, the top-1 error rate was reduced from 33.5% to 29.0% for the ILSVRC 2012 dataset with GoogLeNet and from 8.22% to 6.93% in the CIFAR-10 dataset. We also show that our SamplePairing technique largely improved accuracy when the number of samples in the training set was very small. Therefore, our technique is more valuable for tasks with a limited amount of training data, such as medical imaging tasks.
The paper investigates a method of data augmentation for image classification, where two images from the training set are averaged together as input, but the label from only one image is used as a target. Since this scheme is asymmetric and uses quite unrealistic input images, a training scheme is used where the technique is only enabled in the middle of training (not very beginning or end), and in an alternating on-off fashion. This improves classification performance nicely on a variety of datasets. This is a simple technique, and the paper is concise and to the point. However, I would have liked to see a few additional comparisons. First, this augmentation technique seems to have two components: One is the mixing of inputs, but another is the effective dropping of labels from one of the two images in the pair. Which of these are more important, and can they be separated? What if some of the images' labels are changed at random, for half the images in a minibatch, for example? This would have the effect of random label changes, but without the input mixing. Likewise, what if both labels in the pair are used as targets (with 0.5 assigned to each in the softmax target)? This would mix the images, but keep targets intact. Second, the bottom of p.3 says that multiple training procedures were evaluated, but I'd be interested to see the results of some of these. In particular, is it important to alternate enabling and disabling SamplePairing, or does it also work to mix samples with and without it in each minibatch (e.g. 3/4 of the minibatch with pairing augmentation, and 1/4 without it)? I liked the experiment mixing images from within a restricted training set composed of a subset of the CIFAR images, compared to mixing these images with CIFAR training set images outside the restricted sample (p.5 and Fig 5). This suggests to me, however, that it's possible the label manipulations may play an important role. Or, is an explanation why this performs not as well that the network will train these mixing images to random targets (that of the training image in the pair), and never see this example again, whereas by using the training set alone, the mixing image is likely to be repeated with its correct label? Some more discussion on this would be nice. Overall, I think this is an interesting technique that appears to achieve nice results. It could be investigated deeper at some key points.
iclr_2018_BkIkkseAZ
In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset. We look at this problem in the setting where the number of parameters is greater than the number of sampled points. We show that for a wide class of differentiable activation functions (this class involves most nonlinear functions and excludes piecewise linear functions), arbitrary first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular. We essentially show that these non-singular hidden layer matrices satisfy a "good" property for this broad class of activation functions. Techniques involved in proving this result inspire us to look at a new algorithmic framework, where in between two gradient steps on the hidden layer, we add a stochastic gradient descent (SGD) step on the output layer. In this new algorithmic framework, we extend our earlier result and show that for all finite iterations the hidden layer satisfies the "good" property mentioned earlier, therefore partially explaining the success of noisy gradient methods and addressing the issue of data independence of our earlier result. Both of these results are easily extended from a square hidden layer matrix to a flat (wide) one. The results are applicable even if the network has more than one hidden layer, provided all inner hidden layers are arbitrary, satisfy non-singularity, all activations are from the given class of differentiable functions, and optimization is only with respect to the outermost hidden layer. Separately, we also study the smoothness properties of the objective function and show that it is actually Lipschitz smooth, i.e., its gradients do not change sharply. We use the smoothness properties to guarantee asymptotic convergence of O(1/number of iterations) to a first-order optimal solution.
This paper aims to study some of the theoretical properties of the global optima of single-hidden layer neural networks and also the convergence to such a solution. I think there are some interesting arguments made in the paper e.g. Lemmas 4.1, 5.1, 5.2, and 5.3. However, as I started reading beyond intro I increasingly got the sense that this paper is somewhat incomplete e.g. while certain claims are made (abstract/intro) the theoretical justification are rather far from these claims. Of course there is a chance that I might be misunderstanding some things and happy to adjust my score based on the discussions here. Detailed comments: 1) My main concern is that the abstract and intro claims things that are never proven (or even stated) in the rest of the paper Example 1 from abstract: “We show that for a wide class of differentiable activation functions (this class involved “almost” all functions which are not piecewise linear), we have that first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular.” This is certainly not proven and in fact not formally stated anywhere in the paper. Closest result to this is Lemma 4.1 however, because the optimal solution is data dependent this lemma can not be used to conclude this. Example 2 from intro when comparing with other results on page 2: The authors essentially state that they have less restrictive assumptions in the form of the network or assumptions on the data (e.g. do not require Gaussianity). However as explained above the final conclusions are also significantly weaker than this prior literature so it’s a bit of apples vs oranges comparison. 2) Page 2 minor typos We study training problem -->we study the training problem In the regime training objective--> in the regime the training objective 3) the basic idea argument and derivative calculations in section 3 is identical to section 4 of Soltan...et al 4) Lemma 4.1 is nice, well done! That being said it does not seem easy to make it (1) quantifiable (2) apply to all W. It would also be nice to compare with Soudry et. al. 5) Argument on top of page 6 is incorrect as the global optima is data dependent and hence lemma 4.1 (which is for a fixed matrix) does not apply 6) Section 5 on page 6. Again the stated conclusion here that the iterates do not lead to singular W is much weaker than the claims made early on. 7) I haven’t had time yet to verify correctness of Lemmas 5.1, 5.2, and Lemma 5.3 in detail but if this holds is a neat argument to side step invertibility w.r.t. W, Nicely done! 8) What is the difference between Lemma 5.4 and Lemma 6.12 of Soltan...et al 9) Theorem 5.9. Given that the arguments in this paper do not show asymptotic convergence to a point where gradient vanishes and W is invertible why is the proposed algorithm better than a simple approach in which gradient descent is applied but a small amount of independent Gaussian noise is injected in every iteration over W. By adjusting the noise variance across time one can ensure a result of the kind in Theorem 5.9 (Of course in the absence of a quantifiable version of Lemma 4.1 which can apply to all W that result will also suffer from the same issues).
iclr_2018_SyxCqGbRZ
Sepsis is a life-threatening complication from infection and a leading cause of mortality in hospitals. While early detection of sepsis improves patient outcomes, there is little consensus on exact treatment guidelines, and treating septic patients remains an open problem. In this work we present a new deep reinforcement learning method that we use to learn optimal personalized treatment policies for septic patients. We model patient continuous-valued physiological time series using multi-output Gaussian processes, a probabilistic model that easily handles missing values and irregularly spaced observation times while maintaining estimates of uncertainty. The Gaussian process is directly tied to a deep recurrent Q-network that learns clinically interpretable treatment policies, and both models are learned together end-to-end. We evaluate our approach on a heterogeneous dataset of septic patients spanning 15 months from our university health system, and find that our learned policy could reduce patient mortality by as much as 8.2% from an overall baseline mortality rate of 13.3%. Our algorithm could be used to make treatment recommendations to physicians as part of a decision support tool, and the framework readily applies to other reinforcement learning problems that rely on sparsely sampled and frequently missing multivariate time series data.
This paper presents an important application of modern deep reinforcement learning (RL) methods to learning optimal treatments for sepsis from past patient encounters. From a methods standpoint, it offers nothing new but does synthesize best practice deep RL methods with a differentiable multi-task Gaussian Process (GP) input layer. This means that the proposed architecture can directly handle irregular sampling and missing values without a separate resampling step and can be trained end-to-end to optimize reward -- patient survival -- without a separate ad hoc preprocessing step. The experiments are thorough and the results promising. Overall, strong application work, which I appreciate, but with several flaws that I'd like the authors to address, if possible, during the review period. I'm perfectly willing to raise my score at least one point if my major concerns are addressed. QUALITY Although the core idea is derivative, the work is executed pretty well. Pros (+) and cons (-) are listed below: + discussion of the sepsis application is very strong. I especially appreciated the qualitative analysis of the individual case shown in Figure 4. While only a single anecdote, it provides insight into how the model might yield clinical insights at the bedside. + thorough comparison of competing baselines and clear variants -- though it would be cool to apply offline policy evaluation (OPE) to some of the standard clinical approaches, e.g., EGDT, discussed in the introduction. - "uncertainty" is one of the supposed benefits of the MTGP layer, but it was not at all clear how it was used in practice, other than -- perhaps -- as a regularizer during training, similar to data augmentation. - uses offline policy evaluation "off-the-shelf" and does not address or speculate the potential pitfalls or dangers of doing so. See "Note on Offline Policy Evaluation" below. - although I like the anecdote, it tells us very little about the overall policy. The authors might consider some coarse statistical analyses, similar to Figure 3 in Raghu, et al. (though I'm sure you can come up with more and better analyses!). - there are some interesting patterns in Table 1 that the authors do not discuss, such as the fact that adding the MGP layer appears to reduce expected mortality more (on average) than adding recurrences. Why might this be (my guess is data augmentation)? CLARITY Paper is well-written, for the most part. I have some nitpicks about the writing, but in general, it's not a burden to read. + core ideas and challenges of the application are communicated clearly - the authors did not detail how they chose their hyperparameters (number of layers, size of layers, whether to use dropout, etc.). This is critical for fully assessing the import of the empirical results. - the text in the figures are virtually impossible to read (too small) - the image quality in the figures is pretty bad (and some appear to be weirdly stretched or distorted) - I prefer the X-axis labels that Raghu uses in their Figure 4 (with clinically interpretable increments) over the generic +1, +2, etc., labels used in Figure 3 here Some nitpicks on the writing * too much passive voice. Example: third paragraph in introduction ("Despite the promising results of EGDT, concerns arose."). Avoid passive voice whenever possible. * page 3, sec. 2.2 doesn't flow well. You bounce back and forth between discussion of the Markov assumption and full vs. partial observability. 
Try to focus on one concept at a time (and the solution offered by a proposed approach). Note that RNNs do NOT relax the Markov assumption -- they simply do an end run around it by using distributed latent representations. ORIGINALITY This work scores relatively low in originality. It really just combines ideas from two MLHC 2017 papers [1][2]. One could read those two papers and immediately conclude this paper's findings (the GP helps; RL helps; GP + RL is the best). This paper adds few (if any) new insights. One way to address this would be to discuss in greater detail some potential explanations for why their results are stronger than those in Raghu and why the MTGP models outperform their simpler counterparts. Perhaps they could run some experiments to measure performance as a function of the number of MC samples (if performance grows with the number of samples, then it suggests that maybe it's largely a data augmentation effect). SIGNIFICANCE This paper's primary significance is that it provides further evidence that RL could be applied successfully to clinical data and problems, in particular sepsis treatment. However, this gets undersold (unsurprising, given the ML community's disdain for replication studies). It is also noteworthy that the MTGP gives such a large boost in performance for a relatively modest data set -- this property is worth exploring further, since clinical data are often small. However, again, this gets undersold. One recommendation I would make is that the authors directly compare the results in this paper with those in Raghu and point out, in particular, the confirmatory results. Interestingly, the shapes of the action vs. mortality rate plots (Figure 4 in Raghu, Figure 3 here) are quite similar -- that's not precisely replication, but it's comforting. NOTE ON OFFLINE POLICY EVALUATION This work has the same flaw that Raghu, et al., has -- neither justifies the use of offline policy evaluation. Both simply apply Jiang, et al.'s doubly robust approach [3] "off the shelf" without commenting on its accuracy in practice or discussing potential pitfalls (neither even considers [4], which seems to be superior in practice, especially with limited data). As far as I can tell (I'm not an RL expert), the DR approach carries stronger consistency guarantees and reduced variance but is still only as good as the data it is trained on, and clinical data is known to have significant bias, particularly with respect to treatment, where clinicians are often following formulaic guidelines. Can we trust the mortality estimates in Table 1? Why or why not? Why shouldn't I think that RL is basically guaranteed to outperform non-RL approaches under an evaluation that is itself an RL model learned from the same training data! While I'm willing to accept that this is the best we can do in this setting (we can't just try the learned policy on new patients!), I think this paper (and similar works, like Raghu, et al.) *must* provide a sober and critical discussion of its results, rather than simply applaud itself for getting the best score among competing approaches. REFERENCES [1] Raghu, et al. "Continuous State-Space Models for Optimal Sepsis Treatment - a Deep Reinforcement Learning Approach." MLHC 2017. [2] Futoma, et al. "An Improved Multi-Output Gaussian Process RNN with Real-Time Validation for Early Sepsis Detection." MLHC 2017. [3] Jiang, et al. "Doubly robust off-policy value evaluation for reinforcement learning." ICML 2016. [4] Thompson and Brunskill. "Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning." ICML 2016.
iclr_2018_By4HsfWAZ
Published as a conference paper at ICLR 2018 DEEP LEARNING FOR PHYSICAL PROCESSES: INCORPORATING PRIOR SCIENTIFIC KNOWLEDGE We consider the use of Deep Learning methods for modeling complex phenomena like those occurring in natural physical processes. With the large amount of data gathered on these phenomena, the data-intensive paradigm could begin to challenge more traditional approaches elaborated over the years in fields like maths or physics. However, despite considerable successes in a variety of application domains, the machine learning field is not yet ready to handle the level of complexity required by such problems. Using an example application, namely Sea Surface Temperature Prediction, we show how general background knowledge gained from the physics could be used as a guideline for designing efficient Deep Learning models. In order to motivate the approach and to assess its generality we demonstrate a formal link between the solution of a class of differential equations underlying a large family of physical phenomena and the proposed model. Experiments and a comparison with a series of baselines, including a state-of-the-art numerical approach, are then provided.
The paper ‘Deep learning for Physical Process: incorporating prior physical knowledge’ proposes to question the use of data-intensive strategies such as deep learning in solving physical inverse problems that are traditionally solved through assimilation strategies. They notably show how physical priors on a given phenomenon can be incorporated in the learning process and propose an application to the problem of estimating sea surface temperature directly from a given collection of satellite images. All in all the paper is very clear and interesting. The results obtained on the considered problem are clearly of great interest, especially when compared to state-of-the-art assimilation strategies such as the one of Béréziat. While the learning architecture is not original in itself, it is shown that a proper physical regularization greatly improves the performance. For these reasons I believe the paper has sufficient merits to be published at ICLR. That being said, I believe that some discussions could strengthen the paper: - Most classical variational assimilation schemes are stochastic in nature, notably by incorporating uncertainties in the observation or physical evolution models. It is still unclear how those uncertainties can be integrated in the model; - Assimilation methods are usually independent of the type of data at hand. It is not clear how a model learnt on one particular type of data transfers to other data sequences. Notably, the question of transfer and generalization is of high relevance here. Does the learnt model perform well on other datasets (for instance, acquired over a different region or at a distant time)? I believe this type of issue has to be examined for this type of approach to be widely used in inverse physical problems.
iclr_2018_HyEi7bWR-
Recurrent Neural Networks (RNNs) are designed to handle sequential data but suffer from vanishing or exploding gradients. Recent work on Unitary Recurrent Neural Networks (uRNNs) has addressed this issue and, in some cases, exceeded the capabilities of Long Short-Term Memory networks (LSTMs). We propose a simpler and novel update scheme to maintain orthogonal recurrent weight matrices without using complex-valued matrices. This is done by parametrizing with a skew-symmetric matrix using the Cayley transform. Such a parametrization is unable to represent matrices with eigenvalues of negative one, but this limitation is overcome by scaling the recurrent weight matrix by a diagonal matrix consisting of ones and negative ones. The proposed training scheme involves a straightforward gradient calculation and update step. In several experiments, the proposed scaled Cayley orthogonal recurrent neural network (scoRNN) achieves superior results with fewer trainable parameters than other unitary RNNs.
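A small NumPy sketch of the scaled Cayley parametrization described above: an orthogonal recurrent matrix is built as W = (I + A)^{-1}(I - A)D from a skew-symmetric A and a fixed diagonal D of +1/-1 entries. The variable names and the exact ordering of the factors are my reading of the abstract and should be checked against the paper.

import numpy as np

def scaled_cayley(V, d):
    # V: (n, n) unconstrained parameters (only the strictly lower triangle is used);
    # d: (n,) vector of +1/-1 entries defining the fixed scaling matrix D.
    A = np.tril(V, -1)
    A = A - A.T                                          # skew-symmetric: A^T = -A
    I = np.eye(A.shape[0])
    W = np.linalg.solve(I + A, (I - A) @ np.diag(d))     # (I + A)^{-1} (I - A) D
    return W

W = scaled_cayley(np.random.randn(8, 8), np.array([1] * 6 + [-1] * 2))
assert np.allclose(W @ W.T, np.eye(8))                   # Cayley transform of a skew-symmetric A is orthogonal

Because only the skew-symmetric parameters are updated and D is fixed, any gradient step applied to V keeps the reconstructed W exactly orthogonal by construction, without re-projection.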
This manuscript introduces a scheme for learning the recurrent parameter matrix in a neural network that uses the Cayley transform and a scaling weight matrix. This scheme leads to good performance on sequential data tasks and requires fewer parameters than other techniques. Comments: -- It’s not clear to me how D is determined for each test. Given the definition in Theorem 3.1 it seems like you would have to have some knowledge of how many eigenvalues in W you expect to be close to -1. -- For the copying and adding problem test cases, it might be useful to clarify or cite something clarifying that the failure mode RNNs run into with temporal ordering problems is an exploding gradient, rather than any other pathological training condition, just to make it clear why these experiments are relevant. -- The ylabel in Figure 1 is “Test Loss”, which I didn’t see defined. Is this test loss the cross entropy? If so, I think it would be more effective to label the plot with that. -- The plots in Figures 1 and 2 have different colors to represent the same set of techniques. I would suggest keeping a consistent color scheme. -- It looks like in Figure 1 the scoRNN is outperformed by the uRNN in the long run in spite of the scoRNN convergence being smoother, which should be clarified. -- It looks like in Figure 2 the scoRNN is outperformed by the LSTM across the board, which should be clarified. -- How is test set accuracy defined in section 5.3? Classifying digits? Recreating digits? -- When discussing Table 1, the manuscript mentions that scoRNN and Restricted-capacity uRNN have similar performance for 16k parameters and then states that scoRNN has the best test accuracy at 96.2%. However, there is no example for restricted-capacity uRNN with 69k parameters to show that the performance of restricted-capacity uRNN doesn't also increase similarly with more parameters. -- Overall it’s unclear to me how to completely determine the benefit of this technique over the others because, for each of the tests, different techniques may have superior performance. For instance, LSTM performs best in 5.2 and in 5.3 for the MNIST test accuracy. scoRNN and Restricted-capacity uRNN perform similarly for permuted MNIST test accuracy in 5.3. Finally, scoRNN seems to far outperform the other techniques in Table 2 on the TIMIT speech dataset. I don’t understand the significance of each test and why the relative performance of the techniques varies from one to the other. -- For example, the manuscript seems to be making the case that the scoRNN gradients are more stable than those of a uRNN, but all of the results are presented in terms of network accuracy and not gradient stability. You can sort of see that generally the convergence is more gradual for the scoRNN than the uRNN from the training graphs, but it'd be nice if there was an actual comparison of the stability of the gradients during training (as in Figure 4 of the Arjovsky 2016 paper being compared to, for instance) just to make it really clear.
iclr_2018_HJ8W1Q-0Z
We improve previous end-to-end differentiable neural networks (NNs) with fast weight memories. A gate mechanism updates fast weights at every time step of a sequence through two separate outer-product-based matrices generated by slow parts of the net. The system is trained on a complex sequence to sequence variation of the Associative Retrieval Problem with roughly 70 times more temporal memory (i.e. time-varying variables) than similar-sized standard recurrent NNs (RNNs). In terms of accuracy and number of parameters, our architecture outperforms a variety of RNNs, including Long Short-Term Memory, Hypernetworks, and related fast weight architectures.
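The abstract above describes gated, outer-product fast-weight updates generated by the slow parts of the network; the exact rule is not given here, so the following is only a generic Python sketch in that spirit (the decay term, the scalar gate and the single outer-product write are my assumptions, not the paper's update).

import numpy as np

def gated_fast_weight_step(F, key, value, gate, decay=0.95):
    # F: (d, d) fast weight matrix; key, value: (d,) vectors emitted by the slow network;
    # gate: value in [0, 1] deciding how much of the newly written memory to accept.
    F_written = decay * F + np.outer(value, key)      # outer-product write
    return gate * F_written + (1.0 - gate) * F        # gated interpolation with the old memory

def fast_read(F, h):
    # Reading applies the fast weights to the current hidden state.
    return np.tanh(F @ h)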
Summary The paper proposes a neural network architecture for associative retrieval based on fast weights with context-dependent gated updates. The architecture consists of a ‘slow’ network which provides weight updates for the ‘fast’ network which outputs the predictions of the system. The experiments show that the architecture outperforms a couple of related models on an associative retrieval problem. Quality The authors evaluate their architecture on an associative retrieval task which is similar to the variable assignment task used in Danihelka et al. (2016). The difference with the original task seems to be that the network is also trained to predict a ‘blank’ symbol which indicates that no prediction has been made. While this task is artificial, it does make sense in the context of what the authors want to show. The fact that the authors compare their results with three sensible baselines and perform some form of hyper-parameter search for all of the models, adds to the quality of the experiment. It is somewhat unfortunate that the paper doesn’t give more detail about the precise hyper-parameters involved and that there is no comparison with the associative LSTM from Danihelka et al. Did these hyper-parameters also include the sizes of the models? Otherwise it’s not very clear to me why the numbers of parameters are so much higher for the baseline models. While I think that this experiment is well done, it is unfortunate that it is the only experiment the authors carried out and the paper would be more impactful if there would have been results for a wider variety of tasks. It is commendable that the authors also discuss the memory requirements and increased wall clock time of the model. Clarity I found the paper hard to read at times and it is often not very clear what the most important differences are between the proposed methods and earlier ones in the literature. I’m not saying those differences aren’t there, but the paper simply didn’t emphasize them very well and I had to reread the paper from Ba et al. (2016) to get the full picture. Originality/Significance While the architecture is new, it is based on a combination of previous ideas about fast weights, hypernetworks and activation gating and I’d say that the novelty of the approach is average. The architecture does seem to work well on the associative retrieval task, but it is not clear yet if this will also be true for other types of tasks. Until that has been shown, the impact of this paper seems somewhat limited to me. Pros Experiments seem well done. Good baselines. Good results. Cons Hard to extract the most important changes from the text. Only a single synthetic task is reported.
iclr_2018_S17mtzbRb
Learning a better representation with neural networks is a challenging problem, which has been tackled from different perspectives in the past few years. In this work, we focus on learning a representation that would be useful in a clustering task. We introduce two novel loss components that substantially improve the quality of produced clusters, are simple to apply to arbitrary models and cost functions, and do not require a complicated training procedure. We perform an extensive set of experiments, supervised and unsupervised, and evaluate the proposed loss components on the two most common types of models, Recurrent Neural Networks and Convolutional Neural Networks, showing that the approach we propose consistently improves the quality of KMeans clustering in terms of mutual information scores and outperforms previously proposed methods.
Summary This paper proposes two regularizers that are intended to make the representations learned in the penultimate layer of a classifier conform more to the inherent structure in the data, rather than just the class structure enforced by the classifier. One regularizer encourages the weights feeding into the penultimate layer to be dissimilar and the other encourages the activations across samples (even if they belong to the same class) to be dissimilar. Pros - The proposed regularizers are able to separate out the classes inherent in the data, even if this information is not provided through class labels. This is validated on several datasets using visualizations as well as quantitative metrics based on mutual information. Cons - It is not explained why it makes sense to first convert the weight vectors into probability distributions by applying the softmax function, and then to measure distances using the KL divergence between the probability distributions. It should be explained more clearly whether there is a natural interpretation of the weight vectors as probability distributions. Otherwise it is not obvious why the distance between the weight vectors is measured the way it is. - Similarly, the ReLU activations are also first converted into probability distributions by applying a softmax. It should be explained why the model does this, as opposed to simply using dot products to measure similarity. - The model is not compared to simpler alternatives such as adding an orthogonality regularization on the weights, i.e., computing W^TW and making the diagonals close to 1 and all other terms 0. Similar regularizers can be applied for activation vectors as well. - The objective of this paper seems to be to produce representations that are easy to separate into clusters. This topic has a wealth of previous work. Of particular relevance are methods such as t-SNE [1], parametric t-SNE [2], and DEC [3]. The losses introduced in this paper are fairly straightforward. Therefore it would be good to compare to these baselines to show that a simple loss function is sufficient to achieve the objective. - Disentangling usually refers to disentangling factors of variation, for example, lighting, pose, and object identity, which affect the appearance of a data point. This is different from separability, which is the property of a representation that makes the presence of clusters evident. This paper seems to be about learning separable representations, whereas the title suggests that it is about disentangled ones. Quality The design choices made in the paper (such as the choice of distance function) are not well explained. Also, given that the modifications introduced are quite simple, the paper could be improved by doing more thorough comparisons to other baselines. Clarity The paper is easy to follow. Originality The novel aspect of the paper is the way distance is measured by converting the weights (and activations) to probability distributions and using KL divergence to measure distance. However, it is not explained what motivated this choice. Significance The objective of this model is to produce representations that are separable, which is of general interest. However, given the wealth of previous work done in clustering, this paper would only be impactful if it compares to other hard baselines and shows clear advantages. [1] van der Maaten, Laurens and Hinton, Geoffrey. Visualizing data using t-SNE. JMLR, 2008. [2] van der Maaten, Laurens. Learning a parametric embedding by preserving local structure.
In International Conference on Artificial Intelligence and Statistics, 2009. [3] Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. ICML 2016.
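For concreteness, here is a rough PyTorch sketch of (a) the orthogonality baseline suggested in the review and (b) one plausible form of the softmax-plus-KL dissimilarity term; the hinge form, margin and epsilon are my guesses rather than the paper's exact losses.

import torch

def orthogonality_penalty(W):
    # Push W^T W towards the identity: near-orthonormal columns, the simple baseline suggested above.
    G = W.t() @ W
    return ((G - torch.eye(G.shape[0], device=W.device)) ** 2).sum()

def softmax_kl_dissimilarity(W, margin=1.0, eps=1e-8):
    # Treat each row of W as a distribution via softmax and penalise pairs whose KL divergence is small.
    P = torch.softmax(W, dim=1)
    logP = torch.log(P + eps)
    kl = (P[:, None, :] * (logP[:, None, :] - logP[None, :, :])).sum(dim=-1)   # kl[i, j] = KL(P_i || P_j)
    off_diag = ~torch.eye(W.shape[0], dtype=torch.bool, device=W.device)
    return torch.clamp(margin - kl[off_diag], min=0).mean()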
iclr_2018_BJvVbCJCb
We propose a neural clustering model that jointly learns both latent features and how they cluster. Unlike similar methods our model does not require a predefined number of clusters. Using a supervised approach, we agglomerate latent features towards randomly sampled targets within the same space whilst progressively removing the targets until we are left with only targets which represent cluster centroids. To show the behavior of our model across different modalities we apply our model on both text and image data and achieve very competitive results on MNIST against methods that require a predefined number of clusters. We also provide results against baseline models for fashion-MNIST, the 20 newsgroups dataset, and a Twitter dataset we ourselves create.
This paper proposes a neural clustering model following the "Noise as Target" technique. Combined with a reconstruction objective and a "delete-and-copy" trick, it is able to cluster the data points into different groups and is shown to give competitive results on different benchmarks. It is nice that the authors tried to extend "noise as target" to the clustering problem and proposed the simple "delete-and-copy" technique to group different data points into clusters. Even though a little bit ad hoc, it seems promising based on the experimental results. However, it is unclear to me why it is necessary to have the optimal matching here and why the simple nearest target would not work. After all, the cluster membership is found based on the nearest target in the test stage. Also, the authors should provide a more detailed description regarding the scheduling of the alpha and lambda values during training, and how sensitive the final clustering performance is to it. The authors cite not requiring "a predefined number of clusters" as one of the contributions, but the tuning of alpha seems more concerning. I like that the authors experimented with different benchmarks, but the lack of comparisons with existing deep clustering techniques is definitely a weakness. The only baseline comparison provided is k-means clustering, and the comparisons were somewhat unfair. For all the text datasets, there were no comparisons with k-means on the features learned from the auto-encoders or clusterings learned with a similar number of clusters. The comparisons for the Twitter dataset were even based on character-level versus word-level features. It would be more convincing to show the superiority of the proposed method over existing ones on the same ground. Some other issues regarding quantitative results: - In Table 1, there are 152 clusters for the 10-d latent space after convergence, but there are 61 clusters for the 10-d latent space in Table 2 for the same MNIST dataset. Are they based on different alpha and lambda values? - Why does NATAC perform much better than NATAC-k? Would NATAC-k need a different number of clusters than the one from NATAC? The number of centroids learned from NATAC may not be good for k-means clustering. - It seems like the performance of AE-k increases with the dimensionality of the latent space for Fashion-MNIST. Would AE-k beat NATAC with a different dimensionality of latent space and k?
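To make the question raised above about optimal matching versus the simple nearest-target rule concrete, here is a small SciPy sketch; the Euclidean cost and the batch-level Hungarian matching are my simplifications of the Noise-as-Targets idea, not the paper's exact procedure.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def assign_targets(Z, T, optimal=True):
    # Z: (n, d) latent codes; T: (n, d) fixed targets.
    D = cdist(Z, T)                              # pairwise Euclidean costs
    if optimal:
        rows, cols = linear_sum_assignment(D)    # one-to-one matching: every target used exactly once
        assignment = np.empty(len(Z), dtype=int)
        assignment[rows] = cols
        return assignment
    return D.argmin(axis=1)                      # nearest target: many codes may collapse onto one target

The last line makes the reviewer's question concrete: with nearest-target assignment nothing prevents all codes from collapsing onto a few targets during training, which the one-to-one matching rules out by construction.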
iclr_2018_ry80wMW0W
HIERARCHICAL SUBTASK DISCOVERY WITH NON-NEGATIVE MATRIX FACTORIZATION Hierarchical reinforcement learning methods offer a powerful means of planning flexible behavior in complicated domains. However, learning an appropriate hierarchical decomposition of a domain into subtasks remains a substantial challenge. We present a novel algorithm for subtask discovery, based on the recently introduced multitask linearly-solvable Markov decision process (MLMDP) framework. The MLMDP can perform never-before-seen tasks by representing them as a linear combination of a previously learned basis set of tasks. In this setting, the subtask discovery problem can naturally be posed as finding an optimal low-rank approximation of the set of tasks the agent will face in a domain. We use non-negative matrix factorization to discover this minimal basis set of tasks, and show that the technique learns intuitive decompositions in a variety of domains. Our method has several qualitatively desirable features: it is not limited to learning subtasks with single goal states, instead learning distributed patterns of preferred states; it learns qualitatively different hierarchical decompositions in the same domain depending on the ensemble of tasks the agent will face; and it may be straightforwardly iterated to obtain deeper hierarchical decompositions.
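A minimal sketch of the factorization step using scikit-learn's NMF. The paper optimizes a family of Bregman divergences, whereas the default Frobenius objective below is only one member of that family; the stand-in data and the choice of k are mine, purely for illustration.

import numpy as np
from sklearn.decomposition import NMF

Z = np.random.rand(100, 20)            # stand-in desirability matrix: (num_states, num_base_tasks), entries >= 0
k = 5                                  # number of subtasks to discover
model = NMF(n_components=k, init='nndsvda', max_iter=500)
D = model.fit_transform(Z)             # (num_states, k): each column is a distributed subtask over states
W = model.components_                  # (k, num_base_tasks): how each base task mixes the subtasks
approx = D @ W                         # low-rank approximation of Z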
The present paper extends previous work by Saxe et al. (2017) that considered multitask learning in RL and proposed a hierarchical learner based on concurrent execution of many actions in parallel. That framework made heavy use of the linearly solvable Markov decision process (LMDP) framework proposed by Todorov, which allows for closed-form solutions of the control due to the linearity of the Bellman optimality equations. The simple form of the solutions allows them to be composed naturally, and to form deep hierarchies through iteration. The framework is restricted to domains where the transitions are fixed but the rewards may change between tasks. A key role is played in the formalism by the so-called ‘free dynamics’ that serves to regularize the action selected. The present paper goes beyond Saxe et al. in several ways. First, it renders the process of deep hierarchy formation automatic, by letting the algorithm determine the new passive dynamics at each stage, as well as the subtasks themselves. The process of subtask discovery is done via non-negative matrix factorization of the matrix of desirability functions, which is determined by the solution of the LMDPs with exponentiated rewards. Since the matrix is non-negative, the authors propose a non-negative factorization into a product of non-negative low-rank matrices that capture its structure at a more abstract level of detail. A family of optimization criteria for this process is suggested, based on a subclass of Bregman divergences. Interestingly, the subtasks discovered correspond to distributions over states, rather than single states as in many previous approaches. The authors present several demonstrations of the intuitive decomposition achieved. A nice feature of the present framework is that a fully autonomous scheme (given some assumed parameter values) is demonstrated for constructing the full hierarchical decomposition. I found this to be an interesting approach to hierarchical multitask learning, augmenting a previous approach with several steps leading to increased autonomy, an essential capability for any learning agent. Both the intuition behind the construction and the application to test problems reveal novel insights. The utilization of the analytic framework of LMDPs facilitates understanding and efficient algorithms. I would appreciate the authors’ clarification of several issues. First, the LMDP does not seem to be completely general, so I would appreciate a description of the limitations of this framework. The description of the elbow-joint behavior around eq. (4) was not clear to me, please expand. The authors do not state any direct or indirect extensions – please do so. Please specify how many free parameters the algorithm requires, and what is a reasonable way to select them. Finally, it would be instructive to understand where the algorithm may fail.
iclr_2018_HJMN-xWC-
Convolutional neural networks and recurrent neural networks are designed with network structures well suited to the nature of spatial and sequential data respectively. However, the structure of standard feed-forward neural networks (FNNs) is simply a stack of fully connected layers, regardless of the feature correlations in the data. In addition, the number of layers and the number of neurons are manually tuned on validation data, which is time-consuming and may lead to suboptimal networks. In this paper, we propose an unsupervised structure learning method for learning parsimonious deep FNNs. Our method determines the number of layers, the number of neurons at each layer, and the sparse connectivity between adjacent layers automatically from data. The resulting models are called Backbone-Skippath Neural Networks (BSNNs). Experiments on 17 tasks show that, in comparison with FNNs, BSNNs can achieve better or comparable classification performance with much fewer parameters. The interpretability of BSNNs is also shown to be better than that of FNNs.
The main strengths of the paper are the supporting experimental results in comparison to plain feed-forward networks (FNNs). The proposed method is focused on discovering sparse neural networks. The experiments show that sparsity is achieved and still the discovered sparse networks have comparable or better performance compared to dense networks. The main weakness of the paper is lack of cohesion in contributions and difficulty in delineating the scope of their proposed approach. Below are some suggestions for improving the paper: Can you enumerate the paper’s contributions and specify the scope of this work? Where is this method most applicable and where is it not applicable? Why is the paper focused on these specific contributions? What problem does this particular set of contributions solve that is not solvable by the baselines? There needs to be a cohesive story that puts the elements together. For example, you explain how the algorithm for creating the backbone can use unsupervised data. On the other hand, to distinguish this work from the baselines you mention that this work is the first to apply the method to supervised learning problems. The motivation section in the beginning of the paper motivates using the backbone structure to get a sparse network. However, it does not adequately motivate the skip-path connections or applications of the method to supervised tasks. Is this work extending the applicability of baselines to new types of problems? Or is this work focused on improving the performance of existing methods? Answers to these questions can automatically determine suitable experiments to run as well. It's not clear if Pruned FNNs are the most suitable baseline for evaluating the results. Can your work be compared experimentally with any of the constructive methods from the related work section? If not, why? When contrasting this work with existing approaches, can you explain how existing work builds toward the same solution that you are focusing on? It would be more informative to explain how the baselines contribute to the solution instead of just citing them and highlighting their differences. Regarding the experimental results, is there any insight on why the dense networks are falling short? For example, if it is due to overfitting, is there a correlation between performance and size of FNNs? Do you observe a similar performance vs FNNs in existing methods? Whether this good performance is due to your contributions or due to effectiveness of the baseline algorithm, proper analysis and discussion is required and counts as useful research contribution.
iclr_2018_Hyp-JJJRW
Deep networks have shown great performance in classification tasks. However, the parameters learned by the classifier networks usually discard stylistic information of the input, in favour of information strictly relevant to classification. We introduce a network that has the capacity to do both classification and reconstruction by adding a "style memory" to the output layer of the network. We also show how to train such a neural network as a deep multi-layer autoencoder, jointly minimizing both classification and reconstruction losses. The generative capacity of our network demonstrates that the combination of style-memory neurons with the classifier neurons yields good reconstructions of the inputs when the classification is correct. We further investigate the nature of the style memory, and how it relates to composing digits and letters.
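A tiny PyTorch sketch of the joint objective described above: a classification loss on the class units plus a reconstruction loss from the autoencoder path. The MSE form of the reconstruction term and the weighting alpha are my assumptions.

import torch.nn.functional as F

def joint_loss(class_logits, targets, reconstruction, inputs, alpha=1.0):
    # class_logits come from the classifier units; reconstruction is decoded from
    # the concatenation of the classifier units and the style memory.
    return F.cross_entropy(class_logits, targets) + alpha * F.mse_loss(reconstruction, inputs)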
This paper proposes to train a classifier neural network not just to classify, but also to reconstruct a representation of its input, in order to factorize the class information from the appearance (or "style" as used in this paper). This is done by first using unsupervised pretraining and then fine-tuning using a weighted combination of the regular multinomial NLL loss and a reconstruction loss at the last hidden layer. Experiments on MNIST are provided to analyse what this approach learns. Unfortunately, I fail to see a significantly valuable contribution from this work. First, the paper could do a better job at motivating the problem being addressed. Why is it important to separate class from style? Should it allow better classification performance? If so, it's never measured in this work. If that's not the motivation, then what is it? Second, all experiments were conducted on the MNIST dataset. In 2017, most would expect experiments on at least one other, more complex dataset, to trust any claims on a method. Finally, the results are not particularly impressive. I don't find the reconstructions demonstrated particularly compelling (they are generally pretty different from the original input). Also, that the "style" representation contains less class information (and I'd say only slightly less; in Figure 7 b and d, we still see a lot of same-class nearest neighbors) is not exactly a surprising result. And the results of Figure 9, showing poor reconstructions when changing the class representation, essentially demonstrate that the method isn't able to factorize class and style successfully. The interpolation results of Figure 11 are also underwhelming, though possibly mostly because the reconstructions are in general not great. But most importantly, none of these results are measured in a quantitative way: they are all qualitative, and thus subjective. For all these reasons, I'm afraid I must recommend this paper be rejected.
iclr_2018_HyH9lbZAW
Published as a conference paper at ICLR 2018 VARIATIONAL MESSAGE PASSING WITH STRUCTURED INFERENCE NETWORKS Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE). Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE. Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference. By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods.
This paper presents a variational inference algorithm for models that contain deep neural network components and probabilistic graphical model (PGM) components. The algorithm implements natural-gradient message-passing where the messages automatically reduce to stochastic gradients for the non-conjugate neural network components. The authors demonstrate the algorithm on a Gaussian mixture model and a linear dynamical system, where they show that the proposed algorithm outperforms previous algorithms. Overall, I think that the paper proposes some interesting ideas; however, in its current form I do not think that the novelty of the contributions is clearly presented, and they are not thoroughly evaluated in the experiments. The authors propose a new variational inference algorithm that handles models with deep neural network and PGM components. However, it appears that the authors rely heavily on the work of (Khan & Lin, 2017) that actually provides the algorithm. As far as I can tell this paper fits inference networks into the algorithm proposed in (Khan & Lin, 2017), which boils down to i) using an inference network to generate potentials for a conditionally-conjugate distribution and ii) introducing new PGM parameters to decouple the inference network from the model parameters. These ideas are a clever solution to work inference networks into the message-passing algorithm of (Khan & Lin, 2017), but I think the authors may be overselling these ideas as a brand new algorithm. I think if the authors sold the paper as an alternative to (Johnson, et al., 2016) that doesn't suffer from the implicit gradient problem, the paper would fit into the existing literature better. Another concern that I have is that there are a lot of conditional-conjugacy assumptions baked into the algorithm that the authors only mention at the end of the presentation of their algorithm. Additionally, the authors briefly state that they can handle non-conjugate distributions in the model by just using conjugate distributions in the variational approximation. Though one could do this, the authors do not adequately show that one should, or that one can do this without suffering a lot of error in the posterior approximation. I think that without an experiment the small section on non-conjugacy should be removed. Finally, I found that the experimental evaluation does not thoroughly demonstrate the advantages and disadvantages of the proposed algorithm. The algorithm was applied to the two models originally considered in (Johnson, et al., 2016) and the proposed algorithm was shown to attain lower mean-square errors for the two models. The experiments do not, however, demonstrate why the algorithm is performing better. For instance, is the (Johnson, et al., 2016) algorithm suffering from the implicit gradient problem? It also would have been great to have considered a model that the (Johnson, et. al., 2016) algorithm would not work well on or could not be applied to, in order to show the added applicability of the proposed algorithm. I also have some minor comments on the paper: - There are a lot of typos. - The first two sentences of the abstract do not really contribute anything to the paper. What is a powerful model? What is a powerful algorithm? - DNN was used in Section 2 without being defined. - Using p() as an approximate distribution in Section 3 is confusing notation because p() was used for the distributions in the model. - How is the covariance matrix that the inference network produces parameterized?
- The phrases "first term of the inference network" are not clear. Just use The DNN term and the PGM term of the inference networks, and better still throw in a reference to Eq. (4). - The term "deterministic parameters" was used and never introduced. - At the bottom of page 5 the extension to the non-conjugate case should be presented somewhere (probably the appendix) since the fact that you can do this is a part of your algorithm that's important.
iclr_2018_H1DGha1CZ
In this paper, we turn our attention to the interworking between the activation functions and batch normalization, which is a virtually mandatory technique for training deep networks currently. We propose the activation function Displaced Rectifier Linear Unit (DReLU) by conjecturing that extending the identity function of ReLU to the third quadrant enhances compatibility with batch normalization. Moreover, we used statistical tests to compare the impact of using distinct activation functions (ReLU, LReLU, PReLU, ELU, and DReLU) on the learning speed and test accuracy performance of standardized state-of-the-art VGG and Residual Network models. These convolutional neural networks were trained on CIFAR-100 and CIFAR-10, the most commonly used deep learning computer vision datasets. The results showed DReLU sped up learning in all models and datasets. Besides, statistically significant performance assessments (p < 0.05) showed DReLU enhanced the test accuracy presented by ReLU in all scenarios. Furthermore, DReLU showed better test accuracy than any other tested activation function in all experiments with one exception, in which case it presented the second best performance. Therefore, this work demonstrates that it is possible to increase performance by replacing ReLU with an enhanced activation function.
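The activation itself is a one-liner; the sketch below assumes DReLU(x) = max(x, -delta) with the displacement delta treated as a hyperparameter (the value 0.05 is purely illustrative, not the paper's setting).

import numpy as np

def drelu(x, delta=0.05):
    # Identity above -delta, constant -delta below: the identity is extended into the third quadrant.
    return np.maximum(x, -delta)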
This paper proposes an activation function, called displaced ReLU (DReLU), to improve the performance of CNNs that use batch normalization. Compared to ReLU, DReLU cut the identity function at a negative value rather than the zero. As a result, the activations outputted by DReLU can have a mean closer to 0 and a variance closer to 1 than the standard ReLU. The DReLU is supposed to remedy the problem of covariate shift better. The presentation of the paper is clear. The proposed method shows encouraging results in a controlled setting (i.e., all other units, like dropout, are removed). Statistical tests are performed for many of the experimental results, which is solid. However, I have some concerns. 1) As DReLU(x) = max{-\delta, x}, what is the optimal strategy to determine \delta? If it is done by hyperparameter tuning with cross-validation, the training cost may be too high. 2) I believe the control experiments are encouraging, but I do not agree that other techniques like Dropouts are not useful. Using DReLU to improve the state-of-art neural network in an uncontrolled setting is important. The arguments for skipping this experiments are respectful, but not convincing enough. 3) Batch normalization is popular, especially for the convolutional neural networks. However, its application is not universal, which can limit the use of the proposed DReLU. It is a minor concern, anyway.
iclr_2018_SJtfOEn6-
Recent efforts on training light-weight binary neural networks offer promising execution/memory efficiency. This paper introduces ResBinNet, which is a composition of two interlinked methodologies aiming to address the slow convergence speed and limited accuracy of binary convolutional neural networks. The first method, called residual binarization, learns a multi-level binary representation for the features within a certain neural network layer. The second method, called temperature adjustment, gradually binarizes the weights of a particular layer. The two methods jointly learn a set of soft-binarized parameters that improve the convergence rate and accuracy of binary neural networks. We corroborate the applicability and scalability of ResBinNet by implementing a prototype hardware accelerator. The accelerator is reconfigurable in terms of the numerical precision of the binarized features, offering a trade-off between runtime and inference accuracy.
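A minimal NumPy sketch of the residual binarization scheme as I read it: each level stores one sign bit of the remaining residual, scaled by a learned full-precision gamma (the gamma values below are placeholders for illustration).

import numpy as np

def residual_binarize(x, gammas):
    # l-level encoding: x is approximated by sum_i gamma_i * sign(r_i), where r_i is the
    # residual left after subtracting the previous levels; only the signs need to be stored.
    approx = np.zeros_like(x, dtype=np.float32)
    residual = x.astype(np.float32)
    for g in gammas:
        e = g * np.sign(residual)
        approx += e
        residual -= e
    return approx

x = np.random.randn(4)
print(residual_binarize(x, gammas=[0.8, 0.4, 0.2]))   # 3-level encoding, i.e. 3 bits per value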
This paper proposes ResBinNet, with residual binarization and temperature adjustment. It is a reconfigurable binarization method for neural networks, and it improves the convergence rate during training. I appreciate a lot that the authors were able to validate their idea by building a prototype of an actual hardware accelerator. I am wondering what the values of the \gamma’s in the residual binarization are after learning. What is the advantage over having only one \gamma, with the rest being just 1/2*\gamma, 1/4*\gamma, etc.? The latter is an important baseline for residual binarization because it corresponds to the widely used fixed-point format for real numbers. If you can show some results that residual encoding is better than having {\gamma, 1/2*\gamma, 1/4*\gamma, …} (which contains only one \gamma), it would validate the need for this relatively complex binarization scheme. Otherwise, we can just use l-bit fixed-point multiplications, which are off-the-shelf and already highly optimized on much hardware. For the temperature adjustment, modifying the tanh() scale already has a long history, for example, http://yann.lecun.com/exdb/publis/pdf/lecun-89.pdf page 7, which is exactly the same form as in this paper. Adjusting the slope during training has also been explored in some straight-through estimator approaches, such as https://arxiv.org/pdf/1609.01704.pdf. In addition, having this residual binarization and adjustable tanh() already adds extra computation to training. Could you provide some data comparing the computation before and after adding residual binarization and temperature adjustment? The authors claim that ResBinNet converges faster during training, and Table 2 shows that ResBinNet needs just 1/10 of the training epochs needed by BinaryNet. However, I don’t find this comparison very fair. Given that the accuracy RBN gets is much lower than BinaryNet's, readers might suspect that the other two models already reach ResBinNet’s accuracy at an earlier training epoch (like epoch 50) and just take all the remaining epochs to reach a higher accuracy. On the other hand, this comparison is not fair to ResBinNet either: the model size was much larger in BinaryNet than in ResBinNet. So it makes sense to train a BinaryNet or FINN of the same size and then compare the training curves. Lastly, in the CIFAR-10 1-level case, it didn’t outperform FINN, which has the same size. Given these experiments, I can’t draw any convincing conclusion. Apart from that, there is an error in Figure 7 (b), where the baseline has an accuracy of 80.1% but its corresponding bar is lower than that of RBN1, which has an accuracy of 76%.
iclr_2018_H1meywxRW
Published as a conference paper at ICLR 2018 DCN+: MIXED OBJECTIVE AND DEEP RESIDUAL COATTENTION FOR QUESTION ANSWERING Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning. The objective uses rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective. In addition to the mixed objective, we improve dynamic coattention networks (DCN) with a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that require the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state-of-the-art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1.
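A rough PyTorch sketch of a mixed objective in the spirit described above: cross-entropy on the gold span plus a self-critical term that uses the greedy decode's F1 as a baseline. The mixing weight and the exact choice of baseline are my assumptions, not necessarily the paper's.

import torch

def mixed_objective(logp_gold, logp_sampled, f1_sampled, f1_greedy, gamma=0.5):
    # logp_gold / logp_sampled: log-probabilities of the gold and sampled answer spans;
    # f1_sampled / f1_greedy: word-overlap F1 of the sampled span and of the greedy decode.
    ce = -logp_gold.mean()
    advantage = (f1_sampled - f1_greedy).detach()       # self-critical baseline
    rl = -(advantage * logp_sampled).mean()
    return gamma * ce + (1.0 - gamma) * rl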
The authors of this paper propose some extensions to the Dynamic Coattention Networks model presented last year at ICLR. First, they modify the architecture of the answer selection model by adding an extra coattention layer to improve the capture of dependencies between question and answer descriptions. The other main modification is to train their DCN+ model using both cross entropy loss and F1 score (using RL supervision) in order to reward the system for making partial matching predictions. Empirical evaluations conducted on the SQuAD dataset indicate that this architecture achieves an improvement of at least 3%, both on F1 and exact match accuracy, over other comparable systems. An ablation study clearly shows the contribution of the deep coattention mechanism and mixed objective training on the model performance. The paper is well written, ideas are presented clearly, and the experiments section provides interesting insights such as the impact of RL on system training or the capability of the model to handle long questions and/or answers. It seems to me that this paper is a significant contribution to the field of question answering systems.
iclr_2018_rytNfI1AZ
Published as a conference paper at ICLR 2018 TRAINING WIDE RESIDUAL NETWORKS FOR DEPLOYMENT USING A SINGLE BIT FOR EACH WEIGHT For fast and energy-efficient deployment of trained deep neural networks on resource-constrained embedded hardware, each learned weight parameter should ideally be represented and stored using a single bit. Error rates usually increase when this requirement is imposed. Here, we report large improvements in error rates on multiple datasets, for deep convolutional neural networks deployed with 1-bit-per-weight. Using wide residual networks as our main baseline, our approach simplifies existing methods that binarize weights by applying the sign function in training; we apply scaling factors for each layer with constant unlearned values equal to the layer-specific standard deviations used for initialization. For CIFAR-10, CIFAR-100 and ImageNet, and models with 1-bit-per-weight requiring less than 10 MB of parameter memory, we achieve error rates of 3.9%, 18.5% and 26.0% / 8.5% (Top-1 / Top-5) respectively. We also considered MNIST, SVHN and ImageNet32, achieving 1-bit-per-weight test results of 0.27%, 1.9%, and 41.3% / 19.1% respectively. For CIFAR, our error rates halve previously reported values, and are within about 1% of our error rates for the same network with full-precision weights. For networks that overfit, we also show significant improvements in error rate by not learning batch normalization scale and offset parameters. This applies to both full precision and 1-bit-per-weight networks. Using a warm-restart learning-rate schedule, we found that training for 1-bit-per-weight is just as fast as full-precision networks, with better accuracy than standard schedules, and achieved about 98%-99% of peak performance in just 62 training epochs for CIFAR-10/100. For full training code and trained models in MATLAB, Keras and PyTorch see https://github.com/McDonnell-Lab/1-bit-per-weight/.
The authors propose to train neural networks with 1-bit weights by storing and updating full-precision weights in training, but using the reduced 1-bit version of the network to compute predictions and gradients in training. They add a few tricks to keep the optimization numerically efficient. Since more and more neural networks are now deployed to end users, the authors make an interesting contribution to a very relevant question. The approach is precisely described, although the text could sometimes be a bit clearer (for example, the text contains many important references to later sections). The authors include a few other methods for comparison, but I think it would be very helpful to also include some methods that use a completely different approach to reduce the memory footprint. For example, weight pruning methods sometimes can give compression rates of around 100 while the 1-bit methods by definition are limited to a compression rate of 32. Additionally, for practical applications, methods like weight pruning might be more promising since they reduce both the memory load and the computational load. Side remark: the manuscript has quite a few typos.
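For readers unfamiliar with this family of methods, here is a minimal numpy sketch of the training mechanics as I understand them from the abstract (shadow full-precision weights, a fixed per-layer scale equal to the initialization standard deviation, straight-through updates); the toy regression problem and all constants are my own, purely illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_samples = 64, 512
    X = rng.standard_normal((n_samples, n_in))
    y = X @ rng.standard_normal(n_in)

    sigma = np.sqrt(2.0 / n_in)                        # He-style init std, reused as the fixed layer scale
    w_fp = sigma * rng.standard_normal(n_in)           # full-precision "shadow" weights (training only)
    lr = 1e-2
    for step in range(2000):
        w_b = sigma * np.where(w_fp >= 0, 1.0, -1.0)   # 1-bit weights actually used in the forward pass
        err = X @ w_b - y
        grad = X.T @ err / n_samples                   # gradient computed with the binarized weights ...
        w_fp -= lr * grad                              # ... applied to the shadow weights (straight-through)
    w_deploy = sigma * np.where(w_fp >= 0, 1.0, -1.0)  # only signs (plus one scalar per layer) are stored
    print("MSE with 1-bit weights:", np.mean((X @ w_deploy - y) ** 2))

The sketch only illustrates the update rule; a single 1-bit linear layer obviously cannot fit an arbitrary target well, and the real method applies this per layer inside wide residual networks.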
iclr_2018_r1YUtYx0-
Workshop track -ICLR 2018 ENSEMBLE ROBUSTNESS AND GENERALIZATION OF STOCHASTIC DEEP LEARNING ALGORITHMS The question why deep learning algorithms generalize so well has attracted increasing research interest. However, most of the well-established approaches, such as hypothesis capacity, stability or sparseness, have not provided complete explanations (Zhang et al., 2017;Kawaguchi et al., 2017). In this work, we focus on the robustness approach (Xu & Mannor, 2012), i.e., if the error of a hypothesis will not change much due to perturbations of its training examples, then it will also generalize well. As most deep learning algorithms are stochastic (e.g., Stochastic Gradient Descent, Dropout, and Bayes-by-backprop), we revisit the robustness arguments of Xu & Mannor, and introduce a new approach -ensemble robustness -that concerns the robustness of a population of hypotheses. Through the lens of ensemble robustness, we reveal that a stochastic learning algorithm can generalize well as long as its sensitiveness to adversarial perturbations is bounded in average over training examples. Moreover, an algorithm may be sensitive to some adversarial examples (Goodfellow et al., 2015) but still generalize well. To support our claims, we provide extensive simulations for different deep learning algorithms and different network architectures exhibiting a strong correlation between ensemble robustness and the ability to generalize.
Summary: This paper presents an adaptation of the algorithmic robustness of Xu & Mannor '12 to a notion of robustness for ensembles of hypotheses, allowing the authors to study the generalization ability of stochastic learning algorithms for deep learning networks. Generalization can be established as long as the sensitivity of the learning algorithm to adversarial perturbations is bounded. The paper presents learning bounds and an experimental study showing a correlation between empirical ensemble robustness and generalization error. Quality: Globally correct Clarity: Paper clear Originality: Limited with respect to the original definition of algorithmic robustness Significance: The paper provides a new theoretical analysis for stochastic learning of deep networks but the contribution is limited in its present form. Pros: -New theoretical study for DL algorithms -Focus on adversarial learning Cons -I find the contribution a bit limited -Some aspects need to be made more precise / better argued -The experimental study could have been more complete Comments: --------- *About the proposed framework. The idea of taking a max over instances of partition C_i (Def 3) already appeared in the proof of results of Xu & Mannor, and the originality of the contribution is essentially to add an expectation over the result of the algorithm. In Xu & Mannor's paper, there is a notion of weak robustness that is proved to be necessary and sufficient to generalize. The contribution of the authors would be stronger if they could discuss an equivalent notion in their context. The partition considered by the framework is never discussed nor taken into account, while this is an important aspect of the analysis. In particular, there is a tradeoff between \epsilon(s) and K: using a very fine tiling it is always possible to have a very small \epsilon(s) at the price of a very large K (if you think of a covering number, K can be exponential in the size of the tiling and hard to calculate). In the context of adversarial examples, this is actually important because it can be very likely that the adversarial example belongs to a partition set different from the set the original example belongs to. In this context, I am not sure I understand the validity of the framework because we can then compare two instances from different sets, which is outside of the framework. So I wonder if the way the adversarial examples are generated should be taken into account in the definition of the partition. Additionally, the result is given in the context of IID data, and with a multinomial distribution according to the partition set - adversarial generation can violate this IID assumption. In the experimental setup, the partition set is not explained and we have no guarantee that we compare instances of the same set. Nothing is said about $r$ and its impact on the results. This is a clear weakness of the experimental analysis. In the experimental setup, as far as I understand it, I find the term "generalization error" a bit abusive since it is actually the error on the test set. Using cross-validation or considering multiple training/test sets would be more appropriate. In the proof of Lemma 2, I am not sure I understand where the term 1/n comes from in the term 2M^2/2 (before "We then bound the term H as follows")
iclr_2018_BkPrDFgR-
The success of Deep Learning and its potential use in many important safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models to behave as black boxes and the theoretical hardness of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure. Unfortunately, most of these approaches test their algorithms without comparison with other approaches. As a result, the pros and cons of the different algorithms are not well understood. Motivated by the need to accelerate progress in this very important area, we investigate the trade-offs of a number of different approaches based on Mixed Integer Programming, Satisfiability Modulo Theory, as well as a novel method based on the Branch-and-Bound framework. We also propose a new data set of benchmarks, in addition to a collection of previously released testcases that can be used to compare existing methods. Our analysis not only allows a comparison to be made between different strategies; the comparison of results from different solvers also revealed implementation bugs in published methods. We expect that the availability of our benchmark and the analysis of the different approaches will allow researchers to develop and evaluate promising approaches for making progress on this important topic.
The paper compares some recently proposed methods for validation of properties of piece-wise linear neural networks and claims to propose a novel method for the same. Unfortunately, the proposed "branch and bound method" does not explain how to implement the "bound" part ("compute lower bound") -- and has been used several times in the same application, incl.: Ruediger Ehlers. Planet. https://github.com/progirep/planet, Chih-Hong Cheng, Georg Nuhrenberg, and Harald Ruess. Maximum resilience of artificial neural networks. Automated Technology for Verification and Analysis Alessio Lomuscio and Lalit Maganti. An approach to reachability analysis for feed-forward relu neural networks. arXiv:1706.07351 Specifically, the authors say: "In our experiments, we use the result of minimising the variable corresponding to the output of the network, subject to the constraints of the linear approximation introduced by Ehlers (2017a)" which sounds a bit like using linear programming relaxations, which is what the approaches using branch and bound cited above use. If that is the case, the paper does not have any original contribution. If that is not the case, the authors may have some contribution to make, but have not made it in this paper, as it does not explain the lower bound computation other than the one based on LPs. Generally, I find a jarring mis-fit between the motivation (deep learning for driving, presumably involving millions or billions of parameters) and the actual reach of the methods proposed (hundreds of parameters). This reach is NOT inherent in integer programming, per se. Modern solvers routinely solve instances with tens of millions of non-zeros in the constraint matrix, but require a strong relaxation. The authors may hence consider improving the LP relaxation, noting that the big-M constraints are notorious for producing weak relaxations.
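For context on the last remark, the standard big-M (mixed-integer) encoding of a ReLU y = max(0, x) with known pre-activation bounds l <= x <= u (l < 0 < u) introduces a binary variable a and the constraints

    y >= 0,    y >= x,    y <= u*a,    y <= x - l*(1 - a),    a in {0, 1}.

Relaxing a to [0, 1] gives a linear program whose tightness degrades quickly as the bounds l and u (the "big-M" constants) become loose; this is the weakness alluded to above. I am not claiming this is the exact encoding used in the paper under review, only the textbook version that the branch-and-bound approaches cited typically build on.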
iclr_2018_S1WRibb0Z
EXPRESSIVE POWER OF RECURRENT NEURAL NETWORKS Deep neural networks are surprisingly efficient at solving practical tasks, but the theory behind this phenomenon is only starting to catch up with the practice. Numerous works show that depth is the key to this efficiency. A certain class of deep convolutional networks -namely those that correspond to the Hierarchical Tucker (HT) tensor decomposition -has been proven to have exponentially higher expressive power than shallow networks. I.e. a shallow network of exponential width is required to realize the same score function as computed by the deep architecture. In this paper, we prove the expressive power theorem (an exponential lower bound on the width of the equivalent shallow network) for a class of recurrent neural networks -ones that correspond to the Tensor Train (TT) decomposition. This means that even processing an image patch by patch with an RNN can be exponentially more efficient than a (shallow) convolutional network with one hidden layer. Using theoretical results on the relation between the tensor decompositions we compare expressive powers of the HT- and TT-Networks. We also implement the recurrent TT-Networks and provide numerical evidence of their expressivity.
In this paper, the expressive power of neural networks characterized by tensor train (TT) decomposition, a chain-type tensor decomposition, is investigated. Here, the expressive power refers to the rank of the tensor decomposition, i.e., the number of latent components. The authors compare the complexity of TT-type networks with networks structured by CP decomposition, which corresponds to shallow networks. It is proved that the space of TT-type networks with rank O(r) can be as complex as the space of CP-type networks with rank poly(r). The paper is clearly written and easy to follow. The contribution is clear and it is distinguished from previous studies. Though I enjoyed reading this paper, I have several concerns. 1. The authors compare the complexity of TT representation with CP representation (and HT representation). However, since CP representation does not have universality (i.e., some tensors cannot be expressed by CP representation with finite rank, see [1]), this comparison may not make sense. It seems the comparison with Tucker-type representation makes much more sense because it has universality. 2. Connecting RNNs and TT representation is a bit confusing. Specifically, I found two gaps. (a) RNNs reuse the same parameters for all the inputs x_1 to x_d. This means that G_1 to G_d in Figure 1 are all the same. That's why RNNs can handle size-varying sequences. (b) Standard RNNs do not use the multilinear units shown in Figure 3, but use a simple addition of an input and the output from the previous layer (i.e., h_t = f(Wx_t + Vh_{t-1}), where h_t is the t-th hidden unit, x_t is the t-th input, W and V are weights, and f is an activation function.) Due to these gaps, the analysis used in this paper does not seem applicable to RNNs. If this is true, the story of this paper is somewhat misleading. Or is your theory still applicable? [1] Hackbusch, Wolfgang. Tensor spaces and numerical tensor calculus. Vol. 42. Springer Science & Business Media, 2012.
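To spell out gap (b) in formulas (my own paraphrase of the two architectures, using the notation above): a standard RNN computes

    h_t = f(W x_t + V h_{t-1}),   with the same W and V shared across all t,

whereas the TT-network analyzed in the paper, as I read its Figure 3, uses a bilinear unit with a separate core per step,

    (h_t)_k = f( \sum_{i,j} (G_t)_{ijk} (x_t)_i (h_{t-1})_j ).

The first is additive and weight-tied; the second is multiplicative in (x_t, h_{t-1}) and not weight-tied, which is exactly why the TT analysis may not transfer to standard RNNs.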
iclr_2018_S1xDcSR6W
Neural embeddings have been used with great success in Natural Language Processing (NLP) where they provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks. The success of neural embeddings has prompted significant amounts of research into applications in domains other than language. One such domain is graph-structured data, where embeddings of vertices can be learned that encapsulate vertex similarity and improve performance on tasks including edge prediction and vertex labelling. For both NLP and graph-based tasks, embeddings in high-dimensional Euclidean spaces have been learned. However, recent work has shown that the appropriate isometric space for embedding complex networks is not the flat Euclidean space, but a negatively curved hyperbolic space. We present a new concept that exploits these recent insights and propose learning neural embeddings of graphs in hyperbolic space. We provide experimental evidence that hyperbolic embeddings significantly outperform Euclidean embeddings on vertex classification tasks for several real-world public datasets.
== Preamble == As promised, I have read the updated paper from scratch and this is my revised review. My original review is kept below for reference. My original review had the rating "4: Ok but not good enough - rejection". == Updated review == The revised paper improves upon the original submission in several ways and, in particular, does a much better job at positioning itself within the existing body of literature. The new experiments also indicate that the proposed model offers some improvement over Nickel & Douwe (NIPS 2017). I do have remaining concerns that unfortunately still prevent me from recommending acceptance: - Throughout the paper it is argued that we should embed into a hyperbolic space. Such a space is characterized by its metric, but the proposed model does not use a hyperbolic metric. Rather, it relies on a heuristic similarity measure that is inspired by the hyperbolic metric. I understand that this may be a practical choice, but then I find it misleading that the paper repeatedly states that points are embedded into a hyperbolic space (which is incorrect). This concern was also raised on this forum prior to the revision. - The resulting optimization is one of the key selling points of the proposed method as it is unconstrained while Nickel & Douwe resort to a constrained optimization. Clearly unconstrained optimization is to be preferred. However, it is not entirely correct (from what I understand) that the resulting optimization is indeed unconstrained. Nickel & Douwe work under the constraint that |x| < 1, while the proposed model uses polar coordinates (r, theta): r in (0, infinity) and theta in (0, 2 pi]. Note that theta parametrizes a circle, and therefore wrapping may occur (this should really be mentioned in the paper). The constraints on theta are quite easy to cope with, so I agree with the authors that they have a simpler optimization problem. However, this is only true because points are embedded on the unit disk (2D). Should you want to embed into higher dimensional spaces, then theta needs to be confined to live on the unit sphere, i.e. |theta| = 1 (the current setting is just a special case of the unit sphere). While optimizing over the unit sphere is manageable, it is most definitely a constrained optimization problem, and it is far from clear that it is much easier than working under the Nickel & Douwe constraint, |x| < 1. Other comments: - The sentence "even infinite trees have nearly isometric embeddings in hyperbolic space (Gromov, 2007)" sounds cool (I mean, we all want to cite Gromov), but what does it really mean? An isometric embedding is merely one that preserves a metric, so this statement only makes sense if the space of infinite trees had a single meaningful metric in the first place (it doesn't; that's a design choice). - In the "Contribution" and "Conclusion" sections it is claimed that the paper "introduce the new concept of neural embeddings in hyperbolic space". I thought that was what Nickel & Douwe did... I understand that the authors are frustrated by this parallel work, but at this stage, I don't think the present paper can make this "introducing" claim. - The caption in Figure 2 misses some indication that "a" and "b" refer to subfigures. I recommend "a" --> "a)" and "b" --> "b)". - On page 4 it is mentioned that under the heuristic similarity measure some properties of hyperbolic spaces are lost while others are retained.
From what I can read, it is only claimed that key properties are kept; a more formal argument (even if trivial) would have been helpful. == Original Review == The paper considers embeddings of graph-structured data onto the hyperbolic Poincare ball. Focus is on word2vec style models but with hyperbolic embeddings. I am unable to determine how suitable an embedding space the Poincare ball really is, since I am not familiar enough with the type of data studied in the paper. I have a few minor comments/questions on the work, but my main concern is a seeming lack of novelty: The paper argues that the main contribution is that this is the first neural embedding onto a hyperbolic space. From what I can see, the paper Poincaré Embeddings for Learning Hierarchical Representations https://arxiv.org/abs/1705.08039 considers an almost identical model to the one proposed here with an almost identical motivation and application set. Some technicalities appear different, but (to me) it seems like the main claimed novelties of the present paper have already been out for a while. If this analysis is incorrect, then I encourage the authors to provide very explicit arguments for this in the rebuttal phase. Other comments: *) It seems to me that, by construction, most data will be pushed towards the boundary of the Poincare ball during the embedding. Is that a property you want? *) I found it rather surprising that the log-likelihood under consideration was pushed to an appendix of the paper, while its various derivatives are part of the main text. Given the not-so-tight page limits of ICLR, I'd recommend providing the log-likelihood as part of the main text (it's rather difficult to evaluate the correctness of a derivative when its base function is not stated). *) In the introduction much energy is spent on the importance of large data sets, but it appears that only fairly small-scale experiments are considered. I'd recommend a better synchronization. *) I find visual comparisons difficult on the Poincare ball as I am so trained at assuming Euclidean distances when making visual comparisons (I suspect most readers are as well). I think one needs to be very careful when making visual comparisons under non-trivial metrics. *) In the final experiment, a logistic regressor is fitted post hoc to the embedded points. Why not directly optimize a hyperbolic classifier? Pros: + well-written and (fairly) well-motivated. Cons: - It appears that novelty is very limited as highly similar work (see above) has been out for a while.
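For reference, and to make the first concern in the updated review concrete, the actual distance on the Poincaré ball (the quantity a truly hyperbolic embedding would be built around, as opposed to the paper's heuristic similarity) is

    d_B(x, y) = arcosh( 1 + 2 ||x - y||^2 / ((1 - ||x||^2)(1 - ||y||^2)) ),   for ||x||, ||y|| < 1.

This formula is standard and not taken from the paper; I state it only to clarify what "embedding in hyperbolic space" would commit the model to.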
iclr_2018_B1suU-bAW
Word embedding is a useful approach to capture co-occurrence structures in a large corpus of text. In addition to the text data itself, we often have additional covariates associated with individual documents in the corpus-e.g. the demographic of the author, time and venue of publication, etc.-and we would like the embedding to naturally capture the information of the covariates. In this paper, we propose a new tensor decomposition model for word embeddings with covariates. Our model jointly learns a base embedding for all the words as well as a weighted diagonal transformation to model how each covariate modifies the base embedding. To obtain the specific embedding for a particular author or venue, for example, we can then simply multiply the base embedding by the transformation matrix associated with that time or venue. The main advantages of our approach is data efficiency and interpretability of the covariate transformation matrix. Our experiments demonstrate that our joint model learns substantially better embeddings conditioned on each covariate compared to the standard approach of learning a separate embedding for each covariate using only the relevant subset of data. Furthermore, our model encourages the embeddings to be "topic-aligned" in the sense that the dimensions have specific independent meanings. This allows our covariate-specific embeddings to be compared by topic, enabling downstream differential analysis. We empirically evaluate the benefits of our algorithm on several datasets, and demonstrate how it can be used to address many natural questions about the effects of covariates.
The authors present a method for learning word embeddings from related groups of data. The model is based on tensor factorization which extends GloVe to higher order co-occurrence tensors, where the co-occurrence is of words within subgroups of the text data. These two papers need to be cited: Rudolph et al., NIPS 2017, "Structured Embedding Models for Grouped Data": This paper also presents a method for learning embeddings specific to subgroups of the data, but based on hierarchical modeling. An experimental comparison is needed. Cotterell et al., EACL 2017 "Explaining and Generalizing Skip-Gram through Exponential Family Principal Component Analysis": This paper also derives a tensor factorization based approach for learning word embeddings for different covariates. Here the covariates are morphological tags such as part-of-speech tags of the words. Due to these two citations, the novelty of both the problem set-up of learning different embeddings for each covariate and the novelty of the tensor factorization based model are limited. The writing is ok. I appreciated the set-up of the introduction with the two questions. However, the questions themselves could have been formulated differently: Q1: the way Q1 is formulated makes it sound like the covariates could be both discrete and continuous while the method presented later in the paper is only for discrete covariates (i.e. group structure of the data). Q2: The authors mention topic alignment without specifying what the topics are aligned to. It would be clearer if they stated explicitly that the alignment is between covariate-specific embeddings. It is also distracting that they call the embedding dimensions topics. Also, why highlight the problem of authorship attribution of Shakespeare's work in the introduction, if that problem is not addressed later on? In the model section, the paragraphs "notation" and "objective function and discussion" are clear. I also liked the idea of having the section "A geometric view of embeddings and tensor decomposition", but that section needs to be improved. For example, the authors describe RandWalk (Arora et al. 2016) but how their work falls into that framework is unclear. In the third paragraph, starting with "Therefore we consider a natural extension of this model, ..." it is unclear which model the authors are referring to. (RandWalk or their tensor factorization?). What are the context vectors in Figure 1? I am guessing the random walk transitions are the ellipsoids? How are they to be interpreted? In the last paragraph, beginning with "Note that this is essentially saying...", I don't agree with the argument that the "base embeddings" decompose into independent topics. The dimensions of the base embeddings are some kind of latent attributes and each individual dimension could be used by the model to capture a variety of attributes. There is nothing that prevents the model from using multiple dimensions to capture related structure of the data. Also, the qualitative results in Table 3 do not convince me that the embedding dimensions represent topics. For example, "horses" has its highest value in embedding dimension 99. Its nearest neighbours in the embedding space (i.e. semantically similar words) will also have high values in coordinate 99. Hence the apparent semantic coherence in what the authors call "topics". The authors present multiple qualitative and quantitative evaluations. The clustering by weight (4.1.) is nice and convincing that the model learns something useful.
Section 4.2, the only quantitative analysis, was missing some details. Please give references for the evaluation metrics used, for proper credit and so people can look up these tasks. Also, a comparison is needed to GloVe fitted on the entire corpus (without covariates) and to the existing methods of Rudolph et al. 2017 and Cotterell et al. 2017. Section 5.2 was nice and so was 5.3. However, for the covariate-specific analogies (5.3) the authors could also analyze word similarities without the analogy component and probably see similar qualitative results. Specifically, they could analyze, for a set of query words, what the most similar words are in the embeddings obtained from different subsections of the data (see the sketch after this list). PROS: + nice tensor factorization model for learning word embeddings specific to discrete covariates. + the tensor factorization set-up ensures that the embedding dimensions are aligned + clustering by weights (4.1) is useful and seems coherent + covariate-specific analogies are a creative analysis CONS: - problem set-up not novel and existing approach not cited (experimental comparison needed) - interpretation of embedding dimensions as topics not convincing - connection to Rand-Walk (Arora 2016) not stated precisely enough - quantitative results (Table 1) too little detail: * why is this metric appropriate? * comparison to GloVe on the entire corpus (not covariate specific) * no reference for the metrics used (AP, BLESS, etc.?) - covariate-specific analogies presented confusingly and a similar but simpler analysis might be possible by looking at variance in neighbours v_b and v_d without involving v_a and v_c (i.e. don't talk about analogies but about similarities)
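A minimal sketch of the simpler similarity analysis suggested above (the embedding matrices, vocabulary, and query word are hypothetical placeholders, not objects from the paper):

    import numpy as np

    def nearest_words(query, vocab, emb, k=5):
        # emb: (V, d) embedding matrix for one covariate-specific model;
        # vocab: dict mapping word -> row index. Returns the k most cosine-similar words.
        v = emb[vocab[query]]
        sims = emb @ v / (np.linalg.norm(emb, axis=1) * np.linalg.norm(v) + 1e-9)
        idx = np.argsort(-sims)[1:k + 1]      # skip the query word itself
        inv = {i: w for w, i in vocab.items()}
        return [inv[i] for i in idx]

    # Hypothetical usage: compare neighbours of the same query under two covariates.
    # print(nearest_words("horses", vocab, emb_covariate_a))
    # print(nearest_words("horses", vocab, emb_covariate_b))

Comparing the two neighbour lists for a set of query words would expose covariate-specific shifts in meaning without going through the analogy construction.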
iclr_2018_SJICXeWAb
Some recent work has shown separation between the expressive power of depth-2 and depth-3 neural networks. These separation results are shown by constructing functions and input distributions, so that the function is well-approximable by a depth-3 neural network of polynomial size but it cannot be well-approximated under the chosen input distribution by any depth-2 neural network of polynomial size. These results are not robust and require carefully chosen functions as well as input distributions. We show a similar separation between the expressive power of depth-2 and depth-3 sigmoidal neural networks over a large class of input distributions, as long as the weights are polynomially bounded. While doing so, we also show that depth-2 sigmoidal neural networks with small width and small weights can be wellapproximated by low-degree multivariate polynomials.
This paper proves a new separation result between 3-layer neural networks and 2-layer neural networks. The core of the analysis is a proof that any 2-layer neural network can be well approximated by a polynomial of reasonably low degree. The authors then construct a highly non-smooth function that can be represented by a 3-layer network but is impossible to approximate by any polynomial of polynomial degree. Similar results about polynomial approximation can be found in [1] (Theorem 4). To me, the result proved in [1] is spiritually very similar to propositions 3-4. The authors need to justify the difference. The main strength of the new separation result is that it holds for a larger class of input distributions. Compared to Daniely (2017), which requires the input distribution to be spherically uniform, the new result only needs the distribution to be lower bounded by 1/poly(d) in a small ball of radius 1/poly(d). Conceptually I don't think this is a much weaker condition. For a "truly" non-uniform distribution, one should allow its density function to be very close to zero at certain regions of the ball. Nevertheless, the result is a step forward from Daniely (2017) and the paper is well written. I am still in doubt about the practical value of this kind of separation result. The paper proves the separation by constructing a very specific function that cannot be approximated by 2-layer networks. This function has a super large Lipschitz constant, which we don't expect to see in practice. Consider the function f(x)=cos(Nx). When N is chosen large enough, the function f cannot be well approximated by any 2-layer network of polynomial size. Does it imply that the family of cosine functions is rich enough so that it is a better family to learn than 2-layer neural networks? I guess the answer would be negative. In addition, the paper doesn't show that any 2-layer network can be well approximated by a 3-layer network, which is a missing piece in justifying the richness of 3-layer nets. Finally, the constructed "hard" function has a Lipschitz constant of order d^5, but Theorem 7 assumes that the 2-layer networks' weights must be bounded by O(d^2). This assumption is crucial to the proof but not well justified (especially considering the d^5 factor in the function definition). [1] On the Computational Efficiency of Training Neural Networks, Livni et al., NIPS'14
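To make the cos(Nx) remark above concrete, here is the standard worked argument (my own addition, only restating a classical fact): take the points x_j = j*pi/N in [0, 1] for j = 0, ..., m with m = floor(N/pi), so that cos(N x_j) = (-1)^j. If a polynomial p satisfied sup_{x in [0,1]} |p(x) - cos(Nx)| < 1, then p(x_j) would alternate in sign over these m+1 points, forcing p to have at least m zeros in [0, 1] and hence degree at least m, which is roughly N/pi. So the degree of any decent polynomial approximation must grow linearly with the Lipschitz constant N, which is exactly why hardness results driven by huge Lipschitz constants feel detached from functions encountered in practice.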
iclr_2018_B1KFAGWAZ
Many tasks in artificial intelligence require the collaboration of multiple agents. We examine deep reinforcement learning for multi-agent domains. Recent research efforts often take the form of two seemingly conflicting perspectives, the decentralized perspective, where each agent is supposed to have its own controller; and the centralized perspective, where one assumes there is a larger model controlling all agents. In this regard, we revisit the idea of the master-slave architecture by incorporating both perspectives within one framework. Such a hierarchical structure naturally leverages the advantages of each perspective. The idea of combining both perspectives is intuitive and can be well motivated from many real world systems; however, out of a variety of possible realizations, we highlight three key ingredients, i.e. composed action representation, learnable communication and independent reasoning. With network designs to facilitate these explicitly, our proposal consistently outperforms the latest competing methods both in synthetic experiments and when applied to challenging StarCraft 1 micromanagement tasks.
This paper investigates multiagent reinforcement learning making use of a "master-slave" architecture (MSA). On the positive side, the paper is mostly well-written, seems technically correct, and there are some results that indicate that the MSA is working quite well on relatively complex tasks. On the negative side, there seems to be relatively limited novelty: we can think of MSA as one particular communication (i.e., star) configuration one could use in a multiagent system. One aspect that does strike me as novel is the "gated composition module", which allows differentiation of messages to other agents based on the receiver's internal state. (So, the *interpretation* of the message is learned). I like this idea; however, the results are mixed, and the explanation given is plausible, but far from a clearly demonstrated answer. There are some important issues that need clarification: * "Sukhbaatar et al. (2016) proposed the “CommNet”, where broadcasting communication channel among all agents were set up to share a global information which is the summation of all individual agents. [...] however the summed global signal is hand crafted information and does not facilitate an independently reasoning master agent." -Please explain what is meant here by 'hand crafted information'; my understanding is that the f^i in figure 1 of that paper are learned modules? -Please explain what would be the differences with CommNet with 1 extra agent that takes in the same information as your 'master'. *This also relates to the following: "Later we empirically verify that, even when the overall information revealed does not increase per se, an independent master agent tend to absorb the same information within a big picture and effectively helps to make decision in a global manner. Therefore compared with pure in-between-agent communications, MS-MARL is more efficient in reasoning and planning once trained. [...] Specifically, we compare the performance among the CommNet model, our MS-MARL model without explicit master state (e.g. the occupancy map of controlled agents in this case), and our full model with an explicit occupancy map as a state to the master agent. As shown in Figure 7 (a)(b), by only allowed an independently thinking master agent and communication among agents, our model already outperforms the plain CommNet model which only supports broadcasting communication of the sum of the signals." -Minor: I think that the statement "which only supports broadcasting communication of the sum of the signals" is not quite fair: surely they have used a 1-channel communication structure, but it would be easy to generalize that. -Major: When I look at figure 4D, I see that the proposed approach *also* only provides the master with the sum (or really mean) of the individual messages...? So it is not quite clear to me what explains the difference. *In 4.4, it is not quite clear exactly how the figure of master and slave actions is created. This seems to suggest that the only thing that the master can communicate is action information? Is this the case? * In table 2, it is not clear how significant these differences are. What are the standard errors? * Section 3.2 explains standard things (policy gradient), but the details are a bit unclear. In particular, I do not see how the Gaussian/softmax layers are integrated; they do not seem to appear in figure 4? * I cannot understand figure 7 without more explanation. (The background is all black - did something go wrong with the pdf?)
Details: * references are wrongly formatted throughout. * "In this regard, we are among the first to combine both the centralized perspective and the decentralized perspective" This is a weak statement (E.g., I suppose that in the greater scheme of things all of us will be amongst the first people that have walked this earth...) * "Therefore they tend to work more like a commentator analyzing and criticizing the play, rather than a coach coaching the game." -This sounds somewhat vague. Can it be made crisper? * "Note here that, although we explicitly input an occupancy map to the master agent, the actual information of the whole system remains the same." This is a somewhat peculiar statement. Clearly, the distribution of information over the agents is crucial. For more insights on this one could refer to the literature on decentralized POMDPs.
iclr_2018_H1LAqMbRW
Model-free deep reinforcement learning approaches have shown superhuman performance in simulated environments (e.g., Atari games, Go, etc). During training, these approaches often implicitly construct a latent space that contains key information for decision making. In this paper, we learn a forward model on this latent space and apply it to model-based planning in a miniature Real-time Strategy game with incomplete information (MiniRTS [Tian et al. (2017)]). We first show that the latent space constructed from existing actor-critic models contains relevant information of the game, and design training procedure to learn forward models. We also show that our learned forward model can predict meaningful future state and is usable for latent space Monte-Carlo Tree Search (MCTS), in terms of win rates against rule-based agents.
Summary: This paper proposes to use the latent representations learned by a model-free RL agent to learn a transition model for use in model-based RL (specifically MCTS). The paper introduces a strong model-free baseline (win rate ~80% in the MiniRTS environment) and shows that the latent space learned by this baseline does include relevant game information. They use the latent state representation to learn a model for planning, which performs slightly better than a random baseline (win rate ~25%). Pros: - Improvement of the model-free method from previous work by incorporating information about previously observed states, demonstrating the importance of memory. - Interesting evaluation of which input features are important for the model-free algorithm, such as base HP ratio and the amount of resources available. Cons: - The model-based approach is disappointing compared to the model-free approach. Quality and Clarity: The paper in general is well-written and easy to follow and seems technically correct, though I found some of the figures and definitions confusing, specifically: - The terms for different forward models are not defined (e.g. MatchPi, MatchA, etc.). I can infer what they mean based on Figure 1 but it would be helpful to readers to define them explicitly. - In Figure 3b, it is not clear to me what the difference between the red and blue curves is. - In Figure 4, it would be helpful to label which color corresponds to the agent and which to the rule-based AI. - The caption in Figure 8 is malformatted. - In Figure 7, the baseline of \hat{h_t}=h_{t-2} seems strange---I would find it more useful for Figure 7 to compare to the performance if the model were not used (i.e. if \hat{h_t}=h_t) to see how much performance suffers as a result of model error. Originality: I am unfamiliar with the MiniRTS environment, but given that it is only published in this year's NIPS (and that I couldn't find any other papers about it on Google Scholar) it seems that this is the first paper to compare model-free and model-based approaches in this domain. However, the model-free approach does not seem particularly novel in that it is just an extension of that from Tian et al. (2017) plus some additional features. The idea of learning a model based on the features from a model-free agent seems novel but lacks significance in that the results are not very compelling (see below). Significance: I feel the paper overstates the results in saying that the learned forward model is usable in MCTS. The implication in the abstract and introduction (at least as I interpreted it) is that the learned model would outperform a model-free method, but upon reading the rest of the paper I was disappointed to learn that in reality it drastically underperforms. The baseline used in the paper is a random baseline, which seems a bit unfair---a good baseline is usually an algorithm that is an obvious first choice, such as the model-free approach.
iclr_2018_r1ISxGZRb
Deep lifelong learning systems need to efficiently manage resources to scale to large numbers of experiences and non-stationary goals. In this paper, we explore the relationship between lossy compression and the resource constrained lifelong learning problem of knowledge transfer. We demonstrate that lossy episodic experience storage can enable efficient knowledge transfer between different architectures and algorithms at a fraction of the storage cost of lossless storage. This is achieved by introducing a generative knowledge distillation strategy that does not store any training examples. As an important extension of this idea, we show that lossy recollections stabilize deep networks much better than lossless sampling in resource constrained settings of lifelong learning while avoiding catastrophic forgetting. For this setting, we propose a novel dual purpose recollection buffer used to both stabilize the recollection generator itself and an accompanying reasoning model.
This paper presents an approach to lifelong learning with episodic experience storage under resource constraints. The key idea of the approach is to store the latent code obtained from a categorical Variational Autoencoder as opposed to the input example itself. When a new task is learnt, catastrophic forgetting is avoided by randomly sampling stored codes corresponding to past experience and adding the corresponding reconstruction to a batch of data from a new problem. The authors show that explicitly storing data provides better results than random sampling from the generative model. Furthermore, the method is compared to other techniques relying on episodic memory and as expected, achieves better results given a fixed effective buffer size due to being able to store more experience. While the core idea of this paper is reasonable, it provides little insight into how episodic experience storage compares to related methods as an approach to lifelong learning. While the authors compare their method to other techniques based on experience replay, I feel that a comparison to other techniques is important. A natural choice would be a model which introduces task-specific parameters for each problem (e.g. (Li & Hoiem, 2016) or (Rusu et al., 2016)). A major concern is the fact that the VAE with categorical latents itself suffers from catastrophic forgetting. While the authors propose to "freeze decoder parameters right before each incoming experience and train multiple gradient descent iterations over randomly selected recollection batches before moving on to the next experience", this makes the approach both less straight-forward to apply and more computationally expensive. Moreover, the authors only evaluate the approach on simple image recognition tasks (MNIST, CIFAR-100, Omniglot). I feel that an experiment in Reinforcement Learning (e.g. as proposed in (Rusu et al., 2016)) would provide more insight into how the approach behaves in more challenging settings. In particular, it is not clear whether experience replay may lead to negative transfer when subsequent tasks are more diverse. Finally, the manuscript lacks clarity. As another reviewer noted, detailed sections of weakly related motivations fail to strengthen the reader's understanding. As a minor point, the manuscript contains several grammar and spelling mistakes.
iclr_2018_ryykVe-0W
Reliable measures of statistical dependence could be useful tools for learning independent features and performing tasks like source separation using Independent Component Analysis (ICA). Unfortunately, many of such measures, like the mutual information, are hard to estimate and optimize directly. We propose to learn independent features with adversarial objectives (Goodfellow et al., 2014;Huszar, 2016) which optimize such measures implicitly. These objectives compare samples from the joint distribution and the product of the marginals without the need to compute any probability densities. We also propose two methods for obtaining samples from the product of the marginals using either a simple resampling trick or a separate parametric distribution. Our experiments show that this strategy can easily be applied to different types of model architectures and solve both linear and non-linear ICA problems.
The focus of the paper is independent component analysis (ICA) and its nonlinear variants such as the post non-linear (PNL) ICA model. Motivated by the fact that estimating mutual information and similar dependency measures requires density estimates and is hard to optimize directly, the authors propose a Wasserstein GAN (generative adversarial network) based solution to tackle the problem, with illustrations on 6- (synthetic) and 3-dimensional (audio) examples. The primary idea of the paper is to use the Wasserstein distance as an independence measure of the estimated source coordinates, and optimize it in a neural network (NN) framework. Although finding novel GAN applications is an exciting topic, I am not really convinced that ICA with the proposed Wasserstein GAN based technique fulfills this goal. Below I detail my reasons: 1) The ICA problem can be formulated as the minimization of pairwise mutual information [1] or one-dimensional entropy [2]. In other words, estimating the joint dependence of the source coordinates is not necessary; it is worthwhile to avoid it. 2) The PNL ICA task can be efficiently tackled by first 'removing' the nonlinearity followed by classical linear ICA; see for example [3]. 3) Estimating information theoretic (IT) measures (mutual information, divergence) is a quite mature field with off-the-shelf techniques, see for example [4,5,6,8]. These methods do not estimate the underlying densities; that would be superfluous (and hard). 4) Optimizing non-differentiable IT measures can be carried out quite efficiently in the ICA context by e.g. Givens rotations [7]; differentiable ICA cost functions can be robustly handled by Stiefel manifold methods; see for example [8,9]. 5) Section 3.1: This section is devoted to generating samples from the product of the marginals, even using separate generator networks. I do not see the necessity of these solutions; the subtask can be solved by independently shuffling all the coordinates of the sample. 6) Experiments (Section 6): i) It seems to me that the proposed NN-based technique has some quite serious divergence issues: 'After discarding diverged models, ...' or 'Unfortunately, the model selection procedure also didn't identify good settings for the Anica-g model...'. ii) The proposed method gives pretty comparable results to the chosen baselines (fastICA, PNLMISEP) on the selected small-dimensional tasks. In fact, [7,8,9] are likely to provide more accurate (fastICA is a simple kurtosis based method, which is a somewhat crude 'estimate' of entropy) and faster estimates; see also 2). References: [1] Pierre Comon. Independent component analysis, a new concept? Signal Processing, 36:287-314, 1994. [2] Aapo Hyvarinen and Erkki Oja. Independent Component Analysis: Algorithms and Applications. Neural Networks, 13(4-5):411-30, 2000. [3] Andreas Ziehe, Motoaki Kawanabe, Stefan Harmeling, and Klaus-Robert Muller. Blind separation of postnonlinear mixtures using linearizing transformations and temporal decorrelation. Journal of Machine Learning Research, 4:1319-1338, 2003. [4] Barnabas Poczos, Liang Xiong, and Jeff Schneider. Nonparametric divergence: Estimation with applications to machine learning on distributions. In Conference on Uncertainty in Artificial Intelligence, pages 599-608, 2011. [5] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf, Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research, 13:723-773, 2012. [6] Alan Wisler, Visar Berisha, Andreas Spanias, Alfred O. Hero.
A data-driven basis for direct estimation of functionals of distributions. TR, 2017. (https://arxiv.org/abs/1702.06516) [7] Erik G. Learned-Miller and John W. Fisher III. ICA using spacings estimates of entropy. Journal of Machine Learning Research, 4:1271-1295, 2003. [8] Francis R. Bach and Michael I. Jordan. Kernel Independent Component Analysis. Journal of Machine Learning Research, 3:1-48, 2002. [9] Hao Shen, Stefanie Jegelka and Arthur Gretton. Fast Kernel-Based Independent Component Analysis. IEEE Transactions on Signal Processing, 57:3498-3511, 2009.
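Regarding point 5) above, here is a minimal numpy sketch of the shuffling argument (all of it my own illustration, not code from the paper): independently permuting each coordinate of a batch drawn from the joint yields a sample from the product of the marginals.

    import numpy as np

    def product_of_marginals(batch, rng):
        # Independently permute each column; the marginals are preserved,
        # cross-coordinate dependence is destroyed.
        out = batch.copy()
        for j in range(out.shape[1]):
            out[:, j] = out[rng.permutation(len(out)), j]
        return out

    rng = np.random.default_rng(0)
    z = rng.standard_normal((1000, 1))
    joint = np.hstack([z, z + 0.1 * rng.standard_normal((1000, 1))])   # strongly dependent coordinates
    shuffled = product_of_marginals(joint, rng)
    print(np.corrcoef(joint.T)[0, 1], np.corrcoef(shuffled.T)[0, 1])   # high vs. roughly zero correlation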
iclr_2018_Skx5txzb0W
A Boo(n) for Evaluating Architecture Performance We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws. Each time a model is trained, one gets a different result due to random factors in the training process, which include random parameter initialization and random data shuffling. Reporting the best single model performance does not appropriately address this stochasticity. We propose a normalized expected best-out-of-n performance (Boo_n) as a way to correct these problems.
This paper addresses multiple issues arising from the fact that commonly reported best model performance numbers are a single sample from a performance distribution. These problems are very real, and they deserve significant attention from the ML community. However, I feel that the proposed solution may actually compound the issues highlighted. Firstly, the proposed metric requires calculation of multiple test set experiments for every evaluation. In the paper up to 100 experiments were used. This may be reasonable in scenarios where the test set is hidden, and individual test numbers are never revealed. It also may be reasonable if we cynically assume that researchers are already running many test-set evaluations. But I am very opposed to any suggestion that we should relax the maxim that the test set should be used only once, or as close to once as is possible. Even the idea of researchers knowing their test set variance makes me very uneasy. Secondly, this paper tries to account for variation in results due to different degrees of hyper-parameter tuning. This is certainly an admirable aim, since different research groups have access to very different types of resources. However, the suggested approach relies on randomly picking hyper-parameters from "a range that we previously found to work reasonably well". This randomization does not account for the many experiments that were required to find this range. And the randomization is also not extended to parameters controlling the model architecture (I suspect that a number of experiments went into picking the 32 layers in the ResNet used by this paper). Without a solid and consistent basis for these hyper-parameter perturbations, I worry that this approach will fail to normalize the effect of experiment numbers while also giving researchers an excuse to avoid reporting their experimental process. I think this is a nice idea and the metric does merge the stability and low variance of mean score with the aspirations of best score. The metric may be very useful at development time in helping researchers build a reasonable expectation of test time performance in cases where the dev and test sets are strongly correlated. However, for the reasons outlined above, I don't think the proposed approach solves the problems that it addresses. Ultimately, the decision about this paper is a subjective one. Are we willing to increase the risk of inadvertent hyper-parameter tuning on the test set for the sake of a more stable metric?
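To clarify the quantity under discussion, here is a minimal numpy sketch of how an expected best-of-n score can be estimated from m completed runs by resampling with replacement; this is my reading of the idea, and the paper's exact normalization may differ. The accuracy values are made up.

    import numpy as np

    def expected_best_of_n(scores, n, n_resamples=100000, rng=None, higher_is_better=True):
        # Monte-Carlo estimate of E[best of n runs] from the empirical distribution of m runs.
        rng = rng or np.random.default_rng(0)
        draws = rng.choice(scores, size=(n_resamples, n), replace=True)
        best = draws.max(axis=1) if higher_is_better else draws.min(axis=1)
        return best.mean()

    accs = np.array([0.912, 0.905, 0.918, 0.909, 0.915, 0.907])   # hypothetical dev accuracies of 6 runs
    print(expected_best_of_n(accs, n=1))   # roughly the mean score
    print(expected_best_of_n(accs, n=5))   # approaches the best observed score as n grows

Note that computing this at development time is fine; the concern raised above is specifically about the repeated test-set evaluations the metric would seem to invite.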
iclr_2018_Hk3ddfWRW
Published as a conference paper at ICLR 2018 IMITATION LEARNING FROM VISUAL DATA WITH MULTIPLE INTENTIONS Recent advances in learning from demonstrations (LfD) with deep neural networks have enabled learning complex robot skills that involve high dimensional perception such as raw image inputs. LfD algorithms generally assume learning from single task demonstrations. In practice, however, it is more efficient for a teacher to demonstrate a multitude of tasks without careful task set up, labeling, and engineering. Unfortunately in such cases, traditional imitation learning techniques fail to represent the multi-modal nature of the data, and often result in sub-optimal behavior. In this paper we present an LfD approach for learning multiple modes of behavior from visual data. Our approach is based on a stochastic deep neural network (SNN), which represents the underlying intention in the demonstration as a stochastic activation in the network. We present an efficient algorithm for training SNNs, and for learning with vision inputs, we also propose an architecture that associates the intention with a stochastic attention module. Furthermore, we demonstrate our method on real robot visual object reaching tasks, and show that it can reliably learn the multiple behavior modes in the demonstration data. Video results are available at https://vimeo.com/240212286/fd401241b9.
The authors provide a method for learning from demonstrations where several modalities of the same task are given. The authors argue that in the case where several demonstrations exist and a deterministic (i.e., regular) network is given, the network learns some average policy from the demonstrations. The paper begins with the authors stating the motivation and problem of how to program robots to do a task based only on demonstrations rather than on explicit modeling or programming. They put this specific work in the right context of imitation learning and IRL. Afterward, the authors argue that a deterministic network cannot adequately represent several modalities. The authors cover related topics in Section 2, and indeed the relevant literature includes behavioral cloning, IRL, imitation learning, GAIL, and VAEs. I find the recent paper by Tamar et al. (2016) on Value Iteration Networks highly relevant to this work: the authors there learn similar tasks (i.e., similar modalities) using the same network. Even the control task is very similar to the task proposed in this paper. The authors argue that their contribution is 3-fold: (1) does not require robot rollouts, (2) does not require labels for tasks, (3) works with raw image inputs. Again, Tamar et al. 2016 deals with these 3 points. I went over the math. It seems right and valid. Indeed, an SNN is a good choice for adding (Bayesian) context to a task. Also, I see the advantage of referring only to the "good" quantiles when needed. It is indeed a good method for dealing with the variance. I must say that I was impressed with the authors making the robot succeed in the tasks at hand (although reaching for an object is a fairly simple task). My concerns are as follows: 1) It seems that the given trajectories are naturally divided into different tasks, i.e., a single trajectory consists of only a single task. For me, this is not the pain point in these tasks; the pain point is knowing when tasks begin and end. 2) I'm not sure, and I haven't seen evidence in the paper (or other references), that an SNN is the only (optimal?) method for this context. Why would adding (non-Bayesian) context (not a label) to the task not work as well? 3) The robot task is impressive, but for proving the point, for ease of comparison with different tasks, and since we want to show the validity of the work on more than 200 trials, wouldn't showing the task in some simulation be better for understanding the different regimes in which this method has an advantage? I know how hard it is to make robotic tasks work... 4) I'm not sure that the comparison of the suggested architecture to one without any underlying additional variable Z or context (i.e., a non-Bayesian setup) is fair. A "vanilla" NN may indeed fail miserably. So the comparison should be to other work that can deal with "similar environment but different details". To summarize, I like the work and I can clearly see the motivation. But I think more work is needed: comparing to the right current state of the art, and showing in principle (by demonstrating on other, simpler simulation domains) that this method is better than other methods.
iclr_2018_BJaU__eCZ
Human brain function, as measured by functional magnetic resonance imaging (fMRI), exhibits a rich diversity. In response, understanding the individual variability of brain function and its association with behavior has become one of the major concerns in modern cognitive neuroscience. Our work is motivated by the view that generative models provide a useful tool for understanding this variability. To this end, this manuscript presents two novel generative models trained on real neuroimaging data which synthesize task-dependent functional brain images. Brain images are high dimensional tensors which exhibit structured spatial correlations. Thus, both models are 3D conditional Generative Adversarial networks (GANs) which apply Convolutional Neural Networks (CNNs) to learn an abstraction of brain image representations. Our results show that the generated brain images are diverse, yet task dependent. In addition to qualitative evaluation, we utilize the generated synthetic brain volumes as additional training data to improve downstream fMRI classifiers (also known as decoding, or brain reading). Our approach achieves significant improvements for a variety of datasets, classification tasks and evaluation scores. Our classification results provide a quantitative evaluation of the quality of the generated images, and also serve as an additional contribution of this manuscript.
The work is motivated by a real challenge of neuroimaging analysis: how to increase the amount of data to support the learning of brain decoding. The contribution seems to mix two objectives: on one hand, to prove that it is possible to do data augmentation for fMRI brain decoding; on the other hand, to design (or rather extend) a new model (to be more precise, two models). Concerning the first objective, the empirical results do not provide meaningful support that the generative model is really effective. The improvement is really tiny, and a statistical test (not included in the analysis) probably wouldn't pass a significance threshold. This analysis is also missing a straw-man baseline: it is not clear whether the difference in the evaluation measures is due to the greater number of examples or to the specific generative model. Concerning the contribution of the model, one novelty is the conditional formulation of the discriminator. The design of the empirical evaluation doesn't address the impact of this new formulation; it is not clear whether the supposed improvement is related to the conditional formulation. Figure 3 and Figure 5 illustrate the brain maps generated for Collection 1952 with ICW-GAN and for Collection 503 with ACD-GAN. It is not clear how the authors chose these figures. From the perspective of neuroscience, a reader would expect to look at the brain maps for the same collection with different methods. The paired brain maps would support the interpretation of the generated data. It is worthwhile to remember that the location of brain activations is crucial to detect whether the brain decoding (classification) relies on artifacts or confounds. Minor comments - typos: "a first application or this" => "a first application of this" (p.2) - "qualitative quality" (p.2)
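To make the request for a statistical comparison and a straw-man baseline concrete, here is a minimal sketch of the augmentation evaluation (assuming the brain volumes are already flattened into feature vectors and using a plain logistic-regression classifier as a stand-in for the paper's decoder; this is not the paper's protocol). Repeating it over several splits and testing the paired differences would give the missing significance evidence.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def augmentation_gain(X_train, y_train, X_test, y_test, X_syn, y_syn):
    """Accuracy of a classifier trained on real data vs. real + synthetic data.

    A straw-man baseline would replace (X_syn, y_syn) with noisy copies of the
    real training data, to check whether GAN samples help beyond simply having
    more examples.
    """
    base = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    aug = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_train, X_syn]), np.concatenate([y_train, y_syn]))
    return (accuracy_score(y_test, base.predict(X_test)),
            accuracy_score(y_test, aug.predict(X_test)))
```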
iclr_2018_rJma2bZCW
We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann-Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients. Through this analysis, we find that three factors (learning rate, batch size, and the variance of the loss gradients) control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it is invariant under a simultaneous rescaling of each by the same amount. We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima; both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore by showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small.
The paper investigates how the learning rate and mini-batch size in SGD impact the optima that the SGD algorithm finds. Empirically, it has been observed that larger learning rates lead to convergence to wider minima, while smaller learning rates more often lead to convergence to narrower minima, i.e. minima where the Hessian has large eigenvalues. In this paper, the authors derive an analytical theory that aims at explaining this phenomenon. The point of departure is an analytical theory proposed by Mandt et al., where SGD is analyzed in a continuous-time stochastic formalism. In more detail, a stochastic differential equation is derived which mimics the behavior of SGD. The advantage of this theory is that under specific assumptions, analytic stationary distributions can be derived. While Mandt et al. focused on the vicinity of a local optimum, the authors of the present paper assume white diagonal gradient noise, which allows them to derive an analytic, *global* stationary distribution (this is similar to Langevin dynamics). Then, the authors focus again on individual local optima and "integrate out" the stationary distribution around a local optimum, using again a Gaussian assumption. As a result, the authors obtain un-normalized probabilities of getting trapped in a given local optimum. This un-normalized probability depends on the value of the loss function in the vicinity of the optimum, the gradient noise, and the width of the optimum. In the end, these un-normalized probabilities are taken as probabilities that the SGD algorithm will be trapped around the given optimum in finite time. Overall assessment: I find the analytical results of the paper very original and interesting. The experimental part has some weaknesses. The paper could be drastically improved by reworking the experimental part. Detailed comments: Regarding the analytical part, I think this is all very nice and original. However, I have some comments/requests: 1. Since the authors focus on Gaussian regions around the local minima, perhaps the diagonal white noise assumption could be weakened. This is again the multivariate Ornstein-Uhlenbeck setup examined in Mandt et al., and it probably possesses an analytical solution for the un-normalized probabilities (even if the noise is multivariate Gaussian). Would the authors consider generalizing the proof for the camera-ready version? 2. It would be nice to sketch the proof of Theorem 2 in the main paper, rather than just referring to the appendix. In my opinion, the theorem results from a beautiful and instructive calculation that should provide the reader with some intuition. 3. Would the authors comment on the underlying theoretical assumptions a bit more? In particular, the stationary distribution predicted by the Ornstein-Uhlenbeck formalism is never reached in practice. When using SGD in practice, one is in the initial mode-seeking phase. So, why is it a reasonable assumption to use results obtained from the stationary (equilibrated) distribution which is never reached? Regarding the experiments: here I see a few problems. First, the writing style drops in quality. Second, Figures 2 and 3 are cryptic. Why do the authors focus on two manually selected optima? In which sense is this statistically significant? How often were the experiments repeated? The figures are furthermore hard to read. I would recommend overhauling the entire experiments section.
Details: - Typo in Figure 2: "with different with different". - "the endpoint of SGD with a learning rate schedule η → η/a, for some a > 0, and a constant batch size S, should be the same as the endpoint of SGD with a constant learning rate and a batch size schedule S → aS." This is clearly wrong as there are many local minima, and running the algorithm twice results in different local optima. Maybe add something saying that this is only true on average, like "the characteristics of these minima ... should be the same".
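For reference, the continuous-time picture summarized in the review can be written compactly. This is a hedged reconstruction from the stated assumptions (gradient noise isotropic with covariance σ²I), not an equation copied from the paper:

```latex
\mathrm{d}\theta \;=\; -\nabla L(\theta)\,\mathrm{d}t \;+\; \sqrt{\tfrac{\eta}{S}}\,\sigma\,\mathrm{d}W(t),
\qquad
p_{\mathrm{eq}}(\theta) \;\propto\; \exp\!\Big(-\tfrac{2S}{\eta\sigma^{2}}\,L(\theta)\Big).
```

Only the ratio η/S enters the equilibrium density, which is the claimed rescaling invariance, and a larger η/S flattens the density, shifting probability mass away from sharp minima toward wide ones.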
iclr_2018_BkLhaGZRW
Published as a conference paper at ICLR 2018 IMPROVING GAN TRAINING VIA BINARIZED REPRESENTATION ENTROPY (BRE) REGULARIZATION We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminator D spreads out its model capacity in the right way, the learning signals given to the generator G are more informative and diverse. These in turn help G to explore better and discover the real data manifold while avoiding large unstable jumps due to the erroneous extrapolation made by D. Our regularizer guides the rectifier discriminator D to better allocate its model capacity, by encouraging the binary activation patterns on selected internal layers of D to have a high joint entropy. Experimental results on both synthetic data and real datasets demonstrate improvements in stability and convergence speed of the GAN training, as well as higher sample quality. The approach also leads to higher classification accuracies in semi-supervised learning.
The paper proposes a regularizer that encourages a GAN discriminator to focus its capacity in the region around the manifolds of real and generated data points, even when it would be easy to discriminate between these manifolds using only a fraction of its capacity, so that the discriminator provides a more informative signal to the generator. The regularizer rewards high entropy in the signs of discriminator activations. Experiments show that this helps to prevent mode collapse on synthetic Gaussian mixture data and improves Inception scores on CIFAR10. The high-level idea of guiding model capacity by rewarding high-entropy activations is interesting and novel to my knowledge (though I am not an expert in this space). Figure 1 is a fantastic illustration that presents the core idea very clearly. That said, I found the intuitive story a little bit difficult to follow -- it's true that in Figure 1b the discriminator won't communicate the detailed structure of the data manifold to the generator, but it's not clear why this would be a problem -- the gradients should still pull the generator *towards* the manifold of real data, and as this happens and the manifolds begin to overlap, the discriminator will naturally be forced to allocate its capacity towards finer-grained details. Is the implicit assumption that for real, high-dimensional data the generator and data manifolds will *never* overlap? But in that case much of the theoretical story goes out the window. I'd also appreciate further discussion of the relationship of this approach to Wasserstein GANs, which also attempt to provide a clearer training gradient when the data and generator manifolds do not overlap. More generally, I'd like to better understand what effect we'd expect this regularizer to have. It appears to be motivated by improving training dynamics, which is understandably a significant concern. Does it also change the location of the Nash equilibria? (Or equivalently, the optimal generator under the density-ratio-estimator interpretation of discriminators proposed by https://arxiv.org/abs/1610.03483.) I'd expect that it would, but the effects of this changed objective are not discussed in the paper. The experimental results seem promising, although not earth-shattering. I would have appreciated a comparison to other methods for guiding discriminator representation capacity, e.g. autoencoding (I'd also imagine that learning an inference network (e.g. BiGAN) might serve as a useful auxiliary task?). Overall this feels like a cute hack, supported by plausible intuition but without deep theory or compelling results on real tasks (yet). As such I'd rate it as borderline, though perhaps interesting enough to be worth presenting and discussing. A final note: this paper was difficult to read due to many grammatical errors and unclear or misleading constructions, as well as missing citations (e.g. Sec 2.1).
From the second paragraph alone: "impede their wider applications in new data domain" -> domains "extreme collapse and heavily oscillation" -> heavy oscillation "modes of real data distribution" -> modes of the real data distribution "while D fails to exploit the failure to provide better training signal to G" -> should be "this failure" to refer to the previously-described generator mode collapse, or rewrite entirely "even when they are their Jensen-Shannon divergence" -> even when their Jensen-Shannon divergence I'm sympathetic to the authors who are presumably non-native English speakers; many good papers contain mistakes, but in my opinion the level in this paper goes beyond what is appropriate for published work. I encourage the authors to have the work proofread by a native speaker; clearer writing will ultimately increase the reach and impact of the paper.
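Returning to the technical content: to illustrate what a regularizer of this kind might look like in practice, here is a minimal sketch of one plausible soft relaxation of "high joint entropy of binary activation patterns": push each unit's sign to be zero-mean across the batch, and push the sign patterns of different samples to be dissimilar. This is an illustrative surrogate, not necessarily the exact BRE term proposed in the paper.

```python
import torch

def sign_diversity_penalty(h, eps=1e-6):
    """h: (batch, units) pre-activations from one discriminator layer.

    Encourages (i) each unit's soft sign to be zero-mean over the batch and
    (ii) the sign patterns of different samples to be decorrelated.
    """
    s = torch.tanh(h)                        # soft binarization in (-1, 1)
    marginal = (s.mean(dim=0) ** 2).mean()   # each unit "on" ~half the time
    s_hat = s / (s.norm(dim=1, keepdim=True) + eps)
    gram = s_hat @ s_hat.t()                 # cosine similarity of patterns
    pairwise = (gram - torch.eye(len(s), device=s.device)).pow(2).mean()
    return marginal + pairwise               # add to the discriminator loss
```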
iclr_2018_H1bhRHeA-
Recent neural network and language models have begun to rely on softmax distributions with an extremely large number of categories. In this context calculating the softmax normalizing constant is prohibitively expensive. This has spurred a growing literature of efficiently computable but biased estimates of the softmax. In this paper we present the first two unbiased algorithms for maximizing the softmax likelihood whose work per iteration is independent of the number of classes and datapoints (and does not require extra work at the end of each epoch). We compare our unbiased methods' empirical performance to the state-of-the-art on seven real world datasets, where they comprehensively outperform all competitors.
The paper presents interesting algorithms for minimizing the softmax loss with many classes. The objective is a multi-class classification problem (using the softmax loss) with a linear model. The main idea is to rewrite the objective as a double sum using a dual formulation and then apply SGD to solve it. At each iteration, SGD samples a subset of training examples and labels. The main contributions of this paper are: 1) proposing a U-max trick to improve numerical stability, and 2) proposing an implicit SGD approach. The implicit SGD approach seems better in the experimental comparisons. I found the paper quite interesting, but I have the following comments and questions: - As pointed out by the authors, the idea of this formulation and doubly stochastic SGD is not new. Raman et al. (2016) used a similar trick to derive the double-sum formulation and solved it by doubly stochastic SGD. The authors claim that the algorithm in Raman et al. has an O(NKD) cost for updating u at the end of each epoch. However, since each epoch requires at least O(NKD) time anyway (sometimes more, as in Proposition 2), is another O(NKD) a significant bottleneck? Also, since the formulation is similar to Raman et al. (2016), a comparison is needed. - I'm confused by Propositions 1 and 2. In Appendix E.1, the formulation of the update is derived, but why do we need Newton's method to get log(1/epsilon) time complexity? I think most first-order methods, not only Newton, converge linearly (log(1/epsilon) time). Also, I guess we are assuming the objective is strongly convex? - The step size is selected on one dataset and used for all others. This might lead to divergence of the other algorithms, since the step size usually depends on the data. As we can see, OVE, NCE and IS diverge on Wiki-small, which might be fixed if the step size were chosen per dataset (in practice one can choose it using subsamples of each dataset). - All the comparisons are based on "epochs", but the competing algorithms are quite different and can have very different running times per epoch. For example, implicit SGD has another iterative solver for each update. Therefore, a timing comparison is needed to justify that implicit SGD is faster. - The claim that "implicit SGD never overshoots the optimum" needs more support. Is it proved in some previous paper? - The presentation can be improved. I think it would be helpful to state the algorithms explicitly in the main paper.
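For readers wondering how a log-partition term becomes a double sum amenable to sampling both examples and classes, one standard route (which may or may not be the exact derivation used in the paper and in Raman et al., 2016) is the variational identity for the logarithm:

```latex
\log z \;=\; \min_{u\in\mathbb{R}} \big( e^{-u} z + u - 1 \big)
\quad\Longrightarrow\quad
\log \sum_{k=1}^{K} e^{x_{ik}} \;=\; \min_{u_i} \Big( u_i - 1 + \sum_{k=1}^{K} e^{x_{ik}-u_i} \Big).
```

Substituting this into the negative log-likelihood turns the objective into a sum over examples i and classes k with one auxiliary variable u_i per example, so an unbiased stochastic gradient only needs a sampled example and a sampled class at each step.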
iclr_2018_H1rRWl-Cb
We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variable models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate). We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier. However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable. Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recently proposed extensions to the variational autoencoder family.
EDIT: I have reviewed the authors' revisions and still recommend acceptance. Summary This paper proposes assessing VAEs via two quantities: rate R (E[ KL[q(z|x) || p(z)] ]) and distortion D (E[ -log p(x|z) ]), which can be used to bound the mutual information (MI) I(x,z) from above and below respectively (i.e. H[x] - D <= I(x,z) <= R). This fact then implies the inequality H[x] <= R + D, where H[x] is the entropy of the true data distribution, and allows for the construction of a phase diagram (Figure 1) with R and D on the x and y axes respectively. Models can be plotted on the diagram to show the degree to which they favor reconstruction (D) or sampling diversity (R). The paper then reports several experiments, the first being a simulation to show that a VAE trained with the vanilla ELBO cannot recover the true rate in even a 1D example. For the second experiment, 12 models are trained by varying the encoder/decoder strength (CNN vs autoregressive) and prior (factorized Gaussian vs autoregressive vs VampPrior). Plots of D vs R and ELBO vs R are shown for the models, revealing that the same ELBO value can be decomposed into drastically different R and D values. The point is further made through qualitative results in Figure 4. Evaluation Pros: While no one facet of the paper is particularly novel (as similar observations and discussion have been made by [1-4]), the paper, as far as I'm aware, is the first to formally decompose the ELBO into the R vs D tradeoff, which is natural. As someone who works with VAEs, I didn't find the conclusions surprising, but I imagine the paper would be valuable to someone learning about VAEs. Moreover, it's nice to have a clear reference for the unutilized-latent-space behavior mentioned in various other VAE papers. The most impressive aspect of the paper is the number of models trained for the empirical investigation. Placing such varied models (CNN vs autoregressive vs VampPrior, etc.) onto the same plot for comparison (Figure 3) is a valuable contribution. Cons: As mentioned above, I didn't find the paper conceptually novel, but this isn't a significant detraction as its value (at least for VAE researchers) is primarily in the experiments (Figure 3). I do think the paper (as the 'Discussion and Further Work' section is only two sentences long) could be improved by providing a better summary of the findings and recommendations moving forward. Should generative modeling papers be reporting final R and D values in addition to marginal likelihood? How should an author demonstrate that their method isn't doing auto-decoding? The conclusion claims that "[The rate-distortion tradeoff] provides new methods for training VAE-type models which can hopefully advance the state of the art in unsupervised representation learning." Is this referring to the constrained optimization problem given in Equation #4? It seems to me that the optimal R-vs-D tradeoff is application-dependent; is this not always true? Miscellaneous / minor comments: Figure 3 would be easier to read if the dots better reflected their corresponding tuple (although I realize representing all tuple combinations in terms of color, shape, etc. is hard). I had to keep referring to the legend, losing my place in the scatter plot. I found Sections 1 and 2 rather verbose; I think some text could be cut to make room for more final discussion / recommendations.
For example, I think the first two whole paragraphs could be cut or at least condensed and moved to the related works section, as they are just summarizing research history/trends. The paper's purpose clearly starts at the 3rd paragraph ("We are interested in understanding…"). The references need to be cleaned up. There are several conference publications that are cited via ArXiv instead of the conference (IWAE should be ICLR, Bowman et al. should be CoNLL, Lossy VAE should be ICLR, Stick-Breaking VAE should be ICLR, ADAM should be ICLR, Inv Autoregressive Flow should be NIPS, Normalizing Flows should be ICML, etc.), and two different versions of the VAE paper are cited (ArXiv and ICLR). Conclusions I found this paper to present a valuable analysis of the ELBO objective and how it relates to representation learning in VAEs. I recommend the paper be accepted, although it could be substantially improved by including more discussion at the end. 1. S. Zhao, J. Song, and S. Ermon. “InfoVAE: Information Maximizing Variational Autoencoders.” ArXiv 2017. 2. X. Chen, D. Kingma, T. Salimans, Y. Duan, P. Dhariwal, J. Shulman, I. Sutskever, and P. Abbeel. “Variational Lossy Autoencoder.” ICLR 2017. 3. I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. “Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework.” ICLR 2017. 4. S. Bowman, L. Vilnis, O. Vinyals, A. Dai, R. Jozefowicz, and S. Bengio. “Generating Sentences from a Continuous Space.” CoNLL 2016.
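To make the rate-distortion decomposition discussed above concrete, here is a minimal sketch (assuming a diagonal-Gaussian encoder, a standard-normal prior, and a Bernoulli decoder; the function name and the rate weight beta are illustrative, not the paper's exact targeted objective) of how the two terms can be estimated per batch:

```python
import torch
import torch.nn.functional as F

def rate_distortion_terms(x, x_logits, mu, logvar):
    """Per-batch estimates of rate R and distortion D for a VAE.

    Assumes q(z|x) = N(mu, diag(exp(logvar))), a standard-normal prior, and a
    Bernoulli decoder with logits x_logits. Then ELBO = -(D + R), and a
    weighted objective D + beta * R targets different points on the frontier
    (beta = 1 recovers the ELBO).
    """
    # R = E[ KL(q(z|x) || N(0, I)) ], closed form for diagonal Gaussians
    rate = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
    # D = E[ -log p(x|z) ], Bernoulli reconstruction term
    distortion = F.binary_cross_entropy_with_logits(
        x_logits, x, reduction='none').flatten(start_dim=1).sum(dim=1).mean()
    return rate, distortion
```

Reporting these two numbers alongside the ELBO is exactly the kind of summary the review asks the authors to recommend.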
iclr_2018_HkinqfbAb
Recently, there has been growing interest in methods that perform neural network compression, namely techniques that attempt to substantially reduce the size of a neural network without performance degradation. In this paper, we propose a simple compression algorithm that incorporates pruning within scalar quantization, whose non-probabilistic nature results in minimal change and overhead to standard network training. The key idea in our approach is to modify the original optimization problem by adding K independent Gaussian priors and a sparsity-inducing prior over the parameters. We show that our approach is easy to implement using existing neural network libraries, generalizes ℓ1 and ℓ2 regularization, and elegantly enforces parameter tying/pruning constraints on all parameters across the network. Experimentally, we demonstrate that our method yields state-of-the-art compression on several standard benchmarks with minimal loss in accuracy while requiring significantly less hyperparameter tuning compared with related, competing approaches.
As the authors mention, weight sharing and pruning are not new to neural network compression. The proposed method closely resembles the deep compression work (Han et al., 2016), with the distinction of clustering across different layers and a lasso regularizer to encourage sparsity of the weights. Even though the change seems minimal, the authors have demonstrated its effectiveness on the benchmarks. But the description of the optimization strategy in Section 3 needs some refinement. In the soft-tying stage, why is only the regularizer (1) considered, and not the sparsity term? In the hard-tying stage, does the clustering change at each iteration? If not, this reduces to the constrained problem as in the hashed compression work (Chen et al., 2015), where the regularizer (1) has no effect since the clustering is fixed and all the weights in the same cluster are equal. Even though it is claimed that the proposed method does not require a pre-trained model for initialization, the soft-tying stage seems to take on the responsibility of "pre-training" the model. The experiment section is a weak point. It is much less convincing with no comparison results for compression on large neural networks and large datasets. The only compression result on a large neural network (VGG-11) comes with no baseline comparisons. But it already tells something: 1) what is the classification result for the reference network without compression? 2) the compression ratio is significantly reduced compared with those for MNIST. It is hard to say whether the compression performance would generalize to large networks. Also, it would be good to have an ablation test on the different parts of the objective function and the two optimization stages to show the importance of each part, especially removing the soft-tying stage and comparing the L1 regularizer against a simple pruning step after each iteration. This may be a minor issue, but it would be interesting to know: what would the compression performance be if the classification accuracy were kept at the same level as that of deep compression? As discussed in the paper, it is a trade-off between accuracy and compression: the network could be compressed to a very small size but with significant accuracy loss. Some minor issues: - In Section 1, the authors discuss a number of pitfalls of existing compression techniques, such as a large number of parameters, local-minimum issues, and layer-wise approaches. It would be clearer if the authors could explicitly and succinctly discuss which pitfalls are resolved, and how, by the proposed method towards the end of the Introduction section. - In Section 4.2, the authors discuss the insensitivity of the proposed method to the switching frequency, but no quantitative results are shown to support the claims. - What is the threshold for pruning zero weights used in Table 2? - There are many references and comparisons missing: for instance, "Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations" (NIPS 2017). That paper also considers quantization for compression, which is related to this work.
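As an illustration of what "K independent Gaussian priors plus a sparsity-inducing prior over the parameters" could amount to in the soft-tying stage, here is a minimal sketch; the mixture parameterization, the use of all weights as one flat vector, and the L1 coefficient are assumptions for illustration, not the paper's exact objective.

```python
import math
import torch

def soft_tying_penalty(weights, means, log_stds, mix_logits, l1_coef=1e-5):
    """Negative log-likelihood of all network weights under a K-component
    Gaussian mixture (encouraging weights to cluster around K shared values),
    plus an L1 term that pushes weights toward zero for pruning.

    weights: flat 1-D tensor of all parameters; means, log_stds, mix_logits
    are learnable tensors of shape (K,).
    """
    w = weights.unsqueeze(1)                        # (N, 1)
    log_pi = torch.log_softmax(mix_logits, dim=0)   # (K,)
    log_comp = (-0.5 * (w - means) ** 2 / (2.0 * log_stds).exp()
                - log_stds - 0.5 * math.log(2.0 * math.pi))   # (N, K)
    nll = -torch.logsumexp(log_pi + log_comp, dim=1).mean()
    return nll + l1_coef * weights.abs().sum()
```

Hard tying would then amount to snapping each weight to its most likely component mean and keeping that assignment fixed, which is where the comparison with the hashed-compression setting raised above becomes relevant.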
iclr_2018_S1FQEfZA-
A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification-based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. They also indicate that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, the diversity of such synthetic data is orders of magnitude smaller than that of the original data.
Overall comments: The paper tries to shed light on the comparison between different GAN variants, but the metrics introduced are not very novel, the results are not comparable with prior work, and older versions of certain models are used (WGAN instead of Improved WGAN). Section 2.1: quantifying mode collapse * This section should mention Inception scores. A model which collapses onto only one class will have a low Inception score, and this metric also uses a conv-net classifier, so the approach is very similar (the method is only mentioned briefly in Section 2.3). * The authors might not be aware of concurrent work published before the ICLR deadline, which introduces a very similar metric: https://arxiv.org/abs/1706.08500 Section 2.2: measuring diversity: * There is an inherent flaw in this metric, namely that it trains one GAN per class. One cannot generalize from this metric to how different GAN models will perform when trained on the entire dataset. One model might be able to capture more diverse distributions but lose a bit of quality, while another model might be able to create good samples when trained on low-diversity data. We already know that when looking at other generative models, we can find such examples: VAEs can obtain very good samples on CelebA, a dataset with relatively low diversity, but not-so-good samples on CIFAR. * The authors compare their experiment with Radford et al. (2015), but that needs to be done with caution. In Radford et al. (2015), the authors use a conditional generative model trained on the entire dataset. In that setting, this test is more suitable since one can test how well the model has learned the conditioning. For example, for a conditional model trained on cats and dogs, a failure mode is that the model generates only cats. This failure mode can then be captured by this metric. However, when training two models, one on cats and one on dogs, this failure mode is not present since the data is already split into classes. * The proposed metric is not only a diversity metric, it is also a quality metric: consider a situation where all the models diverge and generate random noise, with high diversity but without any structure. This metric would flag this issue, because a classifier would not be able to learn the classes, as there is no correlation between the classes and the generated images. Experimental results: * Positive insights regarding labels and CelebA. It looks like subtle labels on faces are not being captured by GAN models. Figure 1 is hard to read. * ALI having higher diversity on CelebA is explicitly mentioned in a paper the authors cite, namely "Do GANs actually learn the distribution? An empirical study". It would be nice to mention that in the paper. Would like to see: * A comparison with the Improved Wasserstein GAN model. This model is now the one used by the community, as opposed to the original Wasserstein GAN. * Models trained on CIFAR, with the reported Inception scores of the models on CIFAR. That makes the paper comparable with previous work and guards against bugs in model implementations or other parts of the code. This would also allow testing claims such as the statement that Improved GAN has more mode collapse than DCGAN, while the Improved GAN paper says the opposite. * The reason why the authors chose the models they did for comparison.
In the BiGAN (same model as ALI) paper, the authors report a low Inception score, which suggests that their model is not able to capture the subtleties of the CIFAR dataset, and this seems consistent with the results obtained in this work.
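A minimal sketch of the classification-based perspective discussed above, reduced to its simplest form: label GAN samples with a pre-trained classifier and measure how evenly the predicted classes are covered. This captures only the basic idea, not the paper's exact protocol.

```python
import numpy as np

def mode_coverage(pred_labels, num_classes):
    """Class histogram and normalized entropy of classifier predictions on
    GAN samples. A collapsed generator concentrates its mass on a few classes,
    so the normalized entropy falls well below 1.
    """
    counts = np.bincount(pred_labels, minlength=num_classes).astype(float)
    p = counts / counts.sum()
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum() / np.log(num_classes)
    return counts, entropy

# Example: predictions concentrated on two of ten classes
counts, h = mode_coverage(np.array([0, 0, 0, 1, 1, 0, 0, 1]), num_classes=10)
print(counts, round(h, 3))
```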
iclr_2018_SkZxCk-0Z
CAN NEURAL NETWORKS UNDERSTAND LOGICAL ENTAILMENT? We introduce a new dataset of logical entailments for the purpose of measuring models' ability to capture and exploit the structure of logical expressions against an entailment prediction task. We use this task to compare a series of architectures which are ubiquitous in the sequence-processing literature, in addition to a new model class, PossibleWorldNets, which computes entailment as a "convolution over possible worlds". Results show that convolutional networks present the wrong inductive bias for this class of problems relative to LSTM RNNs, tree-structured neural networks outperform LSTM RNNs due to their enhanced ability to exploit the syntax of logic, and PossibleWorldNets outperform all benchmarks.
SUMMARY The paper is fairly broad in what it is trying to achieve, but the approach is well thought out. The purpose of the paper is to investigate the effectiveness of prior machine learning methods at predicting logical entailment and then provide a new model designed for the task. Explicitly, the paper asks the following questions: "Can neural networks understand logical formula well enough to detect entailment?", and "Which architectures are best at inferring, encoding, and relating features in a purely structural sequence-based problem?". The goal of the paper is to understand the learning bias of current architectures when they are tasked with learning logical entailment. The proposed network architecture, PossibleWorldNet, is then viewed as an improvement on an earlier architecture, TreeNet. POSITIVES The structure of this paper is very well done. The paper attempts to do a lot, and succeeds on most fronts. The generated dataset used for testing logical entailment is given a constructive description which allows for future replication. The baseline benchmark networks are covered in depth, and the reader is provided with a deep understanding of the limitations of some networks with regard to exploiting structure in data. The PossibleWorldNet model is also given good coverage, and the equations provided show the means by which it operates. • A clear methodological approach to the research. The paper covers how the authors created a dataset which can be used for learning logical entailment, and then explains clearly all the previous network models which will be used in testing, as well as their proposed model. • The background information regarding each model is exceptionally thorough. The paper goes into great depth describing the pros and cons of earlier network models and why they may struggle with recognizing logical entailment. • The section describing the creation of the dataset captures the basis for the research, learning logical entailment. It describes the creation of the data, as well as the means by which the difficulty of learning is increased. • The paper provides an in-depth description of the PossibleWorldNet model, and during experimentation we see clear evidence of the model's capabilities. NEGATIVES One issue I had with the paper concerns the creation of the logical entailment dataset. Not so much how they explained the process of creating the dataset (that was very thorough), but the fact that this dataset is the only means used to test the previous network models and their new proposed network model. I wonder if it would be better to find non-generated datasets which may contain data with entailment relationships. It is questionable whether their hand-crafted network model simply learns best on their hand-crafted dataset. A single dataset is used for learning logical entailment, and that dataset was created by the researchers for the express purpose of testing neural networks' capacity to learn logical entailment. I am hesitant to call the proposed network an incredible achievement, since PossibleWorldNet effectively beat the other methods on a dataset that was created expressly for it. RELATED WORK The paper has an extensive section dedicated to covering related work. I would say the research involved was very thorough, and the researchers understood how their method was different as well as how it improved on earlier approaches.
CONCLUSION Given the thorough investigation into previous networks' capabilities in logical entailment learning, I would accept this paper as a valid scientific contribution. The paper performs a thorough analysis of the limitations that previous networks face with regard to exploiting structure from data. The paper also covers the experimental results not only by pointing out the proposed network's success, but also by analyzing why certain earlier network models were able to achieve competitive learning results. The structure of the PossibleWorldNet is also explained well, and during experimentation the model demonstrated its ability to learn structure from data. The paper would have been improved by testing on multiple datasets, not just their self-generated dataset, but the contribution of their research on their network and older networks is still justification enough for this paper.
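For readers unfamiliar with the "possible worlds" terminology, the symbolic check that motivates PossibleWorldNets is simple: A entails B exactly when no truth assignment (world) makes A true and B false. The sketch below is that symbolic check for small propositional formulas; it is not the learned model, which replaces the hard evaluator with neural embeddings of formulas and sampled worlds.

```python
import itertools

def evaluates(formula, world):
    """Evaluate a propositional formula in a world (dict var -> bool).

    Formulas are nested tuples, e.g. ('or', ('not', 'p'), 'q') for p -> q.
    """
    if isinstance(formula, str):
        return world[formula]
    op, *args = formula
    if op == 'not':
        return not evaluates(args[0], world)
    if op == 'and':
        return all(evaluates(a, world) for a in args)
    if op == 'or':
        return any(evaluates(a, world) for a in args)
    raise ValueError(op)

def entails(a, b, variables):
    """A |= B iff no truth assignment makes A true and B false."""
    for values in itertools.product([False, True], repeat=len(variables)):
        world = dict(zip(variables, values))
        if evaluates(a, world) and not evaluates(b, world):
            return False
    return True

print(entails(('and', 'p', 'q'), 'p', ['p', 'q']))  # True: (p and q) |= p
```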
iclr_2018_SkymMAxAb
In the past decade, many urban areas in China have suffered from serious air pollution problems, making air quality forecasting a hot research topic. Conventional approaches rely on numerical methods to estimate the pollutant concentration and require lots of computing power. To solve this problem, we applied widely used deep learning methods. Deep learning requires large-scale datasets to train an effective model. In this paper, we introduce a new dataset, entitled AirNet, containing a 0.25-degree-resolution grid map of mainland China with more than two years of continuous air quality measurements and meteorological data. We publish this dataset as an open resource for machine learning research and set up a baseline for a 5-day air pollution forecast. The results of experiments demonstrate that this dataset can facilitate the development of new algorithms for air quality forecasting.
This paper's main contribution is the building of a spatio-temporal dataset on air pollution indicators, as the title states. The dataset is built from open-source data and comprises pollutants measured at a number of stations together with meteorological data. Then, an air pollutant predictor is built as a baseline machine learning model using a reduced LSTM. Most of the first part's work lies in the extraction of the public data from the above-mentioned sources, the alignment of the two data sources, and sampling considerations. The paper lacks a detailed explanation of the problem it is actually addressing, omitting the current systems' performance and simply stating (Section 1.1, page 2): "Thus it became essential and urgent to set up a larger scale training dataset to enhance the accuracy of the forecast results." It also lacks definitions of certain application-domain terms and acronyms (e.g., PM2.5). Certain paragraphs need rewriting: - Section 2.2, page 3: "Latitude ranges from 75 degrees to 132 degrees and the north latitude range of is from 18 degrees to 51 degrees". - Section 3.1, page 4: "We converted the problem of the pollutant prediction as time sequential prediction problems, as in the case of giving the past pollutant concentration x0 to xt-1.". Also, Table 1 (GFS Field Description) contains 6 features, not 7 as stated in Section 2.1. For air pollutant prediction, a baseline machine learning model is built with a reduced LSTM. Results seem promising but lack a serious comparison with the results currently obtained by other approaches, as mentioned above. The statement in Section 5, page 7, "Furthermore, reduced LSTM is improved than LSTM, we assumed this is because our equation considered air pollutant dynamics, thus we gave more information to model than LSTM while keeping LSTMs advantage.", attributes the enhanced results to extra data (quantity) fed to the model rather than to the fact (quality), stated in the paper, that the meteorological conditions (dispersion, etc.) influence the air pollutant presence/concentrations at nearby stations. A rewriting and clarification of certain paragraphs is therefore recommended.
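As a reference point for the sequence framing described above (past pollutant and meteorological features in, next-step concentration out), here is a minimal sketch; the feature count, hidden size, and model name are placeholders, and this is not the paper's reduced LSTM.

```python
import torch
import torch.nn as nn

class PollutantForecaster(nn.Module):
    """Toy baseline: per grid cell, map a window of past pollutant and
    meteorological features to the next-step pollutant concentration."""
    def __init__(self, n_features=7, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # one-step-ahead prediction

model = PollutantForecaster()
past = torch.randn(8, 24, 7)          # 8 grid cells, 24 past steps, 7 features
next_concentration = model(past)      # shape (8, 1)
```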