Columns: id — string, length 11–20 · paper_text — string, length 29–163k · review — string, length 666–24.3k
iclr_2018_H1sUHgb0Z
Published as a conference paper at ICLR 2018 LEARNING FROM NOISY SINGLY-LABELED DATA Supervised learning depends on annotated examples, which are taken to be the ground truth. But these labels often come from noisy crowdsourcing platforms, like Amazon Mechanical Turk. Practitioners typically collect multiple labels per example and aggregate the results to mitigate noise (the classic crowdsourcing problem). Given a fixed annotation budget and unlimited unlabeled data, redundant annotation comes at the expense of fewer labeled examples. This raises two fundamental questions: (1) How can we best learn from noisy workers? (2) How should we allocate our labeling budget to maximize the performance of a classifier? We propose a new algorithm for jointly modeling labels and worker quality from noisy crowd-sourced data. The alternating minimization proceeds in rounds, estimating worker quality from disagreement with the current model and then updating the model by optimizing a loss function that accounts for the current estimate of worker quality. Unlike previous approaches, even with only one annotation per example, our algorithm can estimate worker quality. We establish a generalization error bound for models learned with our algorithm and show theoretically that it's better to label many examples once (versus fewer examples multiple times) when worker quality exceeds a threshold. Experiments conducted on both ImageNet (with simulated noisy workers) and MS-COCO (using the real crowdsourced labels) confirm our algorithm's benefits.
This paper focuses on the learning-from-crowds problem when there is only one (or very few) noisy label per item. The main framework is based on the Dawid-Skene model. By jointly updating the classifier weights and the confusion matrices of workers, the predictions of the classifier can help with the estimation problem when crowdsourced labels are scarce. The paper discusses the influence of label redundancy both theoretically and empirically. Results show that with a fixed budget, it’s better to label many examples once rather than fewer examples multiple times. The model and algorithm in this paper are simple and straightforward. However, I like the motivation of this paper and the discussion about the relationship between training efficiency and label redundancy. The problem of label aggregation with low redundancy is common in practice but has hardly been formally analyzed and discussed. The conclusion that labeling more examples once is better can inspire other researchers to find more efficient ways to improve crowdsourcing. Regarding the technical details, this paper is clearly written, but some experimental comparisons and claims are not very convincing. Here I list some of my questions: +About the MBEM algorithm, it’s better to make clear the difference between MBEM and a standard EM. Will it always converge? What’s its objective? +The setting of Theorem 4.1 seems too simple. Can the results be extended to more general settings, such as when workers are not identical? +When n = O(m log m), the result that \epsilon_1 is constant is counterintuitive; people usually think that a large redundancy r brings benefits for estimation. Can you explain this further? +In the CIFAR-10 experiments when r=1, each example has only one label. For the baselines weighted-MV and weighted-EM, they can only be directly trained using the same noisy labels. So can you explain why their performance is slightly different in most settings? Is it due to the random sampling procedure for the noisy labels? +For ImageNet and MS-COCO experiments with a fixed budget, you reduced the training set when increasing the redundancy, which is unfair. The reduction in performance could be caused mainly by seeing fewer raw images, not fewer labels. It would be better to train a semi-supervised model to make the settings more comparable.
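To make the alternation concrete, here is a simplified NumPy sketch of one round (my own illustration of the idea, not the authors' exact MBEM updates; all names are invented): the current model's class posteriors are used to estimate each worker's confusion matrix from (dis)agreement, and the confusion matrices are then used to produce corrected soft labels for retraining the model.

```python
import numpy as np

def estimate_confusions(model_probs, labels, workers, n_workers, n_classes):
    # model_probs: (n, K) current model posteriors; labels[i] is the single
    # noisy label that worker workers[i] assigned to example i.
    C = np.full((n_workers, n_classes, n_classes), 1e-6)   # smoothed soft counts
    for p, y_noisy, w in zip(model_probs, labels, workers):
        C[w, :, y_noisy] += p        # soft count of (true class j, reported y_noisy)
    return C / C.sum(axis=2, keepdims=True)                # rows: p(reported | true)

def corrected_soft_labels(model_probs, labels, workers, C):
    # posterior over the true class: p(y=j | x) * p(reported label | y=j, worker)
    q = model_probs * C[workers, :, labels]
    return q / q.sum(axis=1, keepdims=True)

# Toy usage: 6 examples, 3 classes, 2 workers, one noisy label per example
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=6)
labels = np.array([0, 1, 2, 0, 1, 2])
workers = np.array([0, 1, 0, 1, 0, 1])
C = estimate_confusions(probs, labels, workers, n_workers=2, n_classes=3)
print(corrected_soft_labels(probs, labels, workers, C))
```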
iclr_2018_BJij4yg0Z
A BAYESIAN PERSPECTIVE ON GENERALIZATION AND STOCHASTIC GRADIENT DESCENT We consider two questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to Zhang et al. (2016), who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs. We show that the same phenomenon occurs in small linear models. These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy. We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, we identify the "noise scale" g = ε(N/B − 1) ≈ εN/B, where ε is the learning rate, N the training set size and B the batch size. Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, B_opt ∝ εN. We verify these predictions empirically.
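For reference, the noise-scale relations referenced in this abstract and in the review below can be written out as follows (notation as stated there: ε is the learning rate, N the training set size, B the batch size, m the momentum coefficient; the momentum form is the aN/((1-m)B) expression the review cites):

```latex
g = \epsilon\left(\frac{N}{B} - 1\right) \approx \frac{\epsilon N}{B} \quad (B \ll N),
\qquad
g_{\text{momentum}} \approx \frac{\epsilon N}{(1-m)B},
\qquad
B_{\mathrm{opt}} \propto \frac{\epsilon N}{1-m}.
```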
Summary: This paper presents a very interesting perspective on why deep neural networks may generalize well, in spite of their high capacity (Zhang et al, 2017). It does so from the perspective of "Bayesian model comparison", where two models are compared based on their "marginal likelihood" (aka, their "evidence" --- the expected probability of the training data under the model, when parameters are drawn from the prior). It first shows that a simple weakly regularized (linear) logistic regression model over 200 dimensional data can perfectly memorize a random training set with 200 points, while also generalizing well when the class labels are not random (eg, when a simple linear model explains the class labels); this provides a much simpler example of a model generalizing well in spite of high capacity, relative to the experiments presented by Zhang et al (2017). It shows that in this very simple setting, the "evidence" of a model correlates well with the test accuracy, and thus could explain this phenomenon (evidence is low for a model trained on random data, but high for a model trained on real data). The paper goes on to show that if the evidence is approximated using a second order Taylor expansion of the cost function around a minimum $w_0$, then the evidence is controlled by the cost at the minimum, and by the logarithm of the ratio of the curvature at the minimum compared to the regularization constant (eg, standard deviation of gaussian prior). Thus, Bayesian evidence prefers minima that are both deep and broad. This provides a way of comparing models in a way which is independent of the model parametrization (unfortunately, however, computing the evidence is intractable for large networks). The paper then discusses how SGD can be seen as an algorithmic way of finding minima with large "evidence" --- the "noise" in the gradient estimation helps the model avoid "sharp" minima, while the gradient helps the model find "deep" minima. The paper shows that SGD can be understood using stochastic differential equations, where the noise scale is approximately aN/((1-m)B) (a = learning rate, N = size of training set, B = batch size, m = momentum). It argues that because there should be an optimal noise scale (which maximizes test performance), the batch size should be taken proportional to the learning rate, as well as the training set size, and proportional to 1/(1-m). These scaling rules are confirmed experimentally (DNN trained on MNIST). Thus, this Bayesian perspective can also help explain the observation that models trained with smaller batch sizes (noisier gradient estimates) often generalize better than those with larger batch sizes (Keskar et al., 2016). These scaling rules provide guidance on how to increase the batch size, which is desirable for increasing the parallelism of SGD training. Review: Quality: The quality of the work is high. Experiments and analysis are both presented clearly. Clarity: The paper is relatively clear, though some of the connections between the different parts of the paper felt unclear to me: 1) It would be nice if the paper were to explain, from a theoretical perspective, why large evidence should correspond to better generalization, or provide an overview of the work which has shown this (eg, Rissanen, 1983). 2) Could margin-based generalization bounds explain the superior generalization performance of the linear model trained on random vs. non-random data? It seems to me that the model trained on meaningful data should have a larger margin. 
3) The connection between the work on Bayesian evidence, and the work on SGD, felt very informal. The link seems to be purely intuitive (SGD should converge to minima with high evidence, because its updates are noisy). Can this be formalized? There is a footnote on page 7 regarding Bayesian posterior sampling -- I think this should be brought into the body of the paper, and explained in more detail. 4) The paper does not give any background on stochastic differential equations, and why there should be an optimal noise scale 'g', which remains constant during the stochastic process, for converging to a minimum with high evidence. Are there any theoretical results which can be leveraged from the stochastic processes literature? For example, are there results which prove anything regarding the convergence of a stochastic process under different amounts of noise? 5) It was unclear to me why momentum was used in the MNIST experiments. This seems to complicate the experimental setting. Does the generalization gap not appear when no momentum is used? Also, why is the same learning rate used for both small and large batch training for Figures 3 and 4? If the learning rate were optimized together with batch size (eg, keeping aN/B constant), would the generalization gap still appear? Figure 5a seems to suggest that it would not appear (peaks appear to all have the same test accuracy). 6) It was unclear to me whether the analysis of SGD as a stochastic differential equation with noise scale aN/((1-m)B) was a contribution of this paper. It would be good if it were made clearer which parts of the mathematical analysis in sections 2 and 5 are original. 7) Some small feedback: The notation $< x_i > = 0$ and $< x_i^2 > = 1$ is not explained. Is each feature being normalized to be zero mean, unit variance, or is each training example being normalized? Originality: The work seems to be a relatively original combination of ideas, from Bayesian evidence to deep neural network research. However, I am not familiar enough with the literature on Bayesian evidence, or the literature on sharp/broad minima, and their generalization properties, to be able to confidently say how original this work is. Significance: I believe that this work is quite significant in two different ways: 1) "Bayesian evidence" provides a nice way of understanding why neural nets might generalize well, which could lead to further theoretical contributions. 2) The scaling rules described in section 5 could help practitioners use much larger batch sizes during training, by simultaneously increasing the learning rate, the training set size, and/or the momentum parameter. This could help parallelize neural network training considerably. Some things which could limit the significance of the work: 1) The paper does not provide a way of measuring the (approximate) evidence of a model. It simply says it is prohibitively expensive to compute for large models. Can the "Gaussian approximation" to the evidence (equation 10) be approximated efficiently for large neural networks? 2) The paper does not prove that SGD converges to models of high evidence, or formally relate the noise scale 'g' to the quality of the converged model, or relate the evidence of the model to its generalization performance. Overall, I feel the strengths of the paper outweigh its weaknesses. I think that the paper would be made stronger and clearer if the questions I raised above are addressed prior to publication.
iclr_2018_HyUNwulC-
PARALLELIZING LINEAR RECURRENT NEURAL NETS OVER SEQUENCE LENGTH Recurrent neural networks (RNNs) are widely used to model sequential data but their non-linear dependencies between sequence elements prevent parallelizing training over sequence length. We show the training of RNNs with only linear sequential dependencies can be parallelized over the sequence length using the parallel scan algorithm, leading to rapid training on long sequences even with small minibatch size. We develop a parallel linear recurrence CUDA kernel and show that it can be applied to immediately speed up training and inference of several state of the art RNN architectures by up to 9x. We abstract recent work on linear RNNs into a new framework of linear surrogate RNNs and develop a linear surrogate model for the long short-term memory unit, the GILR-LSTM, that utilizes parallel linear recurrence. We extend sequence learning to new extremely long sequence regimes that were previously out of reach by successfully training a GILR-LSTM on a synthetic sequence classification task with a one million timestep dependency.
# Summary and Assessment The paper addresses an important issue–that of making learning of recurrent networks tractable for sequence lengths well beyond 1’000s of time steps. A key problem here is that processing such sequences with ordinary RNNs requires a reduce operation, where the output of the net at time step t depends on the outputs of *all* its predecessors. The authors now make a crucial observation, namely that a certain class of RNNs allows evaluation in a non-sequential fashion through a so-called scan operator. Here, if certain conditions are satisfied, the calculation of the output can be parallelised massively. In the following, the authors explore the landscape of RNNs satisfying the necessary conditions. The performance is investigated in terms of wall clock time. Further, experimental results on problems with previously untackled sequence lengths are reported. The paper is certainly relevant, as it can pave the way towards the application of recurrent architectures to problems that have extremely long-term dependencies. To me, the execution seems sound. The experiments back up the claim. ## Minor - I challenge the claim that thousands and millions of time steps are a common issue in “robotics, remote sensing, control systems, speech recognition, medicine and finance”, as claimed in the first paragraph of the introduction. IMHO, most problems in these domains get away with a few hundred time steps; nevertheless, I’d appreciate a few examples where this is the case to better justify the method.
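To make the scan idea concrete, here is a small NumPy sketch (my own illustration, not the authors' CUDA kernel) that evaluates the linear recurrence h_t = a_t * h_{t-1} + b_t with a Hillis–Steele-style parallel scan over compositions of affine maps, and checks the result against the plain sequential loop.

```python
import numpy as np

def parallel_linear_recurrence(a, b, h0=0.0):
    """Inclusive scan over the affine maps h -> a_t*h + b_t.

    Composition rule: (a_i, b_i) then (a_j, b_j) gives (a_j*a_i, a_j*b_i + b_j),
    which is associative, so the recurrence can be evaluated in O(log T)
    parallel steps (here simulated with vectorized NumPy)."""
    A, B = a.astype(float).copy(), b.astype(float).copy()
    T, offset = len(a), 1
    while offset < T:
        A_prev, B_prev = A[:-offset].copy(), B[:-offset].copy()
        B[offset:] = A[offset:] * B_prev + B[offset:]
        A[offset:] = A[offset:] * A_prev
        offset *= 2
    return A * h0 + B          # h_1, ..., h_T

# Check against the plain sequential loop
rng = np.random.default_rng(0)
a, b = rng.uniform(0, 1, size=50), rng.normal(size=50)
h, seq, prev = parallel_linear_recurrence(a, b), [], 0.0
for t in range(50):
    prev = a[t] * prev + b[t]
    seq.append(prev)
print(np.allclose(h, np.array(seq)))   # True
```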
iclr_2018_rkMt1bWAZ
We achieve a bias-variance decomposition for Boltzmann machines using an information geometric formulation. Our decomposition leads to an interesting phenomenon: the variance does not necessarily increase when more parameters are included in Boltzmann machines, while the bias always decreases. Our result gives theoretical evidence for the generalization ability of deep learning architectures because it shows the possibility of increasing the representation power while avoiding variance inflation.
Summary of the paper: The paper derives a lower bound on the expected squared KL-divergence between a true distribution and the sample based maximum likelihood estimate (MLE) of that distribution modelled by a Boltzmann machine (BM), based on methods from information geometry. This KL-divergence is first split into the squared KL-divergence between the true distribution and the MLE of that distribution, and the expected squared KL-divergence between the MLE of the true distribution and the sample based MLE (in a similar spirit to splitting the excess error into approximation and estimation error in statistical learning theory). The latter is then lower bounded (leading to a lower bound on the overall KL-divergence) by a term which does not necessarily increase if the number of model parameters is increased. Pros: - Using insights from information geometry opens up a very interesting and (to my knowledge) new approach for analysing the generalisation ability of ML models. - I am not an expert on information geometry and I did not find the time to follow all the steps of the proof in detail, but the analysis seems to be correct. Cons: - The fact that the lower bound does not necessarily increase with a growing number of parameters does not guarantee that the same holds true for the KL-divergence (in this sense an upper bound would be more informative). Therefore, it is not clear how much insight the theoretical analysis gives practitioners (it could be nice to analyse the tightness of the bound for toy models). - Another drawback regarding the practical impact is that the theorem bounds the expected squared KL-divergence between a true distribution and the sample based MLE, while training minimises the divergence between the empirical distribution and the model distribution (i.e. the sample based MLE in the optimal case), and the theorem does not show the dependency on the latter. I found some parts difficult to understand, and clarity could be improved e.g. by - explaining why minimising KL(\hat P, P_B) is equivalent to minimising the KL-divergence between the empirical distribution and the Gibbs distribution \Phi. - explaining in which sense the formula on page 4 is equivalent to “the learning equation of Boltzmann machines”. - explaining what is the MLE of the true distribution (I assume the closest distribution in the set of distributions that can be modelled by the BM). Minor comments: - page 1: and DBMs….(Hinton et al., 2006) : The paper describes deep belief networks (DBNs) not DBMs - \theta is used to describe the function in eq. (2) as well as the BM parameters in Section 2.2 - page 5: “nodes H is” -> “nodes H are” REVISION: Thanks to the authors for replying to my comments and making the changes. I think they improved the paper. On the other hand the other reviewers raised valid questions, that led to my decision to not change the overall rating of the paper.
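Schematically, the decomposition described in the summary can be written as below (my notation, which may differ from the paper's information-geometric formulation: P is the true distribution, P_B its closest point in the Boltzmann-machine family, \hat P_B the sample-based MLE, and D the KL divergence):

```latex
\mathbb{E}\left[ D\!\left(P, \hat{P}_B\right)^2 \right]
= \underbrace{D\!\left(P, P_B\right)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\left[ D\!\left(P_B, \hat{P}_B\right)^2 \right]}_{\text{variance}}
```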
iclr_2018_HJDV5YxCW
Recent work has shown that performing inference with fast, very-low-bitwidth (e.g., 1 to 2 bits) representations of values in models can yield surprisingly accurate results. However, although 2-bit approximated networks have been shown to be quite accurate, 1 bit approximations, which are twice as fast, have restrictively low accuracy. We propose a method to train models whose weights are a mixture of bitwidths, that allows us to more finely tune the accuracy/speed trade-off. We present the "middle-out" criterion for determining the bitwidth for each value, and show how to integrate it into training models with a desired mixture of bitwidths. We evaluate several architectures and binarization techniques on the ImageNet dataset. We show that our heterogeneous bitwidth approximation achieves superlinear scaling of accuracy with bitwidth. Using an average of only 1.4 bits, we are able to outperform state-of-the-art 2-bit architectures.
This paper suggests a method for varying the degree of quantization in a neural network during the forward propagation phase. Though this is an important direction to investigate, there are several issues: 1. Comparison with previous results is misleading: a. 1-bit weights and floating point activations: Rastegari et al. got 56.8% accuracy on AlexNet, which is better than this paper's 1.4-bit result of 55.2%. b. Hubara et al. got 51% with 1-bit weights and 2-bit activations, which also included quantizing the first and last layers, in contrast to this paper. Therefore, it is not clear if there is a significant benefit in the proposed method, which achieves 51.5% when decreasing the activation precision to 1.4 bits. Therefore, it is not clear that the proposed method improves over previous approaches. 2. It is not clear to me: in which dimension of the tensors are we saving the scale factor? If it is per feature map, or neuron, doesn't this eliminate the main benefit of quantization, namely doing efficient binarized operations for weight*activation during the forward pass? 3. The review of the literature is inaccurate. For example, it is not true that Courbariaux et al. (2016) “further improved accuracy on small datasets”: the main novelty there was binarizing the activations (which typically decreased the accuracy). Also, it is not clear if the scale factors introduced by XNOR-Net indeed allowed "a significant improvement over previous work" on ImageNet (e.g., see DoReFa and Hubara et al., who got similar results using binarized weights and activations on ImageNet without scale factors). Lastly, the statement “Typical approaches include linearly placing the quantization points” is inaccurate: it was observed that logarithmic quantization works better in various cases. For example, see Miyashita, Lee and Murmann 2016, and Hubara et al. %%% After Author's Clarification %%% This paper's results seem more positive now, and I have therefore increased my score, assuming the authors will revise the paper accordingly.
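For context on the scale factors and multi-bit approximations discussed above, here is a generic NumPy sketch of XNOR-Net-style 1-bit binarization with a scale factor, plus a simple residual multi-bit extension; this is illustrative background only, not the paper's middle-out criterion or its training procedure.

```python
import numpy as np

def binarize_1bit(w):
    # XNOR-Net-style approximation: w ≈ alpha * sign(w), with alpha = mean(|w|).
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def binarize_multibit(w, bits):
    # Residual binarization: approximate w by a sum of `bits` scaled sign
    # tensors, each fitting the residual left by the previous ones.
    approx = np.zeros_like(w, dtype=float)
    for _ in range(bits):
        approx += binarize_1bit(w - approx)
    return approx

w = np.random.default_rng(0).normal(size=(64, 64))
for k in (1, 2, 3):
    err = np.linalg.norm(w - binarize_multibit(w, k)) / np.linalg.norm(w)
    print(f"{k}-bit relative error: {err:.3f}")
```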
iclr_2018_rydeCEhs-
SMASH: ONE-SHOT MODEL ARCHITECTURE SEARCH THROUGH HYPERNETWORKS Designing architectures for deep neural networks requires expert knowledge and substantial computation time. We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture. By comparing the relative validation performance of networks with HyperNet-generated weights, we can effectively search over a wide range of architectures at the cost of a single training run. To facilitate this search, we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as special cases. We validate our method (SMASH) on CIFAR-10 and CIFAR-100, STL-10, ModelNet10, and Imagenet32x32, achieving competitive performance with similarly-sized hand-designed networks.
Summary of paper - This paper presents SMASH (or the One-Shot Model Architecture Search through Hypernetworks) which has two training phases (one to quickly train a random sample of network architectures and one to train the best architecture from the first stage). The paper presents a number of interesting experiments and discussions about those experiments, but offers more exciting ideas about training neural nets than experimental successes. Review - The paper is very well written with clear examples and an excellent contextualization of the work among current work in the field. The introduction and related work are excellently written providing both context for the paper and a preview of the rest of the paper. The clear writing makes the paper easy to read, which also makes clear the various weaknesses and pitfalls of SMASH. The SMASH framework appears to provide more interesting contributions to the theory of training Neural Nets than the application of said training. While in some experiments SMASH offers excellent results, in others the results are lackluster (which the authors admit, offering possible explanations). It is a shame that the authors chose to push their section on future work to the appendices. The glimmers of future research directions (such as the end of the last paragraph in section 4.2) were some of the most intellectually exciting parts of the paper. This choice may be a reflection of preferring to highlight the experimental results over possible contributions to theory of neural nets. Pros - * Strong related work section that contextualizes this paper among current work * Very interesting idea to more efficiently find and train best architectures * Excellent and thought provoking discussions of middle steps and mediocre results on some experiments (i.e. last paragraph of section 4.1, and last paragraph of section 4.2) * Publicly available code Cons - * Some very strong experimental results contrasted with some mediocre results * The balance of the paper seems off, using more text on experiments than the contributions to theory. * (Minor) - The citation style is inconsistent in places. =-=-=-= Response to the authors I thank the authors for their thoughtful responses and for the new draft of their paper. The new draft laid plain the contribution of the memory bank which I had missed in the first version. As expected, the addition of the future work section added further intellectual excitement to the paper. The expansion of section 4.1 addressed and resolved my concerns about the balance of the paper by effortlessly intertwining theory and application. I do have one question from this section - In table 1, the authors report p-values but fail to include them in their interpretation; what is the purpose of including these p-values, especially noting that only one falls under the typical threshold for significance?
iclr_2018_rkEtzzWAb
Generative modeling of high dimensional data like images is a notoriously difficult and ill-defined problem. In particular, how to evaluate a learned generative model is unclear. In this paper, we argue that adversarial learning, pioneered with generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for unsupervised tasks, such as for generating "visually realistic" images. By relating GANs and structured prediction under the framework of statistical decision theory, we put into light links between recent advances in structured prediction theory and the choice of the divergence in GANs. We argue that the insights about the notions of "hard" and "easy" to learn losses can be analogously extended to adversarial divergences. We also discuss the attractive properties of parametric adversarial divergences for generative modeling, and perform experiments to show the importance of choosing a divergence that reflects the final task.
This paper introduces a family of "parametric adversarial divergences" and argues that they have advantages over other divergences in generative modelling, especially for structured outputs. There's clear value in having good inductive biases (e.g. expressed in the form of the discriminator architecture) when defining divergences for practical applications. However, I think that the paper would be much more valuable if its focus shifted from presenting a new notion of divergence to deep-diving into the effect of inductive biases and presenting more specific results (theoretical and / or empirical) in structured prediction or other problems. In its current form the paper doesn't seem particularly strong for either the divergence or GAN literatures. Some reasons below: * There are no specific results on properties of the divergences, or axioms that justify them. I think that presenting a very all-encompassing formulation without a strong foundation does not add value. * There's abundant literature on f-divergences which shows that there's a 1-1 relationship between divergences and optimal (Bayes) risks of classification problems (e.g. Reid et al. Information, Divergence and Risk for Binary Experiments in JMLR and Garcia-Garcia et al. Divergences and Risks for Multiclass Experiments in COLT). This disproves the point that the authors make that it's not possible to encode information about the final task in the divergence. If the loss for the task is proper, then it's well known how to construct a divergence which coincides with the optimal risk. * The divergences presented in this work are different from the above since the risk is minimised over a parametric class instead of over the whole set of integrable functions. However, practical estimators of f-divergences also reduce the optimization space (e.g. unit ball in a RKHS as in Nguyen et al. Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization or Ruderman et al. Tighter Variational Representations of f-Divergences via Restriction to Probability Measures). So, given the lack of strong foundation for the formulation, "parametric adversarial divergences" feel more like estimators of other divergences than a relevant new family. * There are many estimators for f-divergences (like the ones cited above and many others based e.g. on nearest-neighbors) that are sample-based and thus correspond to the "implicit" case that the authors discuss. They don't necessarily need to use the dual form. So table 1 and the first part of Section 3.1 are not accurate. * The experiments are few and too specific, especially given that the paper presents a very general framework. The first experiment just shows that Wasserstein GANs don't perform well on a specific dataset and uses that to validate a point about those GANs not being good for high dimensions due to their sample complexity. That feels like confirmation bias and also does not really say anything about the parametric adversarial GANs, which are the focus of the paper. In summary, I like the authors' idea to explore the restriction of the function class of dual representations to produce useful-in-practice divergences, but the paper feels a bit middle of the road. The theory is not strong and the experiments don't necessarily support the intuitive claims made in the paper.
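For concreteness, the restriction the review refers to can be written schematically (my notation, roughly following Nguyen et al.; the paper's exact definition may differ): the variational representation of an f-divergence takes a supremum over all measurable test functions, while a parametric adversarial divergence restricts it to a parametric family such as discriminators of a fixed architecture, giving a lower bound.

```latex
D_f(P \,\|\, Q) = \sup_{T} \; \mathbb{E}_{x \sim P}\!\left[T(x)\right] - \mathbb{E}_{x \sim Q}\!\left[f^{*}\!\big(T(x)\big)\right],
\qquad
D_{f,\Theta}(P \,\|\, Q) = \sup_{\theta \in \Theta} \; \mathbb{E}_{x \sim P}\!\left[T_\theta(x)\right] - \mathbb{E}_{x \sim Q}\!\left[f^{*}\!\big(T_\theta(x)\big)\right] \;\le\; D_f(P \,\|\, Q),
```

where f* denotes the convex conjugate of f.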
iclr_2018_Skp1ESxRZ
Published as a conference paper at ICLR 2018 TOWARDS SYNTHESIZING COMPLEX PROGRAMS FROM INPUT-OUTPUT EXAMPLES In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Despite its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks is much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restricting the search space. Thus our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieving generalizability. Thus, we bake in the notion of recursion in the design of our non-differentiable machine. Third, reinforcement learning is the key to learning how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold boot. We develop a novel two-phase reinforcement learning-based search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500× longer than the training samples.
This paper presents a reinforcement learning-based approach to learn context-free parsers from pairs of input programs and their corresponding parse trees. The main idea of the approach is to learn a neural controller that operates over a discrete space of programmatic actions such that the controller is able to produce the desired parse trees for the input programs. The neural controller is trained using a two-phase reinforcement learning approach where the first phase is used to find a set of candidate traces for each input-output example and the second phase is used to find a satisfiable specification comprising one unique trace per example such that there exists a program that is consistent with all the traces. The approach is evaluated on two datasets, learning parsers for an imperative WHILE language and a functional LAMBDA language. The results show that the proposed approach is able to achieve 100% generalization on test sets with programs up to 100x longer than the training programs, while baseline approaches such as seq2seq and stack LSTM do not generalize at all. The idea to decompose the synthesis task into two sub-tasks of first learning a set of individual traces for each example, and then learning a program consistent with a satisfiable subset of traces is quite interesting and novel. The use of reinforcement learning in the two phases of finding candidate trace sets with different reward functions for different operators and searching for a satisfiable subset of traces is also interesting. Finally, the results leading to perfect generalization on parsing 100x longer input programs are also quite impressive. While the presented results are impressive, a lot of design decisions such as designing specific operators (Call, Reduce,..) and their specific semantics seem to be quite domain-specific for the parsing task. The comparison with general approaches such as seq2seq and stack LSTM might not be that fair as they are not restricted to only those operators and this possibly also explains the low generalization accuracies. Can the authors comment on the generality of the presented approach to some other program synthesis tasks? For comparison with the baseline networks such as seq2seq and stack-LSTM, what happens if the number of training examples is 1M (say programs up to size 100)? 10k might be too small a number of training examples and these networks can easily overfit such a small dataset. The paper mentions that developing a parser can take up to 2x/3x more time than developing the training set. How large were the 150 examples that were used for training the models and were they hand-designed or automatically generated by a parsing algorithm? Hand generating parse trees for complex expressions seems to be more tedious and error-prone than writing a modular parser. Is the reason there are only 3 to 5 candidate traces per example that the training examples are small? For longer programs, I can imagine there can be thousands of bad traces, as only one small mistake needs to propagate through a full trace. Related to this question, what happens to the proposed approach if it is trained with programs of length 1000? What is the intuition behind keeping M1, M2 and M3 constant? Shouldn’t they be adaptive values with respect to the number of candidate traces found so far? For phase-1 of learning candidate traces, what happens if the algorithm only uses the outer loop (M2) and performs REINFORCE without the inner loop? 
The current paper presentation is a bit too dense to clearly understand the LL machine model and the two-phase algorithm. A lot of important details are currently in the appendix section with several forward references. I would suggest moving Figure 3 from appendix to the main paper, and also add a concrete example in section 4 to better explain the two-phase strategy.
iclr_2018_ByOfBggRZ
Published as a conference paper at ICLR 2018 DETECTING STATISTICAL INTERACTIONS FROM NEURAL NETWORK WEIGHTS Interpreting neural networks is a crucial and challenging task in machine learning. In this paper, we develop a novel framework for detecting statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights. Depending on the desired interactions, our method can achieve significantly better or similar interaction detection performance compared to the state-of-the-art without searching an exponential solution space of possible interactions. We obtain this accuracy and efficiency by observing that interactions between input features are created by the non-additive effect of nonlinear activation functions, and that interacting paths are encoded in weight matrices. We demonstrate the performance of our method and the importance of discovered interactions via experimental results on both synthetic datasets and real-world application datasets.
Based on a hierarchical hereditary assumption, this paper identifies pairwise and high-order feature interactions by re-interpreting neural network weights, assuming higher-order interactions exist only if all their induced lower-order interactions exist. Using a multiplication of the absolute values of all neural network weight matrices on top of the first hidden layer, this paper defines the aggregated strength z_r of each hidden unit r contributing to the final target output y. Multiplying z_r by some statistic of the weights connecting a subset of input features to r and summing over r results in the final interaction strength of each feature interaction subset, with feature interaction order equal to the size of each feature subset. Main issues: 1. Aggregating neural network weights to identify feature interactions is very interesting. However, completely ignoring activation functions makes the method quite crude. 2. High-order interacting features must share some common hidden unit somewhere in a hidden layer within a deep neural network. Restricting to the first hidden layer in Algorithm 1 inevitably misses some important feature interactions. 3. The neural network weights depend heavily on the l1-regularized neural network training, but a group lasso penalty makes much more sense. See Group Sparse Regularization for Deep Neural Networks (https://arxiv.org/pdf/1607.00485.pdf). 4. The experiments are only conducted on some synthetic datasets with very small feature dimensionality p. Large-scale experiments are needed. 5. There are some important references missing. For example, RuleFit is a good baseline method for identifying feature interactions based on random forest and l1-logistic regression (Friedman and Popescu, 2005, Predictive learning via rule ensembles); Relaxing strict hierarchical hereditary constraints, high-order l1-logistic regression based on tree-structured feature expansion identifies pairwise and high-order multiplicative feature interactions (Min et al. 2014, Interpretable Sparse High-Order Boltzmann Machines); Without any hereditary constraint, feature interaction matrix factorization with l1 regularization identifies pairwise feature interactions on datasets with high-dimensional features (Purushotham et al. 2014, Factorized Sparse Learning Models with Interpretable High Order Feature Interactions). 6. At least, RuleFit (Random Forest regression for getting rules + l1-regularized regression) should be used as a baseline in the experiments. Minor issues: Ranking of feature interactions in Algorithm 1 should be explained in more detail. On page 3: b^{(l)} \in R^{p_l}, l should be from 1, .., L. You have b^y. In summary, the idea of using neural networks for screening pairwise and high-order feature interactions is novel, significant, and interesting. However, I strongly encourage the authors to perform additional experiments with careful experiment design to address some common concerns in the reviews/comments for the acceptance of this paper. ======== The additional experimental results are convincing, so I updated my rating score.
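To illustrate the aggregation described in the summary, here is a NumPy sketch (my own; the per-unit statistic over the candidate feature subset is an assumption — the minimum absolute first-layer weight — and may differ from the paper's exact choice):

```python
import numpy as np

def interaction_strength(W_list, w_y, feature_subset):
    """Sketch of the weight-aggregation heuristic described above.

    W_list: [W1, ..., WL], where W_l has shape (p_l, p_{l-1}) and p_0 is the
    number of input features; w_y has shape (p_L,)."""
    W1 = np.abs(W_list[0])
    z = np.abs(w_y)                       # influence of top hidden units on y
    for W in reversed(W_list[1:]):
        z = z @ np.abs(W)                 # back-propagate absolute weights
    # z now has length p_1: aggregated strength of each first-layer unit r
    subset_stat = W1[:, feature_subset].min(axis=1)   # assumed statistic
    return float(z @ subset_stat)

# Toy usage with random weights: 5 input features, 3 hidden layers
rng = np.random.default_rng(0)
W_list = [rng.normal(size=(10, 5)), rng.normal(size=(8, 10)), rng.normal(size=(6, 8))]
w_y = rng.normal(size=6)
print(interaction_strength(W_list, w_y, [0, 2]))   # strength of the pair (x0, x2)
```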
iclr_2018_BJInEZsTb
Workshop track -ICLR 2018 LEARNING REPRESENTATIONS AND GENERATIVE MODELS FOR 3D POINT CLOUDS Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep autoencoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable basic shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation. We perform a thorough study of different generative models including: GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian mixture models (GMMs). For our quantitative evaluation we propose measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our careful evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs produce the best results.
Summary: This paper proposes generative models for point clouds. First, they train an auto-encoder for 3D point clouds, somewhat similar to PointNet (by Qi et al.). Then, they train generative models over the auto-encoder's latent space, both using a "latent-space GAN" (l-GAN) that outputs latent codes, and a Gaussian Mixture Model. To generate point clouds, they sample a latent code and pass it to the decoder. They also introduce a "raw point cloud GAN" (r-GAN) that, instead of generating a latent code, directly produces a point cloud. They evaluate the methods on several metrics. First, they show that the autoencoder's latent space is a good representation for classification problems, using the ModelNet dataset. Second, they evaluate the generative model on several metrics (such as Jensen-Shannon Divergence) and study the benefits and drawbacks of these metrics, and suggest that one-to-one mapping metrics such as earth mover's distance are desirable over Chamfer distance. Methods such as the r-GAN score well on the latter by over-representing parts of an object that are likely to be filled. Pros: - It is interesting that the latent space models are most successful, including the relatively simple GMM-based model. Is there a reason that these models have not been as successful in other domains? - The comparison of the evaluation metrics could be useful for future work on evaluating point cloud GANs. Due to the simplicity of the method, this paper could be a useful baseline for future work. - The part-editing and shape analogies results are interesting, and it would be nice to see these expanded in the main paper. Cons: - How does a model that simply memorizes (and randomly samples) the training set compare to the auto-encoder-based models on the proposed metrics? How does the diversity of these two models differ? - The paper simultaneously proposes methods for generating point clouds, and for evaluating them. The paper could therefore be improved by expanding the section comparing to prior, voxel-based 3D methods, particularly in terms of the diversity of the outputs. Although the performance on automated metrics is encouraging, it is hard to conclude much about under what circumstances one representation or model is better than another. - The technical approach is not particularly novel. The auto-encoder performs fairly well, but it is just a series of MLP layers that output a Nx3 matrix representing the point cloud, trained to optimize EMD or Chamfer distance. The most successful generative models are based on sampling values in the auto-encoder's latent space using simple models (a two-layer MLP or a GMM). - While it is interesting that the latent space models seem to outperform the r-GAN, this may be due to the relatively poor performance of r-GAN rather than to good performance of the latent space models, and directly training a GAN on point clouds remains an important problem. - The paper could possibly be clearer by integrating more of the "background" section into later sections. Some of the GAN figures could also benefit from having captions. Overall, I think that this paper could serve as a useful baseline for generating point clouds, but I am not sure that the contribution is significant enough for acceptance.
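As a reference point for the latent-space generative models discussed above, here is a minimal sketch of the "fit a GMM on AE latent codes, sample, decode" recipe; the encode/decode functions and the random data are placeholders standing in for the trained point-cloud autoencoder, not the authors' model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
num_points, latent_dim = 2048, 128
# Placeholder linear "AE": a random projection stands in for the trained encoder/decoder.
proj = rng.normal(size=(num_points * 3, latent_dim)) / np.sqrt(num_points * 3)

def encode(point_clouds):            # (N, num_points, 3) -> (N, latent_dim)
    return point_clouds.reshape(len(point_clouds), -1) @ proj

def decode(codes):                   # (M, latent_dim) -> (M, num_points, 3)
    return (codes @ proj.T).reshape(len(codes), -1, 3)

train_clouds = rng.normal(size=(500, num_points, 3))      # placeholder data
codes = encode(train_clouds)
gmm = GaussianMixture(n_components=32, covariance_type="diag").fit(codes)
samples, _ = gmm.sample(10)          # sample latent codes from the GMM
generated_clouds = decode(samples)   # decode them into point clouds
print(generated_clouds.shape)        # (10, 2048, 3)
```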
iclr_2018_SyYYPdg0-
We capitalize on the natural compositional structure of images in order to learn object segmentation with weakly labeled images. The intuition behind our approach is that removing objects from images will yield natural images, however removing random patches will yield unnatural images. We leverage this signal to develop a generative model that decomposes an image into layers, and when all layers are combined, it reconstructs the input image. However, when a layer is removed, the model learns to produce a different image that still looks natural to an adversary, which is possible by removing objects. Experiments and visualizations suggest that this model automatically learns object segmentation on images labeled only by scene better than baselines.
This paper creates a layered representation in order to better learn segmentation from unlabeled images. It is well motivated, as Fig. 1 clearly shows the idea that if the segmentation was removed properly, the result would still be a natural image. However, the method itself as described in the paper leaves many questions about whether it can achieve the proposed goal. I cannot see from the formulation why this model would work as advertised. The formulation (3-4) looks like a standard GAN, with some twist about measuring the GAN loss in the z space (this has been used in e.g. PPGN and CVAE-GAN). I don't see any term that would guarantee: 1) Each layer is a natural image. This was advertised in the paper, but the loss function is only on the final product G_K. The way it is written in the paper, the result of each layer does not need to go through a discriminator. Nothing seems to have been done to ensure that each layer outputs a natural image. 2) None of the layers is degenerate. There does not seem to be any constraint either regularizing the content in each layer, or preventing any layer from being degenerate. 3) The mask being contiguous. I don't see any term ensuring that the mask is contiguous; without such terms I imagine this kind of optimization would normally lead to a lot of small fragmented areas being treated as the mask. The claim that this paper is for unsupervised semantic segmentation is overblown. A major problem is that when conducting experiments, all the images seem to be taken from a single category; this implicitly uses the label information of the category. In that regard, this cannot be viewed as an unsupervised algorithm. Even with that, the results definitely looked too good to be true. I have a really difficult time believing that such a standard GAN optimization would not generate any of the aforementioned artifacts and would perform exactly as the authors advertised. Even if it does work as advertised, the utilization of implicit labels would make it subject to comparisons with a lot of weakly-supervised learning papers with far better results than shown in this paper. Hence I am pretty sure that this is not up to the standards of ICLR. I have read the rebuttal and am still not convinced. I don't think the authors managed to convince me that this method would work the way it's advertised. I also agree with Reviewer 2 that there is a lack of comparison against baselines.
iclr_2018_B1D6ty-A-
We present DANTE, a novel method for training neural networks, in particular autoencoders, using the alternating minimization principle. DANTE provides a distinct perspective in lieu of traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimization problem. We show that for autoencoder configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform the alternations very effectively. DANTE effortlessly extends to networks with multiple hidden layers and varying network configurations. In experiments on standard datasets, autoencoders trained using the proposed method were found to be very promising and competitive to traditional backpropagation techniques, both in terms of quality of solution, as well as training speed.
After reading the rebuttal: The authors addressed some of my theoretical questions. I think the paper is borderline, leaning towards accept. I do want to note my other concerns: I suspect the theoretical results obtained here are somewhat restricted to the least-squares, autoencoder loss. And note that the authors show that the proposed algorithm performs comparably to SGD, but not significantly better. The classification result (Table 1) was obtained on the autoencoder features instead of training a classifier on the original inputs. So it is not clear if the proposed algorithm is better for training the classifier, which may be of more interest. ============================================================= This paper presents an algorithm for training deep neural networks. Instead of computing gradients of all layers and performing updates of all weight parameters at the same time, the authors propose to perform alternating optimization on the weights of individual layers. The theoretical justification is obtained for single-hidden-layer auto-encoders. Motivated by recent work by Hazan et al 2015, the authors developed the local quasi-convexity of the objective w.r.t. the hidden layer weights for the generalized ReLU activation. As a result, the optimization problem over the single hidden layer can be optimized efficiently using the algorithm of Hazan et al 2015. This itself can be a small, nice contribution. What concerns me is the extension to multiple layers. Some questions are not clear from section 3.4: 1. Do we still have local quasi-convexity for the weights of each layer, when there are multiple nonlinear layers above it? A negative answer to this question will somewhat undermine the significance of the single-hidden-layer result. 2. Practically, even if the authors can perform efficient optimization of weights in individual layers, when there are many layers, the alternating optimization nature of the algorithm can possibly result in overall slower convergence. Also, since the proposed algorithm still uses gradient based optimizers for each layer, computing the gradient w.r.t. lower layers (closer to the inputs) is still done by backprop, which has pretty much the same computational cost as the regular backprop algorithm for updating all layers at the same time. As a result, I am not sure if the proposed algorithm is on par with / faster than the regular SGD algorithm in actual runtime. In the experiments, the authors plotted the training progress w.r.t. the minibatch iterations; I do not know if the minibatch iteration is a proxy for actual runtime (or number of floating point operations). 3. In the experiments, the authors found the network optimized by the proposed algorithm generalizes better than with regular SGD. Is this result consistent (across datasets, random initializations, etc.), and can the authors elaborate on the intuition behind this?
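For readers unfamiliar with the setup, here is a toy NumPy sketch of the alternating scheme being discussed: in each round, one layer of a single-hidden-layer sigmoid autoencoder is frozen while gradient steps are taken on the other. Plain gradient descent stands in for DANTE's stochastic normalized gradient descent, so this only illustrates the alternation, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))
W1 = 0.1 * rng.normal(size=(20, 8))   # encoder weights
W2 = 0.1 * rng.normal(size=(8, 20))   # decoder weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, n = 0.1, len(X)

for rnd in range(100):
    for _ in range(10):               # phase 1: update decoder, encoder fixed
        H = sigmoid(X @ W1)
        err = H @ W2 - X
        W2 -= lr * H.T @ err / n
    for _ in range(10):               # phase 2: update encoder, decoder fixed
        H = sigmoid(X @ W1)
        err = H @ W2 - X
        W1 -= lr * X.T @ ((err @ W2.T) * H * (1 - H)) / n

print("reconstruction MSE:", np.mean((sigmoid(X @ W1) @ W2 - X) ** 2))
```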
iclr_2018_BJQPG5lR-
A widely observed phenomenon in deep learning is the degradation problem: increasing the depth of a network leads to a decrease in performance on both test and training data. Novel architectures such as ResNets and Highway networks have addressed this issue by introducing various flavors of skip-connections or gating mechanisms. However, the degradation problem persists in the context of plain feed-forward networks. In this work we propose a simple method to address this issue. The proposed method poses the learning of weights in deep networks as a constrained optimization problem where the presence of skip-connections is penalized by Lagrange multipliers. This allows for skip-connections to be introduced during the early stages of training and subsequently phased out in a principled manner. We demonstrate the benefits of such an approach with experiments on MNIST, fashion-MNIST, CIFAR-10 and CIFAR-100 where the proposed method is shown to greatly decrease the degradation effect and is often competitive with ResNets.
EDIT: The rating has been changed. See thread below for explanation / further comments. ORIGINAL REVIEW: In this paper, the authors present a new training strategy, VAN, for training very deep feed-forward networks without skip connections (henceforth called VDFFNWSC) by introducing skip connections early in training and then gradually removing them. I think the fact that the authors demonstrate the viability of training VDFFNWSCs that could have, in principle, arbitrary nonlinearities and normalization layers, is somewhat valuable and as such I would generally be inclined towards acceptance, even though the potential impact of this paper is limited because the training strategy proposed is (by deep learning standards) relatively complicated, requires tuning two additional hyperparameters in the initial value of \lambda as well as the step size for updating \lambda, and seems to have no significant advantage over just using skip connections throughout training. So my rating based on the message of the paper would be 6/10. However, there appear to be a range of issues. As long as those issues remain unresolved, my rating stays as it is, but if those issues were resolved it could go up to a 6. +++ Section 3.1 problems +++ - I think the toy example presented in section 3.1 is more confusing than it is helpful because the skip connection you introduce in the toy example is different from the skip connection you introduce in VANs. In the toy example, you add (1 - \alpha)wx whereas in the VANs you add (1 - \alpha)x. Therefore, the type of vanishing gradient that is observed when tanh saturates, which you combat in the toy model, is not actually combated at all in the VAN model. While it is true that skip connections combat vanishing gradients in certain situations, your example does not capture how this is achieved in VANs. - The toy example seems to be an example where Lagrangian relaxation fails, not where it succeeds. Looking at figure 1, it appears that you start out with some alpha < 1 but then immediately alpha converges to 1, i.e. the skip connection is eliminated early in training, because wx is further away from y than tanh(wx). Most of the training takes place without the skip connection. In fact, after 10^4 iterations, training with and without the skip connection seems to achieve the same error. It appears that introducing the skip connection was next to useless and the model failed to recognize the usefulness of the skip connection early in training. - Regarding the optimization algorithm involving \alpha^* at the end of section 3: It looks to me like a hacky, unprincipled method with no guarantees that just happened to work in the particular example you studied. You motivate the choice of \alpha^* by wanting to maximize the reduction in the local linear approximation to \mathcal{C} induced by the update on w. However, this reduction grows to infinity the larger the update is. Does that mean that larger updates are always better? Clearly not. If we wanted to reduce the size of the objective according to the local linear approximation, why wouldn't we choose infinitely large step sizes? Hence, the motivation for the algorithm you present is invalid. Here is an example where this algorithm fails: consider the point (x,y,w,\alpha,\lambda) = (100, \sigma(100), 1.0001, 1, 1). Here, w has almost converged to its optimum w* = 1. Correspondingly, the derivative of C is a small negative value. However, \alpha* is actually 0, and this choice would catapult w far away from w*. 
If I haven't made a mistake in my criticisms above, I strongly suggest removing section 3.1 entirely or replacing it with a completely new example that does not suffer from the above issues. +++ ResNet scaling +++ There is a crucial difference between VANs and ResNets. In the VAN initial state (alpha = 0.5), both the residual path and the skip path are multiplied by 0.5 whereas for ResNet, neither is multiplied by 0.5. Because of this, the experimental results between the two architectures are incomparable. In a question I posed earlier, you claimed that this scaling makes no difference when batch normalization is used. I disagree. Let's look at an example. Consider ResNet first. It can be written as x + r_1 + r_2 + .. + r_B, where r_b is the value computed by residual block b. Now let's assume we insert a scaling constant after each residual block, say c = 0.5. Then the result is c^{B}x + c^{B-1}r_1 + c^{B-2}r_2 + .. + r_B. Therefore, contributions of lower blocks vanish exponentially. This effect is not combated by batch normalization. So the learning dynamics for VAN and ResNet are very different because of this scaling. Therefore, there is an open question: are the differences in results between VAN and ResNet in your experiments caused by the removal of skip connections during training or by this scaling? Without this information, the experiments have limited value. In fact, I suspect that the vanishing of the contribution of lower blocks bears more responsibility for the declining performance of VAN at higher depths than the removal of skip connections. If my assessment of the situation is correct, I would like to ask you to repeat your experiments with the following two settings: - ResNet where after each block you multiply the result of the addition by 0.5, i.e. x_{l+1} = 0.5\mathcal{F}(x_l) + 0.5x_l - VAN with the following altered equation: x_{l+1} = \mathcal{F}(x_l) + (1-\alpha)x_l, i.e. please remove the alpha in front of \mathcal{F}. Also, initialize \alpha to zero. This ensures that VAN starts out as a regular ResNet. +++ writing issues +++ Title: - "VARIABLE ACTIVATION NETWORKS: A SIMPLE METHOD TO TRAIN DEEP FEED-FORWARD NETWORKS WITHOUT SKIP-CONNECTIONS" This title can be read in two different ways. (A) [Train] [deep feed-forward networks] [without skip-connections] and (B) [Train] [deep feed-forward networks without skip connections]. In (A), the `without skip-connections' modifies the `train' and suggests that training took place without skip connections. In (B), the `without skip-connections' modifies `deep feed-forward networks' and suggests that the network trained has no skip connections. You must mean (B), because (A) is false. Since it is not clear from reading the title whether (A) or (B) is true, please reword it. Abstract: - "Part of the success of ResNets has been attributed to improvements in the conditioning of the optimization problem (e.g., avoiding vanishing and shattered gradients). In this work we propose a simple method to extend these benefits to the context of deep networks without skip-connections." Again, this is ambiguous. To me, this sentence implies that you extend the benefit of avoiding vanishing and exploding gradients to fully-connected networks without skip connections. However, nowhere in your paper do you show that trained VANs have less exploding / vanishing gradients than fully-connected networks trained the old-fashioned way. Again, please reword or include evidence. 
- "where the proposed method is shown to outperform many architectures without skip-connections" Again, this sentence makes no sense to me. It seems to imply that VAN has skip connections. But in the abstract you defined VAN as an architecture without skip connections. Please make this more clear. Introduction: - "Indeed, Zagoruyko & Komodakis (2016) demonstrate that it is better to increase the width of ResNets than the depth, suggesting that perhaps only a few layers are learning useful representations." Just because increasing width may be better than increasing depth does not mean that deep layers don't learn useful representations. In fact, the claim that deep layers don't learn useful representations is directly contradicted by the paper. section 3.1: - replace "to to" by "to" in the second line section 4: - "This may be a result of the ensemble nature of ResNets (Veit et al., 2016), which does not play a significant role until the depth of the network increases." The ensemble nature of ResNet is a drawback, not an advantage, because it causes a lack of high-order co-adaptataion of layers. Therefore, it cannot contribute positively to the performance or ResNet. As mentioned in earlier comments, please reword / clarify your use of "activation function". It is generally used a synonym for "nonlinearity", so please use it in this way. Change your claim that VAN is equivalent to PReLU. Please include your description of how your method can be extended to networks which do allow for skip connections. +++ Hyperparameters +++ Since the initial values of \lambda and \eta' are new hyperparameters, include the values you chose for them, explain how you arrived at those values and plot the curve of how \lambda evolves for at least some of the experiments.
iclr_2018_S1LXVnxRb
A bottleneck problem in machine learning-based relationship extraction (RE) algorithms, and particularly for deep learning-based ones, is the availability of training data in the form of annotated corpora. For specific domains, such as biomedicine, the long time and high expertise required for the development of manually annotated corpora explain why most of the existing ones are relatively small (i.e., hundreds of sentences). Besides, larger corpora focusing on general or domain-specific relationships (such as citizenship or drug-drug interactions) have been developed. In this paper, we study how large annotated corpora developed for alternative tasks may improve performance on biomedicine-related tasks, for which few annotated resources are available. We experiment with two deep learning-based models to extract relationships from biomedical texts with high performance. The first one combines locally extracted features using a Convolutional Neural Network (CNN) model, while the second exploits the syntactic structure of sentences using a Recursive Neural Network (RNN) architecture. Our experiments show that, contrary to the former, the latter benefits from a cross-corpus learning strategy to improve the performance of relationship extraction tasks. Indeed, our approach leads to the best published performance for two biomedical RE tasks, and to state-of-the-art results for two other biomedical RE tasks, for which few annotated resources are available (less than 400 manually annotated sentences). This may be particularly impactful in specialized domains in which training resources are scarce, because they would benefit from the training data of other domains for which large annotated corpora do exist.
SUMMARY.
The paper presents a cross-corpus approach for relation extraction from text. The main idea is complementing small training data for relation extraction with training data containing different relation types. The model is also connected with multitask learning approaches where the encoder for the input is the same but the output layer is different for each task. In this work, the output/softmax layer is different for each data type, while the encoder is shared. The authors tried two different sentence encoders (CNN-based and tree-LSTM), and final results are calculated on the low-resource dataset. Experimental results show that the tree-RNN encoder is able to capture valuable information from auxiliary data, while the CNN-based one does not.

----------
OVERALL JUDGMENT
The paper shows an interesting approach to data augmentation with data of a different type for relation extraction. I would have appreciated a section where the authors briefly explain what relation extraction is, perhaps with an example. The paper is overall clear, although the experimental section has to be improved, I believe. From section 5.2 I am not able to understand the experimental setting the authors used; is it 10-fold CV? Did the authors tune the hyperparameters for each fold? Are the results in table 3 obtained with the tree-LSTM? What kind of ensembling did the authors choose for those experiments? The authors overstate that their model outperforms the state-of-the-art models they compare to, but that is not true for the EU-ADR dataset, where in 2 out of 3 relation types the proposed model performs on par with the state-of-the-art model. Finally, the authors used only one auxiliary dataset at a time; it would be interesting to see whether using all the auxiliary datasets together would improve results even more. I would also suggest that the authors check and revise citations (CNNs are not Collobert et al.'s invention, and the same goes for the maximum likelihood objective) and, more generally, improve the references to the relation extraction literature.
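For readers unfamiliar with the multitask reading of this setup, below is a minimal, hypothetical sketch of the shared-encoder / per-corpus-output-layer idea summarized above. The encoder here is a plain bag-of-embeddings stand-in; the paper's actual encoders are a CNN and a tree-structured recursive network, and all names, dimensions and label counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossCorpusRE(nn.Module):
    """Shared sentence encoder with one softmax head per corpus (illustrative)."""
    def __init__(self, vocab_size, emb_dim, hid_dim, labels_per_corpus):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Stand-in encoder; the paper uses a CNN or a recursive (tree) network.
        self.encoder = nn.Sequential(nn.Linear(emb_dim, hid_dim), nn.ReLU())
        # One output layer per corpus, e.g. {"biomed": 3, "auxiliary": 19}.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hid_dim, n) for name, n in labels_per_corpus.items()}
        )

    def forward(self, token_ids, corpus):
        sent = self.emb(token_ids).mean(dim=1)      # (batch, emb_dim)
        h = self.encoder(sent)                      # shared representation
        return self.heads[corpus](h)                # corpus-specific logits

model = CrossCorpusRE(10000, 100, 128, {"biomed": 3, "auxiliary": 19})
logits = model(torch.randint(0, 10000, (4, 20)), corpus="biomed")
print(logits.shape)  # torch.Size([4, 3])
```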
iclr_2018_rk8R_JWRW
Spiking neural networks are being investigated both as biologically plausible models of neural computation and also as a potentially more efficient type of neural network. While convolutional spiking neural networks have been demonstrated to achieve near state-of-the-art performance, only one solution has been proposed to convert gated recurrent neural networks, so far. Recurrent neural networks in the form of networks of gating memory cells have been central in state-of-the-art solutions in problem domains that involve sequence recognition or generation. Here, we design an analog gated LSTM cell where its neurons can be substituted for efficient stochastic spiking neurons. These adaptive spiking neurons implement an adaptive form of sigma-delta coding to convert internally computed analog activation values to spike-trains. For such neurons, we approximate the effective activation function, which resembles a sigmoid. We show how analog neurons with such activation functions can be used to create an analog LSTM cell; networks of these cells can then be trained with standard backpropagation. We train these LSTM networks on a noisy and noiseless version of the original sequence prediction task from Hochreiter & Schmidhuber (1997), and also on a noisy and noiseless version of a classical working memory reinforcement learning task, the T-Maze. Substituting the analog neurons for corresponding adaptive spiking neurons, we then show that almost all resulting spiking neural network equivalents correctly compute the original tasks.
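As a rough illustration of the adaptive sigma-delta coding mentioned in this abstract, here is a generic sketch (my own simplification, with illustrative constants, not the exact ASN equations used in the paper): the coder emits a spike whenever the signal exceeds its spike-based reconstruction by an adaptive threshold, and every spike raises that threshold before it decays back toward its resting value.

```python
import numpy as np

def adaptive_sigma_delta(signal, theta0=0.1, tau_theta=50.0, tau_recon=20.0, m=0.2):
    """Generic adaptive sigma-delta coder (illustrative, not the paper's exact ASN)."""
    theta, recon = theta0, 0.0
    spikes = []
    for t, s in enumerate(signal):
        if s - recon > theta / 2.0:       # signal exceeds reconstruction: emit a spike
            spikes.append(t)
            recon += theta                # each spike adds the current threshold
            theta += m * theta            # threshold adapts upward after a spike
        # both the reconstruction and the threshold decay back between spikes
        recon *= np.exp(-1.0 / tau_recon)
        theta = theta0 + (theta - theta0) * np.exp(-1.0 / tau_theta)
    return spikes

analog = 0.5 * (1 + np.sin(np.linspace(0, 4 * np.pi, 400)))  # toy analog activation
print(len(adaptive_sigma_delta(analog)), "spikes emitted")
```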
First the authors suggest an adaptive analog neuron (AAN) model which can be trained by back-propagation and then mapped to an Adaptive Spiking Neuron (ASN). Second, the authors suggest a network module called Adaptive Analog LSTM Cell (AA-LSTM) which contains input cells, input gates, constant error carousels (CEC) and output cells. Jointly with the AA-LSTM, the authors describe a spiking model (AS-LSTM) that is meant to reproduce its transfer function. It is shown quantitatively that the transfer functions of isolated AAN and AA-LSTM units are well approximated by their spiking counterparts. Two sets of experiments are reported, a sequence prediction task taken from the original LSTM paper and a T-maze task solved with reward based learning. In general, the paper presents an interesting idea. However, it seems that the main claims of the introduction are not sufficiently well proven later. Also, I believe that the tasks are rather simple and therefore it is not demonstrated that the approach performs well on practically relevant tasks. On general level, it should be clarified whether the model is meant to reproduce features of biology or whether the model is meant to be efficient. If the model is meant to reproduce biology, some features of the model are problematic. In particular, that the CEC is modeled with an infinitely long integration time constant of the input current. This would produce infinitely long EPSPs. However, I think there is a chance that minor changes of the model could still work while being more realistic. For example, I would find it more convincing to put the CEC into the adaptation time constants by using a large tau_gamma or tau_eta. If the model is meant to provide efficient spiking neural networks, I find the tasks too simple and too artificial. This is particularly true in comparison to the speech recognition tasks VAD and TIMIT which were already solved in Esser et al. with spiking and efficient feedforward networks. The authors say in the introduction that they target to model recurrent neural networks. This is an important open question. The usage of the CEC is an interesting idea toward this goal. However, beside the presence of CEC I do not see any recurrence in the used networks. This seems in contradiction with what is implicitly claimed in the introduction, title and abstract. There are only input-output neuron connections in the sequence prediction task, and a single hidden layer for the T-maze (which does not seem to be recurrently connected). This is problematic as the authors mention that their goal is to reproduce the functionality of LSTMs with spiking neurons for which the network recurrence is an important feature. Regarding more low-level comments: - The authors used a truncated version of RTRL to train LSTMs and standard back-propagation for single neurons. I wonder why two different algorithms were used, as, in principle, they compute the same gradient either forward or backward. Is there a reason for this? Did the truncated RTRL bring any additional benefit compared to the exact backpropagation already implemented in automatic differentiation software? - The sigma-delta neuron model seems quite ad-hoc and incompatible with most simulators and dedicated hardware. I wonder whether the AS-LSTM model would still be valid if the ASN model is replaced with a standard SRM model for instance. - The authors claim in the introduction that they made an analytical conversion from discrete to continuous time. I did not find this in the main text. 
- The axes in Figure 1 are not defined (what is Delta S?) and the caption does not match. "Average output signal [...] as a function of its incoming PSC I" output signal is not defined, and S is presented in the graph, but not I.
iclr_2018_HkpYwMZRb
Workshop track - ICLR 2018 GRADIENTS EXPLODE - DEEP NETWORKS ARE SHALLOW - RESNET EXPLAINED Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SeLU nonlinearities "solve" the exploding gradient problem, we show that this is not the case in general and that in a range of popular MLP architectures, exploding gradients exist and that they limit the depth to which networks can be effectively trained, both in theory and in practice. We explain why exploding gradients occur and highlight the collapsing domain problem, which can arise in architectures that avoid exploding gradients. ResNets have significantly lower gradients and thus can circumvent the exploding gradient problem, enabling the effective training of much deeper networks, which we show is a consequence of a surprising mathematical property. By noticing that any neural network is a residual network, we devise the residual trick, which reveals that introducing skip connections simplifies the network mathematically, and that this simplicity may be the major cause for their success.
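The "residual trick" referred to in this abstract can be stated compactly; the following is my paraphrase under the assumption that the trick is the identity-plus-residual rewriting, not the paper's exact notation.

```latex
% Any layer map f_l can be decomposed into identity plus residual:
\begin{align}
x_{l+1} = f_l(x_l) = x_l + r_l(x_l), \qquad r_l(x) := f_l(x) - x.
\end{align}
% A ResNet block hard-wires the identity term via the skip connection and only
% learns the residual r_l; a plain feed-forward layer must represent the full
% map x_l + r_l(x_l), which the abstract argues is the harder object to learn.
```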
Summary of paper - The paper introduces the Gradient Scale Coefficient and uses it to demonstrate issues with the current understanding of where and why exploding gradients occur. Review - The paper attempts to contribute to the discussion about the exploding gradient problem by both introducing a metric for discussing this issue and by showing that current understanding of the exploding gradient problem may be incorrect. It is admirable that the authors are seeking to add to the understanding about theory of neural nets instead of contributing a new architecture with better error rates but without understanding why said error rates are lower. While the authors list 7 contributions, the current version of the text is a challenge to read and makes it challenging to distill an overarching theme or narrative to these contributions. The authors do mention experiments on page 8, but confess that some of the results are somewhat underwhelming. Unfortunately, all tables with the experimental results are left to the appendix. As this is a mostly theoretical paper, pushing experimental results to the appendix does make sense, but the repeated references to these tables suggest that these experimental results are crucial for the authors’ overall points. While the authors do attempt to accomplish a lot in these nearly 16 pages of text, the authors' main points and overall narrative gets lost due to the writing that is a bit jumbled at times and that relies heavily on the supplement. There are several places where it is not immediately clear why a certain block of text is included (i.e. the proof outlines on pages 8 and 10). At other points the authors default to an chronological narrative that can be useful at times (i.e. page 9), but here seems to distract from their overall narrative. This paper has a lot of content, but not all of it appears to be relevant to the authors’ central points. Furthermore, the paper is nearly double the recommended page length and has a nearly 30 page supplement. My biggest recommendations for this paper are for the authors to 1) articulate one theme and then 2) look at each part (whether that be section, paragraph, or sentence) and ask what does that part contribute to that theme. Pros - * This paper attempts to add the understanding of neural nets instead of only contributing better error rates on benchmark datasets. * At several points, the authors seek to make the work accessible by offering lay explanations for their more technical points. * The practical suggestions on page 16 are a true highlight and could provide an outline for possible revisions. Cons - * The main narrative is lost in the text, leaving a reader unsure of the authors main points and contributions as they read. For example, the authors’ first contribution is hidden among the text presentation of section 2. * The paper relies heavily on the supplement to make their central points. * It is nearly double the recommended page length with a nearly 30 page supplement Minor issues - * Use one style for introducing and defining terms either use italics or single quotes. The latter is not recommended since the authors use double quotes in the abstract to express that the exploding gradient problem is not solved. * The citation style of Authors (YEAR) at times leads to awkward sentence parsing. * Given that many figures have several subfigures, the authors should consider using a package that will denote subfigures with letters. 
* The block quotes in the introduction may be quite important for points later in the paper, but summarizing the points of these quotes may be a better use of space. The authors more successfully did this in paragraph 2 of the introduction. * All long descriptions of the appendix should be carefully revisited and possibly removed due to page length considerations. * In the text, figure 4 (which is in the supplement) is referenced before figure 3 (which is in the text). =-=-=-= Response to the authors During the initial reviewing period, I was unable to distill the significance of the authors’ contributions from the current literature in large part due to the nature of the writing style. After reading the authors responses and consulting the differences between the versions of the paper, my review remains the same. It should be noted that all three reviewers pointed out the length of the paper as a weakness of the paper, and that in the most recent draft, the authors made the main text of the paper longer. Consulting the differences between the paper revisions, I was initially intrigued with the volume of differences that shown in the summary bar. Upon closer inspection, I read a much stronger introduction and appreciated the summaries at the ends of sections 4.4 and 6. However, I did notice that the majority of these changes were superficial re-orderings of the original text. Given the limited substantive changes to the main text, I did not deeply re-read the text of the paper beyond the introduction.
iclr_2018_H1xJjlbAZ
In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions. For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification. How to interpret black-box predictors is thus an important and active area of research. A fundamental question is: how much can we trust the interpretation itself? In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations. We systematically characterize the fragility of the interpretations generated by several widely-used feature-importance interpretation methods (saliency maps, integrated gradient, and DeepLIFT) on ImageNet and CIFAR-10. Our experiments show that even small random perturbation can change the feature importance and new systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly fragile. Our analysis of the geometry of the Hessian matrix gives insight on why fragility could be a fundamental challenge to the current interpretation approaches.
The paper shows that interpretations for DNN decisions, e.g. computed by methods such as sensitivity analysis or DeepLift, are fragile: visually (to a human) imperceptibly different images cause greatly different explanations (and also, to an extent, different classifier outputs). The authors perturb input images and create explanations using different methods. Even though the image is imperceptibly different to a human observer, the authors observe large changes in the heatmaps visualizing the explanation maps. This is true even for random perturbations. The images have been modified with some noise, such that they deviate from the natural statistics for images of that kind. Since the explanation algorithms investigated in this paper merely react to the interaction of the model with the input and thus are unsupervised processes in nature, the explanation methods merely show the model's reaction to the change. For one, the model itself reacts to the perturbation, which can be measured by the (considerably) increased class probability. Since the prediction score is given in probability values, the reviewer assumes the final layer of the model is a SoftMax activation. In order to see a change in the softmax output score, especially if the already dominant prediction score is further increased, a lot of change has to happen to the outputs of the layer serving as input to the SoftMax layer. It can thus be expected that the input- and class-specific explanations change as well, and not by a small amount. For the considered methods, the explanation maps mirror the model's reaction to the input. They are thus not meaningless, but are a measure of the model's reaction rather than an independent process. The excellent Figure 2 supports this point. It is not the interpretation itself that is fragile, but the model. Adding a small delta to the sample x shifts its position in data space, completely altering the prediction rule applied by the model due to the change in proximity to another section of the decision hyperplane. The fragility of DNN models to marginally perturbed inputs is itself well known. This is especially true for adversarial perturbations, which have been used as test cases in this work. The explanation methods are expected to highlight highly important areas in an image, which are exactly the areas targeted by these perturbation approaches. The authors give an example of an adversary manipulating the input in order to draw the activation to specific features and thereby produce confusing/malignant explanation maps. In a setting of model verification, the explanation via heatmaps is exactly what one wants to have: if a tiny change to the image causes a large change to the prediction (and explanation), we can visualize the instability of the model, not of the explanation method. Furthermore, targeted perturbations do not show the fragility of explanation methods, but rather that the models actually find what is important to the model. It can be expected that after a change to these parts of the input, the model will decide differently, albeit coming to the same conclusion (in terms of predicted class membership), which is reflected in the explanation map computed for the perturbed input.
The reviewer is therefore surprised by the poor quality and lack of structure in the maps obtained from the DeepLift method. Can bugs and suboptimal configurations be ruled out during the experiments? The DeepLift explanations are almost as noisy as the ones obtained for Sensitivity Analysis (i.e. the gradient at the input point). However, recent work (e.g. Samek et al., IEEE TNNLS, 2017 or Montavon et al., Digital Signal Processing, 2017) showed that decomposition-based methods (such as DeepLift) provide less noisy explanations than Sensitivity Analysis. Have the authors considered training the net with small random perturbations added to the samples, to compare the "vanilla" model to the more robust one, which has seen noisy samples, and comparing explanations? Why not train (finetune) the considered models using softplus activations instead of exchanging activation nodes?

Appendix B: Heatmaps through the different stages of perturbation should be normalized using a common factor, not individually, in order to better reflect the change in the explanation.

Conclusion: The paper follows an interesting approach, but ultimately takes the wrong viewpoint: the authors try to attribute fragility to explanation methods, which merely visualize/measure the reaction of the model to the perturbed inputs. A major rework should be considered.
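The fragility claims debated in this review are easy to probe empirically. Below is a minimal, hypothetical sketch (not the authors' code) that computes a plain gradient saliency map for an input and for a slightly perturbed copy, then measures how much the two maps agree; the model, input and perturbation scale are all placeholders.

```python
import torch
import torch.nn as nn

def saliency(model, x, target):
    """Gradient of the target logit w.r.t. the input (plain sensitivity map)."""
    x = x.clone().requires_grad_(True)
    model(x.unsqueeze(0))[0, target].backward()
    return x.grad.abs().sum(dim=0)            # aggregate over channels -> (H, W)

def topk_overlap(map_a, map_b, k=100):
    """Fraction of shared pixels among the k most salient ones in each map."""
    a = set(map_a.flatten().topk(k).indices.tolist())
    b = set(map_b.flatten().topk(k).indices.tolist())
    return len(a & b) / k

# Placeholder classifier and input; any image model of matching shape would do.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)).eval()
x = torch.rand(3, 32, 32)
target = model(x.unsqueeze(0)).argmax().item()

s_clean = saliency(model, x, target)
s_noisy = saliency(model, x + 0.01 * torch.randn_like(x), target)
print("top-100 overlap between the two saliency maps:", topk_overlap(s_clean, s_noisy))
```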
iclr_2018_BySRH6CpW
Published as a conference paper at ICLR 2018 LEARNING DISCRETE WEIGHTS USING THE LOCAL REPARAMETERIZATION TRICK Recent breakthroughs in computer vision make use of large deep neural networks, utilizing the substantial speedup offered by GPUs. For applications running on limited hardware, however, high precision real-time processing can still be a challenge. One approach to solving this problem is training networks with binary or ternary weights, thus removing the need to calculate multiplications and significantly reducing memory size. In this work, we introduce LR-nets (Local reparameterization networks), a new method for training neural networks with discrete weights using stochastic parameters. We show how a simple modification to the local reparameterization trick, previously used to train Gaussian distributed weights, enables the training of discrete weights. Using the proposed training we test both binary and ternary models on MNIST, CIFAR-10 and ImageNet benchmarks and reach state-of-the-art results on most experiments.
This paper proposes training binary and ternary weight distribution networks through the local reparametrization trick and continuous optimization. The argument is that due to the central limit theorem (CLT) the distribution on the neuron pre-activations is approximately Gaussian, with a mean given by the inner product between the input and the mean of the weight distribution and a variance given by the inner product between the squared input and the variance of the weight distribution. As a result, the parameters of the underlying discrete distribution can be optimized via backpropagation by sampling the neuron pre-activations with the reparametrization trick. The authors further propose appropriate initialisation schemes and regularization techniques to either prevent the violation of the CLT or to prevent underfitting. The method is evaluated on multiple experiments. This paper proposed a relatively simple idea for training networks with discrete weights that seems to work in practice. My main issue is that while the authors argue about novelty, the first application of CLT for sampling neuron pre-activations at neural networks with discrete r.v.s is performed at [1]. While [1] was only interested in faster convergence and not on optimization of the parameters of the underlying distribution, the extension was very straightforward. I would thus suggest that the authors update the paper accordingly. Other than that, I have some other comments: - The L2 regularization on the distribution parameters for the ternary weights is a bit ad-hoc; why not penalise according to the entropy of the distribution which is exactly what you are trying to achieve? - For the binary setting you mentioned that you had to reduce the entropy thus added a “beta density regulariser”. Did you add R(p) or log R(p) to the objective function? Also, with alpha, beta = 2 the beta density is unimodal with a peak at p=0.5; essentially this will force the probabilities to be close to 0.5, i.e. exactly what you are trying to avoid. To force the probability near the endpoints you have to use alpha, beta < 1 which results into a “bowl” shaped Beta distribution. I thus wonder whether any gains you observed from this regulariser are just an artifact of optimization. - I think that a baseline (at least for the binary case) where you learn the weights with a continuous relaxation, such as the concrete distribution, and not via CLT would be helpful. Maybe for the network to properly converge the entropy for some of the weights needs to become small (hence break the CLT). [1] Wang & Manning, Fast Dropout Training. Edit: After the authors rebuttal I have increased the rating of the paper: - I still believe that the connection to [1] is stronger than what the authors allude to; eg. the first two paragraphs of sec. 3.2 could easily be attributed to [1]. - The argument for the entropy was to include a term (- lambda * H(p)) in the objective function with H(p) being the entropy of the distribution p. The lambda term would then serve as an indicator to how much entropy is necessary. - There indeed was a misunderstanding with the usage of the R(p) regularizer at the objective function (which is now resolved). - The authors showed benefits compared to a continuous relaxation baseline.
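A minimal sketch of the CLT-based forward pass discussed in this review (my own illustration; the ternary parameterization, shapes and constants are assumptions): pre-activations are sampled as Gaussians whose mean and variance are computed from the mean and variance of the discrete weight distribution, so the distribution parameters remain differentiable.

```python
import torch

def ternary_clt_layer(x, logits):
    """x: (batch, d_in); logits: (d_in, d_out, 3) over weight values {-1, 0, +1}."""
    probs = torch.softmax(logits, dim=-1)
    values = torch.tensor([-1.0, 0.0, 1.0])
    w_mean = (probs * values).sum(-1)                     # E[w],   (d_in, d_out)
    w_var = (probs * values**2).sum(-1) - w_mean**2       # Var[w], (d_in, d_out)
    act_mean = x @ w_mean                                 # CLT mean of pre-activation
    act_var = (x**2) @ w_var                              # CLT variance
    eps = torch.randn_like(act_mean)                      # local reparameterization
    return act_mean + torch.sqrt(act_var + 1e-8) * eps    # differentiable w.r.t. logits

logits = torch.zeros(784, 256, 3, requires_grad=True)     # uniform over {-1, 0, +1}
out = ternary_clt_layer(torch.randn(32, 784), logits)
out.sum().backward()
print(out.shape, logits.grad is not None)                 # torch.Size([32, 256]) True
```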
iclr_2018_ryH_bShhW
Any autoencoder network can be turned into a generative model by imposing an arbitrary prior distribution on its hidden code vector. The Variational Autoencoder uses a KL divergence penalty to impose the prior, whereas the Adversarial Autoencoder uses generative adversarial networks. A straightforward modification of the Adversarial Autoencoder is to replace the adversary by a maximum mean discrepancy (MMD) test. This replacement leads to a new type of probabilistic autoencoder, which is also discussed in our paper. However, an essential challenge remains in both of these probabilistic autoencoders, namely that the only source of randomness at the output of the encoder is the training data itself. Lack of enough stochasticity can make the optimization problem nontrivial. As a result, they can lead to degenerate solutions where the generator collapses into sampling only a few modes. Our proposal is to replace the adversary of the adversarial autoencoder by a space of stochastic functions. This replacement introduces a new source of randomness, which can be considered as a continuous control for encouraging exploration. This prevents the adversary from fitting too closely to the generator and therefore leads to a more diverse set of generated samples. Consequently, the decoder serves as a better generative network, which, unlike MMD nets, scales linearly with the amount of data. We provide mathematical and empirical evidence on how this replacement outperforms the pre-existing architectures.
Thank you for the feedback; I have read it. The authors claimed that they used techniques from [6], in which I am not an expert. However, I cannot find the comparison that the authors mentioned in the feedback, so I am not sure if the claim is true. I still recommend rejection for the paper, and as I said in the first review, the paper is not mature enough.

==== original review ===

The paper describes a generative model that replaces the GAN loss in the adversarial auto-encoder with an MMD loss. Although the authors claim the novelty lies in adding noise to the discriminator, it seems to me that at least for the RBF case it just does the following:
1. write down MMD as an integral probability metric (IPM);
2. say the test function, which originally should be in an RKHS, will be approximated using random feature approximations.
Although the authors explained the intuition a bit and showed some empirical results, I still don't see why this method should work better than directly minimising MMD. Also, it is not preferable to look at the generated images and claim diversity; instead it is better to have some kind of quantitative metric such as the inception score. Finally, given the fact that we have too many GAN-related papers now, I don't think the innovation contained in the paper (which is using random features) is good enough to be published at ICLR. Also, the paper is not clearly written, and I would suggest not copy-pasting paragraphs in the abstract and intro. That said, I would welcome the authors' feedback and see if I have misunderstood something.
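For concreteness, the "MMD with random feature approximations" reading described in this review can be sketched as follows. This is a generic random-Fourier-feature estimator of the RBF-kernel MMD, my own illustration rather than the paper's exact construction; the bandwidth, feature count and data are placeholders.

```python
import numpy as np

def rff_mmd(X, Y, sigma=1.0, n_features=500, rng=np.random.default_rng(0)):
    """Approximate the RBF-kernel MMD^2 via random Fourier features phi(x)."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))   # spectral samples
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    phi = lambda Z: np.sqrt(2.0 / n_features) * np.cos(Z @ W + b)
    # MMD^2 is the squared distance between the mean feature embeddings.
    diff = phi(X).mean(axis=0) - phi(Y).mean(axis=0)
    return float(diff @ diff)

X = np.random.default_rng(1).normal(0.0, 1.0, (1000, 2))     # "prior" samples
Y = np.random.default_rng(2).normal(0.5, 1.0, (1000, 2))     # "encoder" samples
print(rff_mmd(X, X[::-1].copy()), rff_mmd(X, Y))             # near-zero vs. larger value
```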
iclr_2018_HJZiRkZC-
This article proposes to auto-encode text at byte-level using convolutional networks with a recursive architecture. The motivation is to explore whether it is possible to have scalable and homogeneous text generation at byte-level in a non-sequential fashion through the simple task of auto-encoding. We show that non-sequential text generation from a fixed-length representation is not only possible, but also achieves much better auto-encoding results than recurrent networks. The proposed model is a multi-stage deep convolutional encoder-decoder framework using residual connections (He et al., 2016), containing up to 160 parameterized layers. Each encoder or decoder contains a shared group of modules that consists of either pooling or upsampling layers, making the network recursive in terms of abstraction levels in representation. Results for 6 large-scale paragraph datasets are reported, in 3 languages including Arabic, Chinese and English. Analyses are conducted to study several properties of the proposed model.

Recently, generating text using convolutional networks (ConvNets) has started to become an alternative to recurrent networks for sequence-to-sequence learning (Gehring et al., 2017). The dominant assumption for both these approaches is that texts are generated one word at a time. Such a sequential generation process bears the risk of vanishing or exploding outputs or gradients (Bengio et al., 1994), which limits the length of its generated results. Such a limitation in scalability prompts us to explore whether non-sequential text generation is possible. Meanwhile, text processing from levels lower than words, such as characters (Zhang et al., 2015) (Kim et al., 2016) and bytes (Gillick et al., 2016) (Zhang & LeCun, 2017), is also being explored due to its promise in handling distinct languages in the same fashion. In particular, the work by Zhang & LeCun (2017) shows that simple one-hot encoding on bytes could give the best results for text classification in a variety of languages. The reason is that it achieved the best balance between computational performance and classification accuracy. Inspired by these results, this article explores auto-encoding for text using byte-level convolutional networks that have a recursive structure, as a first step towards low-level and non-sequential text generation. For the task of text auto-encoding, we should avoid the use of common attention mechanisms like those used in machine translation (Bahdanau et al., 2015), because they always provide a direct information path that enables the auto-encoder to directly copy from the input. This diminishes the purpose of studying the representational ability of different models. Therefore, all models considered in this article encode to and decode from a fixed-length vector representation.

Figure 2: Sequential and non-sequential decoders illustrated in graphical models. u is a vector containing the encoded representation. y_i's are output entities. h_i's are hidden representations. Note that both imply conditional independence between outputs conditioned on the representation u.

The paper by was an anterior result on using word-level convolutional networks for text auto-encoding. This article differs from it in several key ways of using convolutional networks. First of all, our models work from the level of bytes instead of words, which arguably makes the problem more challenging. Secondly, our network is dynamic, with a recursive structure that scales with the length of the input text, which by design could avoid trivial solutions for auto-encoding such as the identity function. Thirdly, by using the latest design heuristics such as residual connections (He et al., 2016), our network can scale up to several hundred layers deep, compared to a static network that contains a few layers.

In this article, several properties of the auto-encoding model are studied. The following is a list.
1. Applying the model to 3 languages (Arabic, Chinese and English) shows that the model can handle all different languages in the same fashion with equally good accuracy.
2. Comparisons with long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) show a significant advantage of using convolutional networks for text auto-encoding.
3. We determined that a recursive convolutional decoder like ours can accurately produce the end-of-string byte, even though the decoding process is non-sequential.
4. By studying the auto-encoding error when the samples contain randomized noisy bytes, we show that the model does not degenerate to the identity function. However, it cannot denoise the input very well either.
5. The recursive structure requires a pooling layer. We compared average pooling, L2 pooling and max-pooling, and determined that max-pooling is the best choice.
6. The advantage of recursion is established by comparison against a static model that does not have shared module groups. This shows that linguistic heuristics such as recursion are useful for designing models for language processing.
7. We also explored models of different sizes by varying the maximum network depth from 40 to 320. The results show that deeper models give better results.
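To illustrate the recursive encoder described above, here is a minimal, hypothetical sketch (my own simplification, not the paper's 160-layer configuration): a single shared residual convolution group is applied repeatedly, with max-pooling between applications, until the byte sequence is reduced to a fixed-length vector. Channel counts and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class RecursiveByteEncoder(nn.Module):
    """Shared residual conv group applied recursively until length == 1 (illustrative)."""
    def __init__(self, channels=128, n_bytes=256):
        super().__init__()
        self.embed = nn.Embedding(n_bytes, channels)
        # One parameter group, reused at every abstraction level (the "recursion").
        self.block = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )
        self.pool = nn.MaxPool1d(kernel_size=2)

    def forward(self, byte_ids):                     # (batch, length), length = 2^k
        h = self.embed(byte_ids).transpose(1, 2)     # (batch, channels, length)
        while h.size(-1) > 1:
            h = torch.relu(h + self.block(h))        # shared residual conv group
            h = self.pool(h)                         # halve the length
        return h.squeeze(-1)                         # fixed-length code (batch, channels)

enc = RecursiveByteEncoder()
code = enc(torch.randint(0, 256, (4, 256)))
print(code.shape)  # torch.Size([4, 128])
```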
The paper aims to illustrate the representation learning ability of the convolutional autoencoder with residual connections proposed here to encode text at the byte level. The authors apply the proposed architecture to 3 languages and run comparisons with an LSTM. Experimental results with different perturbations of samples, pooling layers, and sample lengths are presented. The writing is fairly clear; however, the presentation of tables and figures could be done better. For example, Fig. 2 is referred to on page 3, Table 2, which contains results, is referred to on page 5, Fig. 4 is referred to on page 6 and appears on page 5, etc. What kind of minimal preprocessing is done on the text? Is punctuation removed? Is casing retained? How is the space character encoded? Why was the encoded dimension always fixed at 1024? What is the definition of a sample here? The description of the various data sets could be moved to a table/Appendix, particularly since most of the results are presented on the enwiki dataset, which would lead to better readability of the paper. Also, results are presented only on a random 1M sample selected from these data sets, so the need for this whole page goes away. Comparing Table 2 and Table 3, the LSTM is at 67% error on the test set while the proposed convolutional autoencoder is at 3.34%. Are these numbers on the same test set? While the argument that the LSTM does not generalize well due to the inherent memory learnt is reasonable, the differences in performance cannot be explained away with this. Can you please clarify this further? It appears that the byte error shoots up for sequences of length 512+ (fig. 6 and fig. 7) and seems entirely correlated with the amount of data rather than with recursion levels. How do you expect these results to change for a different subset selection of training and test samples? Will Fig. 7 and Fig. 6 still hold? In Fig. 8, unless the static train and test errors are exactly on top of the recursive errors, they are not visible. What is the x-axis in Fig. 8? Please also label the axes on all figures. While the datasets are large and would take a lot of time to process for each case study, a final result on the complete data set, to illustrate whether the model does learn well with lots of data, would have been useful. A table showing generated sample text would also clarify the power of the model. With the results presented, with a single parameter setting, it's hard to determine what exactly the model learns and why.
iclr_2018_rJ7yZ2P6-
The Ubuntu dialogue corpus is the largest publicly available dialogue corpus, making it feasible to build end-to-end deep neural network models directly from the conversation data. One challenge of the Ubuntu dialogue corpus is the large number of out-of-vocabulary words. In this paper we propose a method which combines the general pre-trained word embedding vectors with those generated on the task-specific training set to address this issue. We integrated character embedding into Chen et al.'s Enhanced LSTM method (ESIM) and used it to evaluate the effectiveness of our proposed method. For the task of next utterance selection, the proposed method has demonstrated a significant performance improvement over the original ESIM, and the new model has achieved state-of-the-art results on both the Ubuntu dialogue corpus and the Douban conversation corpus. In addition, we investigated the performance impact of end-of-utterance and end-of-turn token tags.
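A minimal sketch of the embedding-combination idea described in this abstract (my reading: concatenate the general pre-trained vector with the task-trained vector, zero-filling whichever half is missing for a given word; dimensions, names and toy data are illustrative):

```python
import numpy as np

def combine_embeddings(vocab, general, task, d1=300, d2=100):
    """Build (d1 + d2)-dim vectors; zero-fill the half a word is missing from."""
    combined = {}
    for w in vocab:
        g = general.get(w, np.zeros(d1))   # pre-trained on a large general corpus
        t = task.get(w, np.zeros(d2))      # trained on the task-specific data
        combined[w] = np.concatenate([g, t])
    return combined

general = {"ubuntu": np.random.randn(300)}          # known to the general embedding
task = {"ubuntu": np.random.randn(100),
        "initramfs": np.random.randn(100)}          # OOV for the general embedding
vecs = combine_embeddings(["ubuntu", "initramfs"], general, task)
print(vecs["initramfs"][:300].any(), vecs["initramfs"].shape)  # False (400,)
```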
The main contributions in this paper are: 1) New variants of a recent LSTM-based model ("ESIM") are applied to the task of response-selection in dialogue modeling -- ESIM was originally introduced and evaluated for natural language inference. In this new setting, the ESIM model (vanilla and extended) outperform previous models when trained and evaluated on two distinct conversational datasets. 2) A fairly trivial method is proposed to extend the coverage of pre-trained word embeddings to deal with the OOV problem that arises when applying them to these conversational datasets. The method itself is to combine d1-dimensional word embeddings that were pretrained on a large unannotated corpus (vocabulary S) with distinct d2-dimensional word embeddings that are trained on the task-specific training data (vocabulary T). The enhanced (d1+d2)-dimensional representation for a word is constructed by concatenating its vectors from the two embeddings, setting either the d1- or d2-dimensional subvector to zeros when the word is absent from either S or T, respectively. This method is incorporated as an extension into ESIM and evaluated on the two conversation datasets. The main results can be characterized as showing that this vocabulary extension method leads to performance gains on two datasets, on top of an ESIM-model extended with character-based word embeddings, which itself outperforms the vanilla ESIM model. These empirical results are potentially meaningful and could justify reporting, but the paper's organization is very confusing, and too many details are too unclear, leading to low confidence in reproducibility. There is basic novelty in applying the base model to a new task, and the analysis of the role of the special conversational boundary tokens is interesting and can help to inform future modeling choices. The embedding-enhancing method has low originality but is effective on this particular combination of model architecture, task and datasets. I am left wondering how well it might generalize to other models or tasks, since the problem it addresses shows up in many other places too... Overall, the presentation switches back and forth between the Douban corpus and the Ubuntu corpus, and between word2vec and Glove embeddings, and this makes it very challenging to understand the details fully. S3.1 - Word representation layer: This paragraph should probably mention that the character-composed embeddings are newly introduced here, and were not part of the original formulation of ESIM. That statement is currently hidden in the figure caption. Algorithm 1: - What set does P denote, and what is the set-theoretic relation between P and T? - Under one possible interpretation, there may be items in P that are in neither T nor S, yet the algorithm does not define embeddings for those items even though its output is described as "a dictionary with word embeddings ... for P". This does not seem consistent? I think the sentence in S4.2 about initializing remaining OOV words as zeros is relevant and wonder if it should form part of the algorithm description? S4.1 - What do the authors mean by the statement that response candidates for the Douban corpus were "collected by Lucene retrieval model"? S4.2 - Paragraph two is very unclear. In particular, I don't understand the role of the Glove vectors here when Algorithm 1 is used, since the authors refer to word2vec vectors later in this paragraph and also in the Algorithm description. 
S4.3 - It's insufficiently clear what the model definitions are for the Douban corpus. Is there still a character-based LSTM involved, or does FastText make it unnecessary? S4.3 - "It can be seen from table 3 that the original ESIM did not perform well without character embedding." This is a curious way to describe the result, when, in fact, the ESIM model in table 3 already outperforms all the previous models listed. S4.4 - gensim package -- for the benefit of readers unfamiliar with gensim, the text should ideally state explicitly that it is used to create the *word2vec* embeddings, instead of the ambiguous "word embeddings".
iclr_2018_Hkfmn5n6W
Background: Statistical mechanics results (Dauphin et al. (2014); Choromanska et al. (2015)) suggest that local minima with high error are exponentially rare in high dimensions. However, to prove low error guarantees for Multilayer Neural Networks (MNNs), previous works so far required either a heavily modified MNN model or training method, strong assumptions on the labels (e.g., "near" linear separability), or an unrealistically wide hidden layer with Ω(N) units. Results: We examine an MNN with one hidden layer of piecewise linear units, a single output, and a quadratic loss. We prove that, with high probability in the limit of N → ∞ datapoints, the volume of differentiable regions of the empiric loss containing sub-optimal differentiable local minima is exponentially vanishing in comparison with the same volume of global minima, given standard normal input of dimension d_0 = Ω(√N), and a more realistic number of hidden units. We demonstrate our results numerically: for example, 0% binary classification training error on CIFAR with only N/d_0 ≈ 16 hidden neurons.
## Summary

This paper aims to tackle the question: "why do standard SGD-based algorithms on neural networks converge to 'good' solutions?"

Pros:
The authors ask the question of convergence of optimization (ignoring generalization error): how "likely" is it that an over-parameterized (d1*d0 > N) single-hidden-layer binary classifier "finds" a good (possibly over-fitted) local minimum. They make a set of assumptions (A1-A3) which are weaker (d1 > N^{1/2}) than the ones used in earlier works. Previous works needed a wide hidden layer (d1 > N).
Assumptions (d0 = input dim, d1 = hidden dim, N = number of datapoints, X = datapoints matrix):
A1. Datapoints X come from a Gaussian distribution.
A2. N^{1/2} < d0 <= N.
A3. N polylog(N) < d0*d1 (approximately the number of parameters) and d1 <= N.
This paper proves that the total "angular volume" of "regions" (defined with respect to the piecewise linear regions of neuron activations) with differentiable bad local minima is exponentially small when compared to the total "angular volume" of "regions" containing only differentiable global minima. The proof boils down to counting arguments and a concentration inequality.

Cons:
Non-differentiable stationary points are left as challenging future work in this paper. Non-differentiability aside, the authors show a possible way by which shallow neural networks might be over-fitting the data. But this is only half the story and does not completely answer the question. First, an exponentially vanishing (in N) volume of the "regions" containing bad local minima doesn't mean that the number of bad local minima is exponentially small when compared to the number of global minima. Secondly, as the authors aptly pointed out in the discussion section, this result doesn't mean neural networks will converge to good local minima, because these bad local minima can have large basins of attraction. Lastly, appropriate comparisons with the existing literature are lacking. It is hinted that this paper is more general as the assumptions are more realistic. However, it comes at the cost of losing sharpness in the theoretical results. It is not well motivated why one should study the angular volume of the global and local minima.

## Questions and comments

1. How critical is the Gaussian-datapoints assumption (A1)? Which part of the proof fails to generalize?
2. Can the proof be extended to scalar regression? It seems hard to generalize to vector-output neural networks. What about deep neural networks?
3. Can you relate the results to other more recent works like: https://arxiv.org/pdf/1707.04926.pdf.
4. Piecewise linear and positively homogeneous (https://arxiv.org/pdf/1506.07540.pdf) activations seem to be an important assumption of the paper. It should probably be mentioned explicitly.
5. In the experiments section, it is mentioned that "...inputs to the hidden neurons converge to a distinctly non-zero value. This indicates we converged to DLMs." How can you guarantee that it is a local minimum and not a saddle point?
iclr_2018_BygpQlbA-
We study the control of symmetric linear dynamical systems with unknown dynamics and a hidden state. Using a recent spectral filtering technique for concisely representing such systems in a linear basis, we formulate optimal control in this setting as a convex program. This approach eliminates the need to solve the nonconvex problem of explicit identification of the system and its latent state, and allows for provable optimality guarantees for the control signal. We give the first efficient algorithm for finding the optimal control signal with an arbitrary time horizon T, with sample complexity (number of training rollouts) polynomial only in log T and other relevant parameters.
This paper proposes a new algorithm to generate the optimal control inputs for unknown linear dynamical systems (LDS) with known system dimensions. The idea is to excite the LDS with wave-filter inputs, record the output, and directly estimate the operator that maps the input to the output, instead of estimating the hidden states. After obtaining this operator, the paper substitutes it into the optimal control problem, solves that problem to estimate the optimal control input, and shows that the gap between the true optimal cost and the cost from applying the estimated optimal control input is small with high probability. (A generic sketch of this operator-estimation step is appended after this review.)

I think estimating the operator from the input to the output is interesting, instead of constructing the (A, B, C, D) matrices, but this idea and all the techniques are from Hazan et al., 2017. After estimating this operator, it is straightforward to use it to generate the estimated optimal control input. So I think the idea is OK, but not a breakthrough. Also, I found the symmetric matrix assumption on A quite limited. This limitation comes from Hazan et al., 2017, where the authors want to predict the output. For prediction purposes, this restriction might be OK, but for control purposes, many interesting plants do not satisfy this assumption, even a simple RL circuit. I agree with the authors that this is an attempt to combine system identification with generating control inputs, but I am not sure how to remove the restriction on A. Dean et al., 2017 also pursued this direction by combining system identification with robust controller synthesis to handle estimation errors in the system matrices (A, B) in the state-feedback case (LQR), and I can see that Dean et al. could be extended to handle the observer-feedback case (LQG) without any restriction.

Despite this limitation, I think the paper's idea is OK and the result is worth publishing, but not in the current form. The paper is not clearly written, and there are several areas that need to be improved.

1. System identification. Subspace identification (N4SID) won't take exponential time. I recommend that the authors either perform a proper literature review or cite one or two papers on the time complexity and their weaknesses. Also note that subspace identification can estimate the (A, B, C, D) matrices, which is great for control purposes, especially for the infinite-horizon LQR.

2. Clarification on the unit ball constraints. Optimal control inputs are restricted to be inside the unit ball and the overall norm is bounded by L. Where is this restriction coming from? The standard LQG setup does not have this restriction.

3. Clarification on assumption (3). Where is this assumption coming from? I can see that this makes the analysis go through, but is this a reasonable assumption? Do most systems satisfy this constraint? Is there any that does? It's OK not to provide the answer if it's hard to analyze, but if that's the case the paper should provide some numerical case studies to show that this bound either holds or that the gap is negligible in the toy example.

4. Proof of Theorem 3.3. Theorem 3.3 is one of the key results in this paper, yet its proof is just "noted". The setup is slightly different from the original theorem in Hazan et al., 2017, including the noise model, so I strongly recommend including the original theorem and the full proof in the appendix.

5. Proof of Lemma 3.1. I found it hard to keep track of which quantity is inside the expectation. I recommend following the notation E[variable] the authors have been using throughout the paper in the proof, instead of dropping these brackets.

6. Minor typos. In Theorem 2.4, ||Q||_op is used for defining rho, but in the text ||Q||_F is used. I think ||Q||_op is right.
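To make the operator-estimation idea discussed above concrete, here is a generic, heavily simplified sketch (my own illustration): it fits a linear map from a feature representation of the recent input history to the observed outputs by least squares. The actual paper builds its features from spectral wave filters and then optimizes the control signal through a convex program; here the feature map is just a raw input window, and all dimensions and the toy system are placeholders.

```python
import numpy as np

def input_features(u, k):
    """Stack the last k inputs at each time step (placeholder for wave filters)."""
    T, m = u.shape
    padded = np.vstack([np.zeros((k - 1, m)), u])
    return np.stack([padded[t:t + k].ravel() for t in range(T)])  # (T, k*m)

rng = np.random.default_rng(0)
T, m, p, k = 200, 2, 3, 10
u = rng.normal(size=(T, m))                  # exciting inputs from training rollouts
M_true = rng.normal(size=(k * m, p)) / k     # unknown input-to-output operator (toy)
y = input_features(u, k) @ M_true + 0.01 * rng.normal(size=(T, p))

# Least-squares estimate of the operator mapping input features to outputs.
Phi = input_features(u, k)
M_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("relative estimation error:",
      np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))
```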
iclr_2018_rJ4uaX2aW
A common way to speed up training of deep convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with a mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. However, training with a large batch often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) does not work for many networks, e.g. for Alexnet, Googlenet,... We propose a more general training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). The key idea of LARS is to stabilize training by keeping the magnitude of update proportional to the norm of weights for each layer. This is done through gradient rescaling per layer. Using LARS, we successfully trained AlexNet and ResNet-50 to a batch size of 16K.
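A minimal sketch of the per-layer rescaling idea described in this abstract (one common formulation of LARS; the trust coefficient eta and weight decay beta are illustrative, and details such as momentum and warm-up are omitted):

```python
import torch

def lars_step(params, global_lr=1.0, eta=0.001, beta=5e-4):
    """One LARS-style update: layer-wise learning rate proportional to ||w|| / ||grad||."""
    with torch.no_grad():
        for w in params:
            if w.grad is None:
                continue
            w_norm, g_norm = w.norm(), w.grad.norm()
            # Layer-wise trust ratio: keeps the update magnitude proportional
            # to the weight norm of that layer.
            local_lr = eta * w_norm / (g_norm + beta * w_norm + 1e-12)
            w -= global_lr * local_lr * (w.grad + beta * w)

# Usage sketch on a toy model (architecture, loss and constants are placeholders).
model = torch.nn.Sequential(torch.nn.Linear(100, 50), torch.nn.ReLU(),
                            torch.nn.Linear(50, 10))
loss = model(torch.randn(32, 100)).pow(2).mean()
loss.backward()
lars_step(model.parameters())
```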
This paper proposes a training algorithm based on Layer-wise Adaptive Rate Scaling (LARS) to overcome the optimization difficulties of training with a large batch size. The authors use a linear scaling and warm-up scheme to train AlexNet on ImageNet. The results show promising performance when using a relatively large batch size. The presented method is interesting. However, the experiments are poorly organized, since some necessary descriptions and discussions are missing. My detailed comments are as follows.

Contributions:
1. The authors propose a training algorithm based on LARS with an adaptive learning rate for each layer, and train AlexNet and ResNet-50 to a batch size of 16K.
2. The training method shows stable performance and helps to avoid gradient vanishing or exploding.

Weak points:
The training algorithm does not overcome the optimization difficulties when the batch size becomes larger (e.g. 32K), where the training becomes unstable, and the training based on LARS and warm-up can't improve the accuracy compared to the baselines.

Specific comments:
1. In Algorithm 1, how were $ \eta $ and $ \beta $ chosen in the experiments?
2. Under the line of Equation (3), $ \nabla L(x_j, w_{t+1}) \approx L(x_j, w_{t}) $ should be $ \nabla L(x_j, w_{t+1}) \approx \nabla L(x_j, w_{t}) $.
3. How can the training algorithm based on LARS improve generalization for the large batch?
4. In the experiments, what is the parameter iter_size? How is it chosen?
5. In the experiments, no descriptions or discussions are given for Table 3, Figure 4, Table 4, Figure 5, Table 5 and Table 6. The authors should give more discussion of these tables and figures. Furthermore, the captions of these tables and figures are confusing.
6. On page 4, there is a statement "The ratio is high during the initial phase, and it is rapidly decreasing after few epochs (see Figure 2)." This is quite confusing, since Figure 2 shows the change of learning rates w.r.t. training epochs.
iclr_2018_HkgNdt26Z
DISTRIBUTED FINE-TUNING OF LANGUAGE MODELS ON PRIVATE DATA One of the big challenges in machine learning applications is that training data can be different from the real-world data faced by the algorithm. In language modeling, users' language (e.g. in private messaging) could change in a year and be completely different from what we observe in publicly available data. At the same time, public data can be used for obtaining general knowledge (i.e. general model of English). We study approaches to distributed fine-tuning of a general model on user private data with the additional requirements of maintaining the quality on the general data and minimization of communication costs. We propose a novel technique that significantly improves prediction quality on users' language compared to a general model and outperforms gradient compression methods in terms of communication efficiency. The proposed procedure is fast and leads to an almost 70% perplexity reduction and 8.7 percentage point improvement in keystroke saving rate on informal English texts. Finally, we propose an experimental framework for evaluating differential privacy of distributed training of language models and show that our approach has good privacy guarantees.
This paper deals with improving language models on mobile devices based on the small portion of text that the user has input. For this purpose, the authors employ a linearly interpolated objective between user-specific text and general English, and investigate which method (learning without forgetting and random rehearsal) and which interpolation work better. Moreover, the authors also look into a privacy analysis to guarantee that some level of differential privacy is preserved. Basically, the motivation and method are good; the drawbacks of this paper are its narrow scope and lack of necessary explanations. Reading the paper, many questions come to mind:

- The paper implicitly assumes that the statistics from all the users must be collected to improve "general English". Why is this necessary? Why not just use a good enough basic English model and the text of the target user?
- To achieve the goal above, a huge amount of data (not just the "portion of the general English") should be communicated over the network. Is this really worth doing? If only "the portion of" general English must be communicated, why is it validated?
- For measuring performance, the authors employ the keystroke saving rate. For the purpose of mobile input this is OK, but language models are used in many different situations where keystrokes are not necessarily available, such as speech recognition or machine translation. Since this paper is concerned with a general methodology of language modeling, perplexity improvement (or other generally applicable criteria) is also important.
- There is a huge amount of previous work on context-dependent language models, let alone mixtures of general English and specific models. Are there any comparisons with these previous efforts?

Finally, this research only relates to ICLR in that the language model employed is an LSTM; in other aspects, it fits more naturally with ordinary NLP conferences, such as EMNLP or NAACL. I would like to advise the authors to submit this work to such conferences, where it will be reviewed by more NLP experts.

Minor:
- t of $G_t$ on page 2 is not defined at that point.
- What is "gr" in Section 2.2?
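A minimal sketch of the interpolated fine-tuning objective discussed above (my own illustration; the mixing weight, model and data are placeholders, and the paper's actual variants, learning without forgetting and random rehearsal, differ in how the general-data term is obtained):

```python
import torch
import torch.nn.functional as F

def finetune_step(model, optimizer, user_batch, general_batch, lam=0.7):
    """One gradient step on lam * user-data loss + (1 - lam) * general-data loss."""
    optimizer.zero_grad()
    u_x, u_y = user_batch            # text typed by the user (private, on-device)
    g_x, g_y = general_batch         # a small rehearsal sample of general English
    loss = lam * F.cross_entropy(model(u_x), u_y) \
         + (1 - lam) * F.cross_entropy(model(g_x), g_y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a stand-in "language model" (a single linear layer over features).
model = torch.nn.Linear(64, 1000)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
user = (torch.randn(16, 64), torch.randint(0, 1000, (16,)))
general = (torch.randn(16, 64), torch.randint(0, 1000, (16,)))
print(finetune_step(model, opt, user, general))
```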
iclr_2018_ByBAl2eAZ
PARAMETER SPACE NOISE FOR EXPLORATION Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows to combine the best of both worlds. We demonstrate that both off-and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks.
This paper proposes a method for parameter space noise in exploration. Rather than the "baseline" epsilon-greedy (that sometimes takes a single action at random)... this paper presents an method for perturbations to the policy. In some domains this can be a much better approach and this is supported by experimentation. There are several things to like about the paper: - Efficient exploration is a big problem for deep reinforcement learning (epsilon-greedy or Boltzmann is the de-facto baseline) and there are clearly some examples where this approach does much better. - The noise-scaling approach is (to my knowledge) novel, good and in my view the most valuable part of the paper. - This is clearly a very practical and extensible idea... the authors present good results on a whole suite of tasks. - The paper is clear and well written, it has a narrative and the plots/experiments tend to back this up. - I like the algorithm, it's pretty simple/clean and there's something obviously *right* about it (in SOME circumstances). However, there are also a few things to be cautious of... and some of them serious: - At many points in the paper the claims are quite overstated. Parameter noise on the policy won't necessarily get you efficient exploration... and in some cases it can even be *worse* than epsilon-greedy... if you just read this paper you might think that this was a truly general "statistically efficient" method for exploration (in the style of UCRL or even E^3/Rmax etc). - For instance, the example in 4.2 only works because the optimal solution is to go "right" in every timestep... if you had the network parameterized in a different way (or the actions left/right were relabelled) then this parameter noise approach would *not* work... By contrast, methods such as UCRL/PSRL and RLSVI https://arxiv.org/abs/1402.0635 *are* able to learn polynomially in this type of environment. I think the claim/motivation for this example in the bootstrapped DQN paper is more along the lines of "deep exploration" and you should be clear that your parameter noise does *not* address this issue. - That said I think that the example in 4.2 is *great* to include... you just need to be more upfront about how/why it works and what you are banking on with the parameter-space exploration. Essentially you perform a local exploration rule in parameter space... and sometimes this is great - but you should be careful to distinguish this type of method from other approaches. This must be mentioned in section 4.2 "does parameter space noise explore efficiently" because the answer you seem to imply is "yes" ... when the answer is clearly NOT IN GENERAL... but it can still be good sometimes ;D - The demarcation of "RL" and "evolutionary strategies" suggests a pretty poor understanding of the literature and associated concepts. I can't really support the conclusion "RL with parameter noise exploration learns more efficiently than both RL and evolutionary strategies individually". This sort of sentence is clearly wrong and for many separate reasons: - Parameter noise exploration is not a separate/new thing from RL... it's even been around for ages! It feels like you are talking about DQN/A3C/(whatever algorithm got good scores in Atari last year) as "RL" and that's just really not a good way to think about it. 
- Parameter noise exploration can be *extremely* bad relative to efficient exploration methods (see section 2.4.3 https://searchworks.stanford.edu/view/11891201) Overall, I like the paper, I like the algorithm and I think it is a valuable contribution. I think the value in this paper comes from a practical/simple way to do policy randomization in deep RL. In some (maybe even many of the ones you actually care about) settings this can be a really great approach, especially when compared to epsilon-greedy. However, I hope that you address some of the concerns I have raised in this review. You shouldn't claim such a universal revolution to exploration / RL / evolution because I don't think that it's correct. Further, I don't think that clarifying that this method is *not* universal/general really hurts the paper... you could just add a section in 4.2 pointing out that the "chain" example wouldn't work if you needed to do different actions at each timestep (this algorithm does *not* perform "deep exploration"). I vote accept.
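For reference, a minimal sketch of the adaptive noise-scaling approach mentioned above (Python/numpy; the adaptation factor of 1.01 and the distance threshold are placeholders, so treat this as an illustration rather than the paper's exact scheme):

import numpy as np

def perturb_parameters(params, sigma, rng):
    # Noise is added directly to the policy weights, not to the actions.
    return [p + sigma * rng.standard_normal(p.shape) for p in params]

def adapt_sigma(sigma, action_space_distance, threshold, factor=1.01):
    # Grow the perturbation scale while the perturbed policy stays close to the
    # unperturbed one in action space; shrink it once it drifts too far.
    return sigma / factor if action_space_distance > threshold else sigma * factor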
iclr_2018_H1tSsb-AW
VARIANCE REDUCTION FOR POLICY GRADIENT WITH ACTION-DEPENDENT FACTORIZED BASELINES Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exacerbated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.
The paper proposes a variance reduction technique for policy gradient methods. The proposed approach justifies the utilization of action-dependent baselines, and quantifies the gains achieved by it over more general state-dependent or static baselines. The writing and organization of the paper are very well done. It is easy to follow, and succinct while being comprehensive. The baseline definition is well-motivated, and the benefits offered by it are quantified intuitively. There is only one, mostly minor, issue with the algorithm development, and the experiments need to be more polished. For the algorithm development, there is a relatively strong assumption that z_i^T z_j = 0. This assumption is not completely unrealistic (for example, it is satisfied if completely separate parts of a feature vector are used for actions). However, it should be highlighted as an assumption, and it should be explicitly stated as z_i^T z_j = 0 rather than z_i^T z_j approx 0. Further, because it is a relatively strong assumption, it should be discussed more thoroughly, with some explicit examples of when it is satisfied. Otherwise, the idea is simple and yet effective, which is exactly what we would like for our algorithms. The paper would be a much stronger contribution if the experiments could be improved. - More details regarding the experiments are desirable - how many runs were done, the initialization of the policy network and action-value function, the deep architecture used, etc. - The experiment in Figure 3 seems to reinforce the influence of \lambda as concluded by the Schulman et al. paper. While that is interesting, it seems unnecessary/non-relevant here, unless performance with action-dependent baselines with each value of \lambda is contrasted to the state-dependent baseline. What was the goal here? - In general, the graphs are difficult to read; fonts should be improved and the graphs polished. - The multi-agent task needs to be explained better - specifically, how is the information from the other agent incorporated in an agent's baseline? - It'd be great if Plots (a) and (b) in Figure 5 were swapped. Overall I think the idea proposed in the paper is beneficial. A better discussion of the strong theoretical assumption should be incorporated. Adding the listed suggestions to the experiments section would really help highlight the advantage of the proposed baseline in a clearer manner. Particularly with some clarity on the experiments, I would be willing to increase the score. Minor comments: 1. In Equation (28), how is the optimal state-dependent baseline obtained? This should be explicitly shown, at least in the appendix. 2. The listed site for videos and additional results is not active. 3. Some typos - Section 2 - 1st para - last line: "These methods are therefore usually more sample efficient, but can be less stable than critic-based methods.". - Section 4.1 - Equation (7) - missing subscript i for b(s_t,a_t^{-i}) - Section 4.2 - \hat{Q} is just Q in many places
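To make the structure of the action-dependent baseline concrete, here is a small sketch of how a per-dimension baseline enters the gradient estimate for a factorized policy (Python/numpy; the baseline functions b_i and the gradients are assumed given, so this only illustrates the form of the estimator, not the authors' implementation):

import numpy as np

def factored_pg_estimate(grad_log_probs, q_value, baselines):
    # grad_log_probs[i]: gradient of log pi_i(a_i | s) w.r.t. the parameters.
    # baselines[i]: b_i(s, a_{-i}), allowed to depend on all *other* action
    # dimensions, so each factor gets its own advantage term.
    return sum(g * (q_value - b) for g, b in zip(grad_log_probs, baselines))

# Toy example with a 3-dimensional action and dummy numbers:
grads = [np.array([0.1, -0.2]), np.array([0.0, 0.3]), np.array([0.05, 0.05])]
print(factored_pg_estimate(grads, q_value=1.2, baselines=[1.0, 1.1, 0.9]))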
iclr_2018_ByL48G-AW
We design a new policy, called a nearest neighbor policy, that does not require any optimization for simple, low-dimensional continuous control tasks. As this policy does not require any optimization, it allows us to investigate the underlying difficulty of a task without being distracted by optimization difficulty of a learning algorithm. We propose two variants, one that retrieves an entire trajectory based on a pair of initial and goal states, and the other retrieving a partial trajectory based on a pair of current and goal states. We test the proposed policies on five widely-used benchmark continuous control tasks with a sparse reward: Reacher, Half Cheetah, Double Pendulum, Cart Pole and Mountain Car. We observe that the majority (the first four) of these tasks, which have been considered difficult, are easily solved by the proposed policies with high success rates, indicating that reported difficulties of them may have likely been due to the optimization difficulty. Our work suggests that it is necessary to evaluate any sophisticated policy learning algorithm on more challenging problems in order to truly assess the advances from them.
This work shows that a simple non-parametric approach of storing state embeddings with the associated Monte Carlo returns is sufficient to solve several benchmark continuous control problems with sparse rewards (reacher, half-cheetah, double pendulum, cart pole) (due to the need to threshold a return, the algorithms work less well with dense rewards, but with the introduction of a hyper-parameter they are capable of solving several tasks there). The authors argue that the success of these simple approaches on these tasks suggests that more challenging problems need to be used to assess new RL algorithms. This paper is clearly written and it is important to compare simple approaches on benchmark problems. There are a number of interesting and intriguing side-notes and pieces of future work mentioned. However, the limited originality and significance of this work are a significant drawback. The use of non-parametric approaches to the action-value function goes back to at least [1] (and probably much further). So the algorithms themselves are not particularly novel, and are limited to nearly-deterministic domains with either a single sparse reward (success or failure) or the introduction of extra hyper-parameters per task. The significance of this work would still be quite strong if, as the authors suggest, these benchmarks were being widely used to assess more sophisticated algorithms and yet these tasks were mastered by such simple algorithms with no learnable parameters. Yet, the results do not support the claim. Even if we ignore that for most tasks only the sparse reward (which favors this algorithm) version was examined, the authors only demonstrate success on 4 relatively simple tasks. While these simple tasks are useful for diagnostics, it is well-known that these tasks are simple and, as the authors suggest, "more challenging tasks .... are necessary to properly assess advances made by sophisticated, optimization-based policy algorithms." Lillicrap et al. (2015) benchmarked against 27 tasks; Houthooft et al. (2016), compared against in the paper, also used Walker2D and Swimmer (not used in this paper), as did [2]; OpenAI Gym contains many more control environments than the 4 solved here; and significant research is pursuing complex manipulation and grasping tasks (e.g. [3]). This suggests the authors' claim has already been widely heeded and this work will be of limited interest. [1] Juan, C., Sutton, R. S., & Ram, A. Experiments with Reinforcement Learning in Problems with Continuous State and Action Spaces. [2] Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., & Meger, D. (2017). Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560. [3] Nair, A., McGrew, B., Andrychowicz, M., Zaremba, W., & Abbeel, P. (2017). Overcoming exploration in reinforcement learning with demonstrations. arXiv preprint arXiv:1709.10089.
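For context, a minimal sketch of the kind of non-parametric policy being evaluated (Python/numpy; the choice of key -- concatenating initial/current and goal states -- is an assumption for illustration, while the return threshold mirrors the hyper-parameter discussed above):

import numpy as np

class NearestNeighbourPolicy:
    """Store (key, action sequence) pairs; act by replaying the closest match."""

    def __init__(self, return_threshold=0.0):
        # With sparse rewards only successful trajectories get stored; with
        # dense rewards the threshold becomes an extra hyper-parameter per task.
        self.return_threshold = return_threshold
        self.keys, self.trajectories = [], []

    def add(self, start_state, goal_state, actions, episode_return):
        if episode_return > self.return_threshold:
            self.keys.append(np.concatenate([start_state, goal_state]))
            self.trajectories.append(list(actions))

    def retrieve(self, start_state, goal_state):
        query = np.concatenate([start_state, goal_state])
        dists = [np.linalg.norm(k - query) for k in self.keys]
        return self.trajectories[int(np.argmin(dists))]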
iclr_2018_B1KJJf-R-
We present a Neural Program Search, an algorithm to generate programs from natural language description and a small number of input / output examples. The algorithm combines methods from Deep Learning and Program Synthesis fields by designing rich domain-specific language (DSL) and defining efficient search algorithm guided by a Seq2Tree model on it. To evaluate the quality of the approach we also present a semi-synthetic dataset of descriptions with test examples and corresponding programs. We show that our algorithm significantly outperforms sequence-to-sequence model with attention baseline.
This paper presents a seq2Tree model to translate a problem statement in natural language to the corresponding functional program in a DSL. The model uses an RNN encoder to encode the problem statement and uses an attention-based doubly recurrent network for generating tree-structured output. The learnt model is then used to perform Tree-beam search using a search algorithm that searches for different completion of trees based on node types. The evaluation is performed on a synthetic dataset and shows improvements over seq2seq baseline approach. Overall, this paper tackles an important problem of learning programs from natural language and input-output example specifications. Unlike previous neural program synthesis approaches that consider only one of the specification mechanisms (examples or natural language), this paper considers both of them simultaneously. However, there are several issues both in the approach and the current preliminary evaluation, which unfortunately leads me to a reject score, but the general idea of combining different specifications is quite promising. First, the paper does not compare against a very similar approach of Parisotto et al. Neuro-symbolic Program Synthesis (ICLR 2017) that uses a similar R3NN network for generating the program tree incrementally by decoding one node at a time. Can the authors comment on the similarity/differences between the approaches? Would it be possible to empirically evaluate how the R3NN performs on this dataset? Second, it seems that the current model does not use the input-output examples at all for training the model. The examples are only used during the search algorithm. Several previous neural program synthesis approaches (DeepCoder (ICLR 2017), RobustFill (ICML 2017)) have shown that encoding the examples can help guide the decoder to perform efficient search. It would be good to possibly add another encoder network to see if encoding the examples as well help improve the accuracy. Similar to the previous point, it would also be good to evaluate the usefulness of encoding the problem statement by comparing the final model against a model in which the encoder only encodes the input-output examples. Finally, there is also an issue with the synthetic evaluation dataset. Since the problem descriptions are generated syntactically using a template based approach, the improvements in accuracy might come directly from learning the training templates instead of learning the desired semantics. The paper mentions that it is prohibitively expensive to obtain human-annotated set, but can it be possible to at least obtain a handful of real tasks to evaluate the learnt model? There are also some recent datasets such as WikiSQL (https://github.com/salesforce/WikiSQL) that the authors might consider in future. Questions for the authors: Why was MAX_VISITED only limited to 100? What happens when it is set to 10^4 or 10^6? The Search algorithm only shows an accuracy of 0.6% with MAX_VISITED=100. What would the performance be for a simple brute-force algorithm with a timeout of say 10 mins? Table 3 reports an accuracy of 85.8% whereas the text mentions that the best result is 90.1% (page 8)? What all function names are allowed in the DSL (Figure 1)? Can you clarify the contributions of the paper in comparison to the R3NN? Minor typos: page 2: allows to add constrains --> allows to add constraints page 5: over MAX_VISITED programs has been --> over MAX_VISITED programs have been
iclr_2018_Hy3MvSlRW
Machine reading has recently shown remarkable progress thanks to differentiable reasoning models. In this context, End-to-End trainable Memory Networks (MemN2N) have demonstrated promising performance on simple natural language based reasoning tasks such as factual reasoning and basic deduction. However, the task of machine comprehension is currently bounded to a supervised setting and available question answering dataset. In this paper we explore the paradigm of adversarial learning and self-play for the task of machine reading comprehension. Inspired by the successful propositions in the domain of game learning, we present a novel approach of training for this task that is based on the definition of a coupled attention-based memory model. On one hand, a reader network is in charge of finding answers regarding a passage of text and a question. On the other hand, a narrator network is in charge of obfuscating spans of text in order to minimize the probability of success of the reader. We experimented the model on several question-answering corpora. The proposed learning paradigm and associated models present encouraging results.
Summary: This paper proposes an adversarial learning framework for the machine comprehension task. Specifically, the authors consider a reader network which learns to answer the question by reading the passage and a narrator network which learns to obfuscate the passage so that the reader fails in its task. The authors report results on 3 different reading comprehension datasets, and the proposed learning framework improves the performance of GMemN2N. My Comments: This paper is a direct application of adversarial learning to the task of reading comprehension. It is a reasonable idea and the authors indeed show that it works. 1. The paper needs a lot of editing. Please check the minor comments. 2. Why is the adversary called a narrator network? It is a bit confusing because the job of that network is to obfuscate the passage. 3. Why do you motivate the learning method using self-play? This is just using the idea of adversarial learning (like GAN) and it is not related to self-play. 4. In section 2, first paragraph, the authors mention that the narrator prevents catastrophic forgetting. How is this happening? Can you elaborate more? 5. The learning framework is not explained in a precise way. What do you mean by re-initializing and retraining the narrator? Isn’t it costly to reinitialize the network and retrain it for every turn? How many such epochs are done? You say that the test set also contains obfuscated documents. Is it only for the validation set? Can you please explain if you use obfuscation when you report the final test performance too? It would be clearer if you could provide complete pseudo-code of the learning procedure. 6. How does the narrator choose which word to obfuscate? Do you run the narrator model with all possible obfuscations and pick the best choice? 7. Why don’t you treat the number of hops as a hyper-parameter and choose it based on the validation set? I would like to see the results in Table 1 where you choose the number of hops for each of the three models based on the validation set. 8. In Figure 2, how are rounds constructed? Does the model see the same document again and again 100 times, or does it see a random document each time, with documents sampled with replacement? This will be clear if you provide the pseudo-code for learning. 9. I do not understand the authors’ justification for Figure 3. Is it the case that the model learns to attend to the last sentences for all the questions? Or does where it attends vary across examples? 10. Are you willing to release the code for reproducing the results? Minor comments: Page 1, “exploit his own decision” should be “exploit its own decision”. On page 2, section 2.1, the sentence starting with “Indeed, a too low percentage …” needs to be fixed. Page 3, “forgetting is compensate” should be “forgetting is compensated”. Page 4, “for one sentences” needs to be fixed. Page 4, “unknow” should be “unknown”. Page 4, “??” needs to be fixed. Page 5, “for the two first datasets” needs to be fixed. Table 1, “GMenN2N” should be “GMemN2N”. In the caption, is it mean accuracy or maximum accuracy? Page 6, “dataset was achieves” needs to be fixed. Page 7, “document by obfuscated this word” needs to be fixed. Page 7, “overall aspect of the two first readers” needs to be fixed. Page 8, last para, references need to be fixed. Page 9, first sentence, please check grammar. Section 6.2, last sentence is irrelevant.
iclr_2018_rkWN3g-AZ
Style transfer usually refers to the task of applying color and texture information from a specific style image to a given content image while preserving the structure of the latter. Here we tackle the more generic problem of semantic style transfer: given two unpaired collections of images, we aim to learn a mapping between the corpus-level style of each collection, while preserving semantic content shared across the two domains. We introduce XGAN, a dual adversarial autoencoder, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions. We exploit ideas from the domain adaptation literature and define a semantic consistency loss which encourages the model to preserve semantics in the learned embedding space. We report promising qualitative results for the task of face-to-cartoon translation. The cartoon dataset we collected for this purpose will also be released as a new benchmark for semantic style transfer.
This paper proposes a new GAN-based model for unpaired image-to-image translation. The model is very similar to DTN [Taigman et al. 2016] except with trained encoders and a domain confusion loss to encourage the encoders to map source and target domains to a shared embedding. Additionally, an optional teacher network is introduced, but this feels rather tangential and problem-specific. The paper is clearly presented and I enjoyed the aesthetics of the figures. The method appears technically sound, albeit a bit complicated. The new cartoon dataset is also a nice contribution. My main criticism of this paper is the experiments. At the end of reading, I don’t know clearly which aspects of the method are important, why they are important, and how the proposed system compares against past work. First, the baselines are insufficient. Only DTNs are compared against, yet there are many other recent methods for unpaired image-to-image translation, notably, cycle-consistency-based methods and UNIT. These methods should also be compared against, as there is little evidence that DTNs are actually SOTA on cartoons (rather, the cartoon dataset was not public so other papers did not compare on that dataset). Second, although I appreciated the ablation experiments, they are not comprehensive, as discussed more below. Third, there is no quantitative evaluation. The paper states that quantifying performance on style transfer is an unsolved problem, but this is no excuse for not at least trying. Indeed, there are many proposed metrics in the literature for quantifying style transfer / image generation, including the Inception score [Salimans et al. 2016], conditional variants like the FCN-score [Isola et al. 2017], and human judgments. These metrics could all be adapted to the present task (with appropriate modifications, e.g., switching from Inception to a face attribute classifier). Additionally, as the paper mentions at the end, the method could be applied to domain adaptation, where plenty of standard metrics and benchmarks exist. Ultimately, the qualitative results in the paper are not convincing to me. It’s hard to see the advantages/disadvantages in each comparison. For example in Figure 8, it’s hard to even see any overall change in the outputs due to ablating the semantic consistency loss and the teacher loss (especially since I’m comparing these to Figure 6, which is referred to “Selected results” and therefore might not be a fair comparison). Perhaps the effect of the ablations would be clearer if the figures showed a single input followed by a series of outputs for that same input, each with a different term ablated. A careful reader might be able to examine the images for a long time and find some insights, but it would be much better if the paper distilled these insights into a more concise and convincing form. I feel sort of like I’m looking at raw data, and it still needs to be analyzed. I also think the ablations are not sufficiently comprehensive. In particular, there is no ablation of the domain adversarial loss. This seems like an important one to test since it’s one of the main differences from DTNs. I was a bit confused by the “finetuned DTN” in Section 7.2. Is this an ablation experiment where the domain adversarial loss and teacher loss are removed? If so, referring to it as so may be clearer than calling it a finetuned DTN. 
Interestingly, the results of this method look pretty decent, suggesting that the domain adversarial loss might not be having a big effect, in which case XGAN looks very close indeed to DTNs. It would be great here to actually quantify the mentioned sensitivity to hyperparameters. In terms of presentation, at several points, the paper argues that previous, pixel-domain methods are more limited than the proposed feature-space method, but little evidence is given to support these claims. For example, “we argue that such a pixel-level constraint is not sufficient in our case” in the intro, and “our proposed semantic consistency loss acts at the feature level, allowing for more flexible transformations” in related work. I would like to see more motivation for these assertions, and ultimately, the limitations should be concretely demonstrated in experiments. In models like CycleGAN the pixel-level constraint is between inputs and reconstructed inputs, and I don’t see why this necessarily is overly restrictive on the kinds of transformations in the outputs. The phrasing in the current paper seems to suggest that the pixel-level constraints are between input and output, which, I agree, would be directly restrictive. The reasoning here should be clarified. Better yet would be to provide empirical evidence that pixel-domain methods are not successful (e.g., by comparing against CycleGAN). The usage of the term “semantic” is also somewhat confusing. In what sense is the latent space semantic? The paper should clarify exactly what this term refers to, perhaps simply defining it to mean a “low-dimensional shared embedding.” I think the role of the GAN objective is somewhat underplayed. It is quite interesting that the current model achieves decent results even without the GAN. However, there is no experiment keeping the GAN but ablating other parts of the method. Other papers have shown that a GAN objective plus, e.g., cycle-consistency, can do quite well on this kind of problem. It could be that different terms in the current objective are somewhat redundant, so that you can choose any two or three, let’s say, and get good results. To check this, it would be great to see more comprehensive ablation experiments. Minor comments: 1. Page 1: I wouldn’t call colorization one-to-one. Even though there is a single ground truth, I would say colorization is one-to-many in the sense that many outputs may be equally probable according to a Bayes optimal observer. 2. Fig 1: It should be clarified that the left example is not a result of the method. At a glance this looks like an exciting new result and I think that could mislead casual readers. 3. Fig 1 caption: “an other” —> “another” 4. Page 2: “Recent work … fail for more general transformations” — DiscoGAN (Kim et al. 2017) showed some success beyond pixel-aligned transformations 5. Page 5: “particular,the” —> “particular, the”; quotes around “short beard” are backwards 6. Page 6: “founnd” —> “found” 7. Page 11: what is \mathcal{L}_r? I don’t see it defined above.
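As a concrete point of reference for the feature-level versus pixel-level discussion above, here is a schematic sketch of a semantic-consistency term (Python; the encoder/decoder names and the distance function are placeholders, and this illustrates the general idea rather than XGAN's exact loss):

def semantic_consistency_loss(x_a, enc_a, dec_b, enc_b, distance):
    # Encode an image from domain A, decode it into domain B, re-encode it with
    # the domain-B encoder, and penalize disagreement between the embeddings.
    # The constraint lives in the shared feature space, so large pixel-level
    # changes (e.g. face-to-cartoon geometry) are not directly penalized.
    embedding = enc_a(x_a)
    translated = dec_b(embedding)
    return distance(embedding, enc_b(translated))

# Toy usage with stand-in callables:
print(semantic_consistency_loss(1.0, lambda v: 2 * v, lambda e: e + 1, lambda y: 2 * y, lambda a, b: abs(a - b)))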
iclr_2018_B1X4DWWRb
Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing results in this space. Finally, we propose an algorithmic framework inspired by our bound and verify its effectiveness in causal effect estimation.
The paper proposes a novel way of causal inference in situations where in causal SEM notation the outcome Y = f(T,X) is a function of a treatment T and covariates X. The goal is to infer the treatment effect E(Y|T=1,X=x) - E(Y|T=0,X=x) for binary treatments at every location x. If the treatment effect can be learned, then forecasts of Y under new policies that assign treatment conditional on X will still "work" and the distribution of X can also change without affecting the accuracy of the predictions. What is proposed seems to be twofold: - instead of using a standard inverse probability weighting, the authors construct a bound for the prediction performance under new distributions of X and new policies and learn the weights by optimizing this bound. The goal is to avoid issues that arise if the ratio between source and target densities become very large or small and the weights in a standard approach would become very sparse, thus leading to a small effective sample size. - as an additional ingredient the authors also propose "representation learning" by mapping x to some representation Phi(x). The goal is to learn the mapping Phi (and its inverse) and the weighting function simultaneously by optimizing the derived bound on the prediction performance. Pros: - The problem is relevant and also appears in similar form in domain adaptation and transfer learning. - The derived bounds and procedures are interesting and nontrivial, even if there is some overlap with earlier work of Shalit et al. Cons: - I am not sure if ICLR is the optimal venue for this manuscript but will leave this decision to others. - The manuscript is written in a very compact style and I wish some passages would have been explained in more depth and detail. Especially the second half of page 5 is at times very hard to understand as it is so dense. - The implications of the assumptions in Theorem 1 are not easy to understand, especially relating to the quantities B_\Phi, C^\mathcal{F}_{n,\delta} and D^{\Phi,\mathcal{H}}_\delta. Why would we expect these quantities to be small or bounded? How does that compare to the assumptions needed for standard inverse probability weighting? - I appreciate that it is difficult to find good test datasets for evaluating causal estimator. The experiment on the semi-synthetic IHDP dataset is ok, even though there is very little information about its structure in the manuscript (even basic information like number of instances or dimensions seems missing?). The example does not provide much insight into the main ideas and when we would expect the procedure to work more generally.
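To make the two ingredients of the proposal concrete, here is a small numpy sketch of a re-weighted empirical risk plus an IPM penalty, instantiated with a (biased) weighted RBF-kernel MMD estimate. The paper's actual IPM, weighting scheme, and representation Phi may well differ, so this is purely an illustration of the structure being optimized:

import numpy as np

def rbf_gram(a, b, gamma=1.0):
    return np.exp(-gamma * ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))

def weighted_mmd2(phi_source, phi_target, w, gamma=1.0):
    # Squared MMD between the re-weighted source representation and the
    # uniformly weighted target representation.
    w = w / w.sum()
    v = np.full(len(phi_target), 1.0 / len(phi_target))
    return (w @ rbf_gram(phi_source, phi_source, gamma) @ w
            - 2 * w @ rbf_gram(phi_source, phi_target, gamma) @ v
            + v @ rbf_gram(phi_target, phi_target, gamma) @ v)

def objective(losses, w, phi_source, phi_target, lam=1.0):
    # Re-weighted empirical risk + IPM term; in the paper this is minimized
    # jointly over the hypothesis, the weights w, and the representation Phi.
    return float(np.mean(w * losses) + lam * weighted_mmd2(phi_source, phi_target, w))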
iclr_2018_rJNpifWAb
FLIPOUT: EFFICIENT PSEUDO-INDEPENDENT WEIGHT PERTURBATIONS ON MINI-BATCHES Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches. We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations. We show that flipout is effective at regularizing LSTMs, and outperforms previous methods. Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services.
In this article, the authors offer a way to decrease the variance of the gradient estimation in the training of neural networks. They start in the Introduction and Section 2 by explaining the multiple uses of random connection weights in deep learning and how the computational cost often restricts their use to a single randomly sampled set of weights per minibatch, which results in higher-variance gradient estimators than could be achieved otherwise. In Section 3 the authors propose to get the benefits of multiple weights without most of the cost, when the distribution of the weights is symmetric and fully factorized, by multiplying sampled-once random perturbations of the weights by a rank-1 random sign matrix. This efficient mechanism is only twice as costly as a single random perturbation, and the authors show how to efficiently parallelize it on GPUs, thereby also allowing GPU-ization of evolution strategies (something so far difficult to achieve). Of note, they provide a theoretical analysis in Section 3.2, proving the actual variance reduction of their efficient pseudo-sampling scheme. In Section 4 they provide quite varied empirical analysis: they confirm their theoretical results on four architectures; they show its use to regularise language models; they apply it in large-minibatch settings where high variance is a main problem; and they apply it to evolution strategies. While it is a rather simple idea which could be summarised much earlier in the single equation (3), I really like the thoroughness and the clarity of the exposition of the idea. Too many papers in our community skimp on details and on formalism, and it is a delight to see things exposed so clearly -- even accompanied by a proof. However, the painful part: while I am convinced by the idea and love its detailed exposition, and the gradient variance reduction is made very clear, the experimental impact in terms of accuracy (or perplexity) is, sadly, not very convincing. Nowhere in the text did I find a clear rationale of why it is beneficial to reduce the variance of the gradient. The numerical results in Table 1 and Table 2 also do not show a clear improvement: Flipout does not provide the best accuracy. The gain in wall clock could be a factor, but would need to be measured on the figures more clearly. And the validation errors in Figure 2 for evolution strategies seem to be worse than backprop. The main text itself also only claims performance “comparable to the other methods”. The only visible gain is on the lower part of Figure 2.a on a ConvNet. This makes me wonder if the authors could do a better job of putting forward the actual advantages of their methods on the end-results: could the wall-clock measure be emphasized more, to justify the extra work? This would, in my mind, strongly improve the case for publication of this article. A few improvement suggestions: * Could put more emphasis earlier on the superiority to the Local Reparameterization Trick in terms of architecture, not wait until Section 2.2 and Section 4.1. * Should also put more emphasis on limitations, not wait until 3.1. * Proposition 1 is quite straightforward, not sure it deserves a proposition, but it’s elegant to put it forward. * Footnote 1 on re-using the matrices is indeed practical, but also somewhat surprising in terms of bias risks. Could it be explained in more depth, maybe by the random permutations of the minibatches making the bias non-systematic and cancelling out?
* Theorem 1: For readability, could merge the expectations on the joint distribution as E_{x, \hat \delta W}, rather than separate expectations with the conditional distributions. * Theorem 1: could the authors provide a clearer intuitive explanation of the \beta term alone, not only as part of \alpha + \beta, especially as it plays such a key role, being the only one that does not disappear? And how do they explain their empirical observation that \beta is close to 0? Any intuition on that? * Experiments: I salute the authors for providing all the details in an exhaustive manner in the Appendix. Very commendable. * Experiments: I like the empirical verification of the theory. Very neat to see. Minor typo: * page 2 last paragraph, “evolution strategies” is plural but the verbs are used in singular (“is black box”, “It doesn’t”, “generates”)
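For readers who want the mechanics of the rank-1 sign trick spelled out, a minimal numpy sketch of a flipout-perturbed fully connected layer (the weight mean and a single sampled perturbation are shared across the batch; the per-example sign vectors do the decorrelation):

import numpy as np

def flipout_linear(x, w_mean, delta_w_hat, rng):
    # x: (batch, d_in); w_mean, delta_w_hat: (d_in, d_out).
    # delta_w_hat is one shared sample of the symmetric, fully factorized
    # weight perturbation; r and s are independent +/-1 vectors per example,
    # giving each example a pseudo-independent perturbation at roughly twice
    # the cost of a single shared perturbation.
    s = rng.choice([-1.0, 1.0], size=x.shape)
    r = rng.choice([-1.0, 1.0], size=(x.shape[0], w_mean.shape[1]))
    return x @ w_mean + ((x * s) @ delta_w_hat) * r

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
out = flipout_linear(x, rng.standard_normal((3, 2)), 0.1 * rng.standard_normal((3, 2)), rng)
print(out.shape)  # (4, 2)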
iclr_2018_ry-TW-WAb
Published as a conference paper at ICLR 2018 VARIATIONAL NETWORK QUANTIZATION In this paper, the preparation of a neural network for pruning and few-bit quantization is formulated as a variational inference problem. To this end, a quantizing prior that leads to a multi-modal, sparse posterior distribution over weights, is introduced and a differentiable Kullback-Leibler divergence approximation for this prior is derived. After training with Variational Network Quantization, weights can be replaced by deterministic quantization values with small to negligible loss of task accuracy (including pruning by setting weights to 0). The method does not require fine-tuning after quantization. Results are shown for ternary quantization on LeNet-5 (MNIST) and DenseNet (CIFAR-10).
This paper presents Variational Network Quantization; a variational Bayesian approach for quantising neural network weights to ternary values post-training in a principled way. This is achieved by a straightforward extension of the scale mixture of Gaussians perspective of the log-uniform prior proposed at [1]. The authors posit a mixture of delta peaks hyperprior over the locations of the Gaussian distribution, where each peak can be seen as the specific target value for quantisation (including zero to induce sparsity). They then further propose an approximation for the KL-divergence, necessary for the variational objective, from this multimodal prior to a factorized Gaussian posterior by appropriately combining the approximation given at [2] for each of the modes. At test-time, the variational posterior for each weight is replaced by the target quantisation value that is closest, w.r.t. the squared distance, to the mean of the Gaussian variational posterior. Encouraging experimental results are shown with performance comparable to the state-of-the-art for ternary weight neural networks. This paper presented a straightforward extension of the work done at [1, 2] for ternary networks through a multimodal quantising prior. It is generally well-written, with extensive preliminaries and clear equations. The visualizations also serve as a nice way to convey the behaviour of the proposed approach. The idea is interesting and well executed so I propose for acceptance. I only have a couple of minor questions: - For the KL-divergence approximation you report a maximum difference of 1 nat per weight that seems a bit high; did you experiment with the `naive` Monte Carlo approximation of the bound (e.g. as done at Bayes By Backprop) during optimization? If yes, was there a big difference in performance? - Was pre-training necessary to obtain the current results for MNIST? As far as I know, [1] and [2] did not need pre-training for the MNIST results (but did employ pre-training for CIFAR 10). - How necessary was each one of the constraints during optimization (and what did they prevent)? - Did you ever observe posterior means that do not settle at one of the prior modes but rather stay in between? Or did you ever had issues of the variance growing large enough, so that q(w) captures multiple modes of the prior (maybe the constraints prevent this)? How sensitive is the quantisation scheme? Other minor comments / typos: (1) 7th line of section 2.1 page 2, ‘a unstructured data’ -> ‘unstructured data’ (2) 5th line on page 3, remove ‘compare Eq. (1)’ (or rephrase it appropriately). (3) Section 2.2, ’Kullback-Leibler divergence between the true and the approximate posterior’; between implies symmetry (and the KL isn’t symmetric) so I suggest to change it to e.g. ‘from the true to the approximate posterior’ to avoid confusion. Same for the first line of Section 3.3. (4) Footnote 2, the distribution of the noise depends on the random variable so I would suggest to change it to a general \epsilon \sim p(\epsilon). (5) Equation 4 is confusing. [1] Louizos, Ullrich & Welling, Bayesian Compression for Deep Learning. [2] Molchanov, Ashukha & Vetrov, Variational Dropout Sparsifies Deep Neural Networks.
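For clarity on the final deterministic quantization step discussed above, a tiny numpy sketch (the ternary target values are placeholders; in the method they correspond to the modes of the learned quantizing prior, with 0 yielding pruning):

import numpy as np

def quantize(posterior_means, targets=(-0.33, 0.0, 0.33)):
    # Replace each variational posterior mean by the nearest quantization
    # target under squared distance; weights assigned to 0 are pruned.
    targets = np.asarray(targets)
    idx = np.argmin((posterior_means[..., None] - targets) ** 2, axis=-1)
    return targets[idx]

print(quantize(np.array([0.4, -0.05, -0.5, 0.1])))  # [ 0.33  0.   -0.33  0.  ]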
iclr_2018_Syl3_2JCZ
Working memory requires information about external stimuli to be represented in the brain even after those stimuli go away. This information is encoded in the activities of neurons, and neural activities change over timescales of tens of milliseconds. Information in working memory, however, is retained for tens of seconds, suggesting the question of how time-varying neural activities maintain stable representations. Prior work shows that, if the neural dynamics are in the 'null space' of the representation -so that changes to neural activity do not affect the downstream read-out of stimulus information -then information can be retained for periods much longer than the time-scale of individual-neuronal activities. The prior work, however, requires precisely constructed synaptic connectivity matrices, without explaining how this would arise in a biological neural network. To identify mechanisms through which biological networks can self-organize to learn memory function, we derived biologically plausible synaptic plasticity rules that dynamically modify the connectivity matrix to enable information storing. Networks implementing this plasticity rule can successfully learn to form memory representations even if only 10% of the synapses are plastic, they are robust to synaptic noise, and they can represent information about multiple stimuli.
A neural network model consisting of recurrently connected neurons and one or more readouts is introduced which aims to retain some output over time. A plasticity rule for this goal is derived. Experiments show the robustness of the network with respect to noisy weight updates, number of non-plastic connections, and sparse connectivity. Multiple consecutive runs increase the performance; furthermore, remembering multiple stimuli is possible. Finally, ideas for the biological implementation of the rule are suggested. While the presentation is generally comprehensible, a number of errors and deficits exist (see below). In general, this paper addresses a question that seems only relevant from a neuroscience perspective. Therefore, I wonder whether it is relevant in terms of the scope of this conference. I also think that the model is rather speculative. The authors argue that the resulting learning rule is biologically plausible. But even if this is the case, it does not imply that it is implemented in neuronal circuits in the brain. As far as I can see, there exists no experimental evidence for this rule. The paper shows the superiority of the proposed model over the approach of Druckmann & Chklovskii (2012); however, it lacks in-depth analysis of the network behavior. Specifically, it is not clear how the information is stored. Do neurons show time-varying responses as in Druckmann & Chklovskii (2012) or do all neurons stabilize within the first 50 ms (as in Fig. 2A; it is not detailed how the neurons shown there have been selected)? Do the weights change continuously within the delay period or do they also converge rapidly? This question is particularly important when considering multiple consecutive trials (cf. Fig. 5) as it seems that a specific but constant network architecture can retain the desired stimulus without further plasticity. Weight histograms should be presented for the different cases and network states. Also, since autapses are allowed, an analysis of their role should be performed. This information is vital to compare the model to findings from neuroscience and judge the biological realism. The target used is \hat{s}(t) / \hat{s}(t = 0), which is dubbed "fraction of stimulus retained". In most plots, the values for this measure are <= 1, but in Fig. 3A, the value (for the FEVER network) is > 1. Thus, the name is arguably not well-chosen: how can a fraction of remembrance be greater than one? Also, in a realistic environment, it is not clear that the neuronal activities decay to zero (resulting in \hat{s}(t) also approaching zero). A squared-distance measure should therefore be considered. It is not clear from the paper when and how often weight updates are performed. Therefore, the biological plausibility cannot be assessed, since the learning rule might lead to much more rapid changes of weights than the known learning rules in biological neural networks. Since the goal seems to be biological realism, generally, spiking neurons should be used for the model. This is important as spiking neural networks are much more fragile than artificial ones in terms of stability. Further remarks: - In Sec. 3.2.1, noise is added to weight updates. The absolute values of alpha are hard to interpret since it is not clear in what range the weights, activities, and weight updates typically lie. - In Sec. 3.2.2 it is shown that 10% plastic synapses are enough for reasonable performance.
In this case, it should be investigated whether the full network is essential for the memory task at all (especially since later, it is argued that 100 neurons can store up to 100 stimuli). - For biological realism, just assuming that the readout value at t = 0 is the target seems a bit too simple. How does this output arise in the first place? At least, an argument for this choice should be presented. Remarks on writing: - Fig. 1A is too small to read. - The caption of Fig. 4C is missing. - In Fig. 7AB, q_i and q_j are swapped. Also, it is unclear in the figure to which connection the ds and qs belong. - In 3.6.1, Fig. 7 is referenced, but in the figure the terminology of Eq. 5 is used, which is only introduced in Sec. 3.6.2. This is confusing. - The beginning of Sec. 3.6 claims that all information is local except d\hat{s}_k / dt, but this is not the case as d_i is not local (which is explained later). - The details of the "stimulus presentation" (i.e. it is not performed explicitly) should be emphasised in 2.1. Also, the description of the target \hat{s} is much clearer in 3.4 than in 2.1 (where it should primarily be explained). - The title of the citation Cowan (2010) is missing. - In Fig. 2A, the formulas are too small to read in a printed version. - In Sec. 3.6.1 some sums are given over k, but k is also the index of a neuron in Fig. 7A (which is referenced there); this can be ambiguous and could be changed.
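To make the measurement point above concrete, here are the two retention measures under discussion (Python/numpy; d is the readout weight vector and r the neural activity vector -- purely illustrative):

import numpy as np

def readout(d, r):
    return float(np.dot(d, r))

def fraction_retained(s_hat_t, s_hat_0):
    # The paper's measure: it can exceed 1 and blows up if s_hat_0 approaches 0.
    return s_hat_t / s_hat_0

def squared_retention_error(s_hat_t, s_hat_0):
    # The alternative suggested above: bounded below by 0 and still meaningful
    # when activities (and hence the readout) decay toward zero.
    return (s_hat_t - s_hat_0) ** 2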
iclr_2018_SJahqJZAW
Training generative adversarial networks is unstable in high-dimensions as the true data distribution tends to be concentrated in a small fraction of the ambient space. The discriminator is then quickly able to classify nearly all generated samples as fake, leaving the generator without meaningful gradients and causing it to deteriorate after a point in training. In this work, we propose training a single generator simultaneously against an array of discriminators, each of which looks at a different random low-dimensional projection of the data. Individual discriminators, now provided with restricted views of the input, are unable to reject generated samples perfectly and continue to provide meaningful gradients to the generator throughout training. Meanwhile, the generator learns to produce samples consistent with the full data distribution to satisfy all discriminators simultaneously. We demonstrate the practical utility of this approach experimentally, and show that it is able to produce image samples with higher quality than traditional training with a single discriminator.
The paper proposes to stabilize GAN training by using an ensemble of discriminators, each working on a random projection of the input data, to provide the training signal for the generator model. Q1: In relation to “Theorem 3.1. … will produce samples from a distribution whose marginals along each of the projections W_k match those of the true distribution”: I presume an infinite number of generator distributions could give rise to the correct marginals without necessarily having converged to the data distribution. In Theorem A.2 the authors upper-bound this residual as a function of the smoothness and support of the distributions as well as the projections presented to the discriminators. Can the authors comment on how tight this bound is, e.g. as a function of the number of discriminators used or the chosen projection methods? Q2: Related to the above. Did the authors do or consider any frequency analysis of the ensemble of random projections? I guess you could easily do a numeric simulation of the expected frequency spectrum of the combined set of discriminators? Q3: My primary concern with the work is the above-mentioned computational complexity of running K discriminators in parallel. This is especially in relation to the experimental results showing significant high-frequency artefacts when running with K=12 classifiers (K=12 celebA results and “Random Imagenet-Canine Images: Proposed Method” in the supplementary results). I think this is as expected, as the authors are effectively fitting each classifier to the distributions of smoothed (with an 8x8 random kernel), subsampled versions of the input image. I would expect that each discriminator sees none or only a very limited amount of the high-frequency content in the images. Do the authors have any comments on how the sampling of the projection kernels affects the image results, and especially whether the number of needed classifiers can be reduced somehow? I would expect that a combination of smoothing and high-frequency filters would be needed to remove the high-frequency artefacts. Q4: What's the explanation of the oscillating patterns in Figure 2? Q5: In the conclusion the authors mention that their framework is currently limited by the computational cost of running K discriminators and propose: “In our current framework, the number of discriminators is limited by computational cost. In future work, we plan to investigate training with a much larger set of discriminators, employing only a small subset of them at each iteration, or every set of iterations” In the extreme case of only using a single randomly chosen discriminator, isn't the approach quite similar to the widely used input dropout for the discriminator? Overall I like the simplicity of the proposed idea. However, I'm not completely convinced that the “marginal” convergence proof holds for the relatively low number of discriminators possible to use in practice. At least I would like the authors to touch on this key aspect of the method both theoretically and with experiments/simulations. Also, several other methods have recently been proposed to improve the stability of GANs; however, no experimental comparison is made with these methods (WGAN, EGAN, LSGAN etc.)
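For concreteness regarding Q2/Q3, a small numpy sketch of the kind of restricted view each discriminator receives: a fixed random 8x8 filter applied with a stride (the stride value here is an assumption for illustration; the paper's exact projection may differ). Each D_k would only ever see such a projection, which is why most high-frequency content is invisible to it:

import numpy as np

def random_projection(image, kernel, stride=8):
    # image: (H, W); kernel: fixed random (8, 8) filter drawn once per discriminator.
    kh, kw = kernel.shape
    rows = range(0, image.shape[0] - kh + 1, stride)
    cols = range(0, image.shape[1] - kw + 1, stride)
    return np.array([[(image[i:i + kh, j:j + kw] * kernel).sum() for j in cols]
                     for i in rows])

rng = np.random.default_rng(0)
proj = random_projection(rng.standard_normal((64, 64)), rng.standard_normal((8, 8)))
print(proj.shape)  # (8, 8)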
iclr_2018_rJHcpW-CW
In this paper, we propose a mix-generator generative adversarial networks (PGAN) model that works in parallel by mixing multiple disjoint generators to approximate a complex real distribution. In our model, we propose an adjustment component that collects all the generated data points from the generators, learns the boundary between each pair of generators, and provides error to separate the support of each of the generated distributions. To overcome the instability in a multiplayer game, a shrinkage adjustment component method is introduced to gradually reduce the boundary between generators during the training procedure. To address the linearly growing training time problem in a multiple generators model, we propose a method to train the generators in parallel. This means that our work can be scaled up to large parallel computation frameworks. We present an efficient loss function for the discriminator, an effective adjustment component, and a suitable generator. We also show how to introduce the decay factor to stabilize the training procedure. We have performed extensive experiments on synthetic datasets, MNIST, and CIFAR-10. These experiments reveal that the error provided by the adjustment component could successfully separate the generated distributions and each of the generators can stably learn a part of the real distribution even if only a few modes are contained in the real distribution.
Overall, the writing is very confusing at points and needs some attention to make the paper clearer. I’m not entirely sure the authors understand the material particularly well, as I found some of the arguments and narrative confusing or just incorrect. I don’t really see any significant contribution here except “we had this idea for this model, and it works”. There’s no interesting questions being asked about missing modes (and no answers through good experimentation), no insight that might contribute to our understanding of the problem, and no comparison to other models. My guess is this submission was rushed (and perhaps they were just looking for feedback). I like the idea, don’t get me wrong: a model that is trainable across multiple GPUs and that distributes generative work is pretty cool, and I want to see this work succeed (after a *lot* more work). But the paper really lacks what I’d consider good science, and I don’t see it publishable without significant improvement. Personally I think you should change the angle from missing modes to parallel training. I don’t see any strong guarantees that the model will do what you say it will, especially as beta goes to zero. Detailed comments P1 “, that explicitly approximate data distribution, the approximation of GAN is implicit” The wording of this is pretty strange: by “implicit”, we mean that we only have *samples* from the distribution(s) of interest, but what does it mean for an approximation to be “implicit”? From the intro, it doesn’t sound like the approach is meant for the “mode collapse” problem, but for dealing with missing modes. These are different types of failures for GANs, and while there are many theories for why these happen, to my knowledge there’s no such consensus that these issues are the same. For instance, what is keeping each of the generators from collapsing onto a single value? We often see the model collapse on several different values: why couldn’t each of your generators do this? P2: No, it is incorrect that the KL is what is causing mode collapse, and I think actually you mean “missing modes”. Arjovsky et al addresses the mode collapse problem, which is just another word for a type of instability in GANs. But this isn’t because of “vanishing gradients”, as the “proxy loss” (which you call “heuristic loss”, this isn’t a common term, fyi), which is what GANs are trained on in practice don’t vanish, but show some other sorts of instabilities (Arjovsky 2016). That said, other GAN variants without regularization also show collapse *and* missing modes, such as LSGAN and all the f-GAN variants (even the auto encoder variants). You should also probably cite Che et al 2016 as another model that addressed missing modes. Also, what about ALI, BiGAN, and ALiCE? These also address missing modes (at least they claim to). I don’t understand why you’re comparing f-GAN and WGAN convergences: they are addressing different things with GANs: one shows insight into what exactly traditional GANs are doing (solving a dual problem of minimizing an f-divergence) versus addressing stability through using an IPM (though also a dual formulation of the wasserstein). f-GANs ensure neither stability nor non-vanishing gradients. P3: I like the breakdown of how the memory is organized. This is for multi-GPU, correct? This needs to be explicitly stated. P6: There’s a sign error in proof 1 (both in the definition of the reverse KL and when the loss is written out). Also, the gradient w.r.t. theta magically appears in the second half. 
This is a pretty round-about way to arrive at the fact that you're minimizing the reverse KL: I'm pretty sure this can be shown by reformulating the second term in f-GAN (the one where you sample from the generator), that is f*(T), where f* is the convex conjugate of f = -log.

Mixture of Gaussians: this is the common *missing modes* experiment. So my general comments about the experiments: you need to compare to other models that address missing modes. Overall, many people have shown success with experiments similar to your simple mixture-of-Gaussians experiments, so in order to show something significant here, you will need more challenging experiments and a comparison to other models. The real-world experiments are fairly unconvincing, as you only show MNIST and CIFAR-10 (and MNIST doesn't look very good). Overall, the good Inception scores aren't too surprising given the model has several generators for each mode, but I think we need to see a demonstration on better datasets.
iclr_2018_SkVqXOxCb
Published as a conference paper at ICLR 2018 COULOMB GANS: PROVABLY OPTIMAL NASH EQUI- LIBRIA VIA POTENTIAL FIELDS Generative adversarial networks (GANs) evolved into one of the most successful unsupervised techniques for generating realistic images. Even though it has recently been shown that GAN training converges, GAN models often end up in local Nash equilibria that are associated with mode collapse or otherwise fail to model the target distribution. We introduce Coulomb GANs, which pose the GAN learning problem as a potential field, where generated samples are attracted to training set samples but repel each other. The discriminator learns a potential field while the generator decreases the energy by moving its samples along the vector (force) field determined by the gradient of the potential field. Through decreasing the energy, the GAN model learns to generate samples according to the whole target distribution and does not only cover some of its modes. We prove that Coulomb GANs possess only one Nash equilibrium which is optimal in the sense that the model distribution equals the target distribution. We show the efficacy of Coulomb GANs on LSUN bedrooms, CelebA faces, CIFAR-10 and the Google Billion Word text generation.
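For intuition, a minimal sketch (not the authors' code) of the potential-field idea: real samples contribute attraction, generated samples contribute repulsion, and the generator would move its samples along the gradient of the resulting potential. The simple 1/sqrt form of the Plummer kernel and the value of epsilon are assumptions.

```python
# Empirical Coulomb-style potential at a set of query points, estimated from
# minibatches of real and generated samples.
import numpy as np

def plummer_kernel(a, b, eps=1.0):
    # pairwise values k(a_i, b_j) = 1 / sqrt(||a_i - b_j||^2 + eps^2)  (assumed form)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return 1.0 / np.sqrt(d2 + eps ** 2)

def potential(x, real, fake, eps=1.0):
    # Phi(x) = E_real[k(x, y)] - E_fake[k(x, z)]; in the full model the
    # generator ascends the gradient of this potential at its own samples.
    return plummer_kernel(x, real, eps).mean(1) - plummer_kernel(x, fake, eps).mean(1)

rng = np.random.default_rng(0)
real = rng.normal(loc=3.0, size=(256, 2))
fake = rng.normal(loc=0.0, size=(256, 2))
print(potential(fake, real, fake).mean())   # potential evaluated at the generated points
```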
In this paper, the authors interpret the training of GAN by potential field and inspired from which to provide new training procedure for GAN. They claim that under the condition that global optima are achieved for discriminator and generator in each iteration, the Coulomb GAN converges to the global solution. I think there are several points need to be addressed. 1, I agree that the "model collapsing" is due to converging to a local Nash Equilibrium. However, there are more reasons besides the drawback of the loss function, which is emphasized in the paper. Leave the stochastic gradient descent optimization algorithm apart (since most of the neural networks are trained in this way), the parametrization and the richness of discriminator family play a vital role in the model collapsing issue. In fact, even with KL-divergence in which log operation is involved, if one can select reasonable parametrization, e.g., directly handling in function space, the saddle point optimization is convex-concave, which means under the same assumption made in the paper, there is only one global Nash Equilibrium. On the other hand, the richness of the discriminator also important in the training of GAN. I did not get the point about the drawback of III. If indeed as the paper considered in the ideal case, the discriminator is rich enough, III cannot happen. The model collapsing is not just because loss function in training GAN. It is caused by the twist of these three issues listed above. Modifying the loss can avoid partially model collapsing, however, it is not appropriate to claim that the proposed algorithm is 'provable'. The assumption in this paper is too restricted, and the discussion is unfair to the existing variants of GAN, e.g., GMMN or Wasserstein GAN, which under some assumptions, there is also only one global Nash Equilibrium. 2, In the training procedure, the discriminator family is important as we discussed. The paper claims that the reason to introduce the extra discriminator is reducing variance. However, such parametrization will introduce bias too. The bias and variance tradeoff should be explicitly discussed here. Ideally, it should contain all the functions formed with Plummer kernel, but not too large (otherwise, it will increase the sample complexity.). Which function family used in the paper is not clear. 3, As the authors already realized, the GMMN is one closely related model. It will be more convincing to add the comparison with GMMN. In sum, this paper provides an interesting perspective modeling GAN from the potential field, however, there are several issues need to be addressed. I expect to see the reply of the authors regarding the mentioned issues.
iclr_2018_SJUX_MWCZ
When machine learning models are used for high-stakes decisions, they should predict accurately, fairly, and responsibly. To fulfill these three requirements, a model must be able to output a reject option (i.e. say "I Don't Know") when it is not qualified to make a prediction. In this work, we propose learning to defer, a method by which a model can defer judgment to a downstream decision-maker such as a human user. We show that learning to defer generalizes the rejection learning framework in two ways: by considering the effect of other agents in the decision-making process, and by allowing for optimization of complex objectives. We propose a learning algorithm which accounts for potential biases held by decision-makers later in a pipeline. Experiments on real-world datasets demonstrate that learning to defer can make a model not only more accurate but also less biased. Even when operated by highly biased users, we show that deferring models can still greatly improve the fairness of the entire pipeline.
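One plausible form of a learning-to-defer objective is sketched below: the model outputs a prediction and a defer probability, the pipeline loss mixes the model's loss with the downstream decision-maker's loss, and a deferral cost plus a toy fairness penalty are added. The exact regularizers and weights are assumptions, not the paper's loss.

```python
# Hedged sketch of a learning-to-defer style objective for a binary task.
import torch
import torch.nn.functional as F

def defer_loss(p_model, p_defer, p_dm, y, group, gamma_defer=0.1, gamma_fair=1.0):
    """p_model, p_dm: predicted P(y=1); p_defer: P(defer); group: binary sensitive attribute."""
    ce_model = F.binary_cross_entropy(p_model, y, reduction="none")
    ce_dm = F.binary_cross_entropy(p_dm, y, reduction="none")
    # Expected loss of the two-stage pipeline, plus a cost for deferring.
    pipeline = (1 - p_defer) * ce_model + p_defer * ce_dm + gamma_defer * p_defer
    # Toy fairness term: demographic-parity gap of the pipeline's output.
    p_out = (1 - p_defer) * p_model + p_defer * p_dm
    gap = (p_out[group == 1].mean() - p_out[group == 0].mean()).abs()
    return pipeline.mean() + gamma_fair * gap
```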
Strengths: 1. This paper proposes a novel framework for ensuring fairness in the classification pipeline. To this end, this work explores models that learn to defer. 2. The work is conceptually very interesting. The idea of learning to defer (as proposed in the paper) as a means to fairness is not only novel but also quite apt. 3. Experimental results demonstrate that the proposed learning strategy can not only increase predictive accuracy but also reduce bias in decisions. Weaknesses: 1. While this work is conceptually quite novel and interesting, the technical novelty and contributions seem fairly minimal. 2. The proposed formulations are essentailly regularized variants of fairly standard classification models and the optimization also relies upon standard search procedures. 3. Experimental analysis on deferring to a biased decision maker (Section 7.3) is rather limited. Summary: This paper proposes a novel framework for ensuring fairness in the classification pipeline. More specifically, the paper outlines a strategy called learn to defer which enables the design of predictive models which not only classify accurately and fairly but also defer if necessary. Deferring a decision is used as a mechanism to ensure both fairness and accuracy. Furthermore, the authors consider two variants depending on if the model has some information about the decision maker or not. Experimental results on real world datasets demonstrate the effectiveness of the proposed approach in building an end to end pipeline that ensures accuracy and fairness. Novelty: The main novelty of this work stems from the idea of introducing learning to defer mechanisms in the context of fairness. While the ideas of learning to defer have already been studied in the context of classification models, this is the first contribution which leverages learning to defer strategy as a means to achieve fairness. However, beyond this conceptual novelty, the work does not demonstrate a lot of technical novelty or depth. The objective functions proposed are simple extensions of work done by Zafar et. al. (WWW, AISTATS 2017). The optimization procedures being used are also fairly standard. Furthermore, the authors do not carry out any rigorous theoretical analysis either. Other detailed comments: 1. I would strongly encourage the authors to carry out a more in-depth theoretical analysis of the proposed framework (Refer to "Provably Fair Representations" McNamara et. al. 2017) 2. Experimental evaluation can also be strengthened. More specifically, analysis in Section 7.3 can be made more thorough. Instead of just sticking to one scenario where the decision maker is extremely biased (how are you quantifying this?), how about plotting a graph where x axis denotes the extent of bias in decision-maker's judgments and y-axis captures the model performance? 3. Overall, the paper is quite well written and is well motivated. There are however some typos and incorrect figure refernces (e.g., Section 7.2 first line, Figure 7.2, there is no such figure).
iclr_2018_rJvJXZb0W
AN EFFICIENT FRAMEWORK FOR LEARNING SENTENCE REPRESENTATIONS In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data. Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and the context in which it appears, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. We demonstrate that our sentence representations outperform state-of-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics, while achieving an order of magnitude speedup in training time.
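A minimal sketch of the classification reformulation: two encoders produce sentence vectors, candidate context sentences are scored by inner product, and a softmax over the batch treats the other sentences as contrastive candidates. The GRU encoder and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, vocab, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
    def forward(self, tokens):                 # tokens: (batch, seq_len)
        _, h = self.rnn(self.emb(tokens))
        return h[-1]                           # (batch, dim) sentence vector

f, g = Encoder(10000), Encoder(10000)          # sentence encoder and context encoder
sents = torch.randint(0, 10000, (32, 20))      # a batch of sentences
ctxs = torch.randint(0, 10000, (32, 20))       # their true context (e.g. next) sentences

scores = f(sents) @ g(ctxs).t()                # (32, 32) candidate scores
targets = torch.arange(32)                     # the diagonal holds the true context
loss = F.cross_entropy(scores, targets)        # other rows act as contrastive sentences
loss.backward()
```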
==Update== I appreciate the response, and continue to recommend acceptance. The evaluation metric used in this paper (SentEval) represents an important open problem in NLP—learning reusable sentence representations—and one of the problems in NLP best suited to presentation at IC*LR*. Because of this, I'm willing to excuse the fact that the paper is only moderately novel, in light of the impressive reported results. While I would appreciate a direct (same codebase, same data) comparison with some outside baselines, this paper meets or exceeds the standards for rigor that were established by previous published work in the area, and the existing results are sufficient to support some substantial conclusions. ========== This paper proposes an alternative formulation of Kiros's SkipThought objective for training general-purpose sentence encoder RNNs on unlabeled data. This formulation replaces the decoder in that model with a second encoder, and yields substantial improvements to both speed and model performance (as measured on downstream transfer tasks). The resulting model is, for the first time, reasonably competitive even with models that are trained end-to-end on labeled data for the downstream tasks (despite the requirement, imposed by the evaluation procedure, that only the top layer classifier be trained for the downstream tasks here), and is also competitive with models trained on large labeled datasets like SNLI. The idea is reasonable, the topic is important, and the results are quite strong. I recommend acceptance, with some caveats that I hope can be addressed. Concerns: A nearly identical idea to the core idea of this paper was proposed in an arXiv paper this spring, as a commenter below pointed out. That work has been out for long enough that I'd urge you to cite it, but it was not published and it reports results that are far less impressive than yours, so that omission isn't a major problem. I'd like to see more discussion of how you performed your evaluation on the downstream tasks. Did you use the SentEval tool from Conneau et al., as several related recent papers have? If not, does your evaluation procedure differ from theirs or Kiros's in any meaningful way? I'm also a bit uncomfortable that the paper doesn't directly compare with any baselines that use the exact same codebase, word representations, hyperparameter tuning procedure, etc.. I would be more comfortable with the results if, for example, the authors compared a low-dimensional version of their model with a low-dimensional version of SkipThought, trained in the *exact* same way, or if they implemented the core of their model within the SkipThought codebase and showed strong results there. Minor points: The headers in Table 1 don't make it all that clear which additions (vectors, UMBC) are cumulative with what other additions. This should be an easy fix. The use of the check-mark as an output in Figure 1 doesn't make much sense, since the task is not binary classification. "Instead of training a model to reconstruct the surface form of the input sentence or its neighbors, our formulation attempts to focus on the semantic aspects of sentences. The meaning of a sentence is the property that creates bonds between a sequence of sentences and makes it logically flow." 
– It's hard to pin down exactly what this means, but it sounds like you're making an empirical claim here: semantic information is more important than non-semantic sources of variation (syntactic/lexical/morphological factors) in predicting the flow of a text. Provide some evidence for this, or cut it. You make a similar claim later in the same section: "In figure 1(a) however, the reconstruction loss forces the model to predict local structural information about target sentences that may be irrelevant to its meaning (e.g., is governed by grammar rules)." This is a testable prediction: Are purely grammatical (non-semantic) variations in sentence form helpful for your task? I'd suspect that they are, at least in some cases, as they might give you clues as to style, dialect, or framing choices that the author made when writing that specific passage. "Our best BookCorpus model (MC-QT) trains in just under 11hrs, compared to skip-thought model’s training time of 2 weeks." – If you say this, you need to offer evidence that your model is faster. If you don't use the same hardware and low-level software (i.e., CuDNN), this comparison tells us nearly nothing. The small-scale replication of SkipThought described above should address this issue, if performed.
iclr_2018_S1NHaMW0b
This paper proposes a powerful regularization method named ShakeDrop regularization. ShakeDrop is inspired by Shake-Shake regularization, which decreases error rates by disturbing learning. While Shake-Shake can be applied only to ResNeXt, which has multiple branches, ShakeDrop can be applied not only to ResNeXt but also to ResNet, Wide ResNet and PyramidNet in a memory-efficient way. An important and interesting feature of ShakeDrop is that it strongly disturbs learning by multiplying even a negative factor to the output of a convolutional layer in the forward training pass. The effectiveness of ShakeDrop is confirmed by experiments on the CIFAR-10/100 and Tiny ImageNet datasets. ResDrop (Huang et al., 2016) and Shake-Shake (Gastaldi, 2017) are known to be effective regularization methods for ResNet and its improvements. Among them, Shake-Shake applied to ResNeXt is the one achieving the lowest error rates on the CIFAR-10/100 datasets (Gastaldi, 2017). Shake-Shake, however, has the following two drawbacks: (1) Shake-Shake can be applied only to multi-branch architectures (i.e., ResNeXt); (2) Shake-Shake is not memory efficient. Both drawbacks come from the same root: Shake-Shake requires two branches of residual blocks. If that is true, it is not difficult to conceive of a solution: a similar disturbance to Shake-Shake applied to a single residual block. It is, however, not trivial to realize it. The current paper addresses the problem of realizing a similar disturbance to Shake-Shake on a single residual block, and proposes a powerful regularization method named ShakeDrop regularization. While the proposed ShakeDrop is inspired by Shake-Shake, its mechanism of disturbing learning is completely different. ShakeDrop disturbs learning more strongly by multiplying even a negative factor to the output of a convolutional layer in the forward training pass. In addition, a factor different from that of the forward pass is multiplied in the backward training pass. As a byproduct, however, the learning process becomes unstable. Our solution to this problem is to stabilize the learning process by employing ResDrop in a manner different from its usual use. Based on experiments using various base network architectures, we reveal the conditions under which the proposed ShakeDrop works successfully.
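A hedged PyTorch-style sketch of the perturbation as described above: with probability p_keep the residual branch is kept intact; otherwise its output is scaled by alpha ~ U[-1, 1] in the forward pass and by beta ~ U[0, 1] in the backward pass, via the usual detach trick. The exact formula is inferred from the description here, not copied from the paper.

```python
import torch

def shake_drop(x, residual, p_keep, training=True):
    if not training:
        # expected coefficient: p_keep * 1 + (1 - p_keep) * E[alpha], with E[alpha] = 0
        return x + p_keep * residual
    b = torch.bernoulli(torch.tensor(p_keep, device=x.device))
    alpha = torch.empty(1, device=x.device).uniform_(-1.0, 1.0)
    beta = torch.empty(1, device=x.device).uniform_(0.0, 1.0)
    coef_fwd = b + (1 - b) * alpha          # used in the forward pass (can be negative)
    coef_bwd = b + (1 - b) * beta           # used when backpropagating
    # forward value equals coef_fwd * residual; the gradient flows as if the
    # coefficient were coef_bwd
    out = coef_bwd * residual + (coef_fwd - coef_bwd) * residual.detach()
    return x + out
```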
The paper proposes ShakeDrop regularization, which is essentially a combination of the PyramidDrop and Shake-Shake regularizations. The procedure essentially consists of weighting the residual branch with a random weight, in the style of Shake-Shake, where the weight is sampled from a mixture of a uniform distribution on [-1, 1] and a delta at 1, such that the mixture of those two distributions varies linearly with layer depth, in the style of PyramidDrop. In the style of Shake-Shake, a different random weight (in [0, 1]) is used for the backward pass. The most surprising part is that the forward weight can be negative, thus inverting the output of a convolution. Apparently the goal is to "disturb" the training, and the procedure yields state-of-the-art results on CIFAR-10/100.

Positives:
- Results: state-of-the-art on CIFAR-10/100.

Negatives:
1. No real motivation for why this should work. I guess the motivation is the mixture of the PyramidDrop and Shake-Shake motivations, but the main surprising part (the forward weight can be negative) is not motivated at all. There is a tiny bit of discussion at the very end, Section 4.4, where the authors examine the training loss (showing it's non-zero, so less overfitting) and the mean/variance of the gradients (increased). However, this doesn't really satisfy me: it is clear that more disturbance will cause this behaviour, but that doesn't mean any disturbance is good. E.g., if I always apply the negative weight and make my model weights go in the wrong direction, I'm pretty sure the training loss and gradients will be even larger, but it would be a bad idea to do so.
2. I'm concerned with the "weird trick that happens to work on CIFAR" line of work (not saying that this paper is the only offender): are these methods actually useful and generalizable to other problems, or are we overfitting on CIFAR and creating MNIST v2.0? It would be nice to demonstrate that this regularization works on at least one more problem, maybe ImageNet, though maybe regularization is not needed there; in that case, find one more dataset that needs regularization and test this on it.
3. The paper doesn't explain well what the problem with Shake-Shake and memory is. I see that the author of Shake-Shake has made a comment on this, and that makes a lot of sense, i.e., there is no memory issue: just because there are 2x branches doesn't mean Shake-Shake needs 2x memory, as it can use less capacity (= memory) to achieve the same performance. So it seems the main premise of the paper ("let's apply Shake-Shake to deeper models, but we need to come up with a modified method because Shake-Shake cannot be applied due to memory problems") is wrong.
4. The writing quality is quite bad; it is very hard to understand what the authors mean in parts of the text. E.g., in two places "it has almost the same residual block as Eqn. (1)" - how is it "almost"? Below equation 5, it is never specified that alpha and beta are sampled uniformly(?) from those ranges; one could think that alpha and beta are fixed constants that take a specific value in that range. There are also various grammatical errors such as "is expected to be powerful but slight memory overhead" or "which is introduced essence", etc.

Smaller comments:
- Isn't it surprising that alpha in [-1, 1] and beta in [0, 1] works well, but alpha in [0, 1] and beta in [-1, 1] works much worse? The two important cases, (alpha negative, beta positive) and (alpha positive, beta negative), seem to me like they are conceptually very similar.
- End of section 4.1, should it be b_l as p_L is a constant and b_l is what is sampled? - I don't like that exactly the same text is repeated 3 times (abstract, end of intro, end of 1.1) and in very short distance from each other - repeating the same words 3 times doesn't make the reader understand it better, slight rephrasing is much more beneficial. Overall: Good to know that this method sets the new state of the art on CIFAR-10/100, so as such it should be of interest to the community to be available online (arXiv). But with fairly little novelty (is a combination of 2 methods), very little insights of why this should work at all (especially the negative scaling coefficient which is the only extra thing that one learns from this paper, since the rest is a combination of PyramidDrop and Shake-Shake), no idea on whether the method would work outside of the CIFAR-world, and bad quality of the text - I don't think the manuscript is sufficiently good for ICLR.
iclr_2018_rJzIBfZAb
Published as a conference paper at ICLR 2018 TOWARDS DEEP LEARNING MODELS RESISTANT TO ADVERSARIAL ATTACKS Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against a well-defined class of adversaries. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest robustness against a first-order adversary as a natural security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.
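For concreteness, a standard sketch of the PGD adversary and the resulting adversarial-training step (projected gradient ascent on an l_inf ball around the clean input); step sizes and iteration counts are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)        # random start
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project to the l_inf ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # stay in the image range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    model.eval()
    x_adv = pgd_attack(model, x, y)          # inner maximization
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)  # outer minimization on the adversarial batch
    loss.backward()
    optimizer.step()
    return loss.item()
```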
This paper proposes to look at making neural networks resistant to adversarial loss through the framework of saddle-point problems. They show that, on MNIST, a PGD adversary fits this framework and allows the authors to train very robust models. They also show encouraging results for robust CIFAR-10 models, but with still much room for improvement. Finally, they suggest that PGD is an optimal first order adversary, and leads to optimal robustness against any first order attack. This paper is well written, brings new ideas and perfoms interesting experiments, but its claims are somewhat bothering me, considering that e.g. your CIFAR-10 results are somewhat underwhelming. All you've really proven is that PGD on MNIST seems to be the ultimate adversary. You contrast this to the fact that the optimization is non-convex, but we know for a fact that MNIST is fairly simple in that regime; iirc a linear classifier gets something like 91% accuracy on MNIST. So my guess is that the optimization problem on MNIST is in fact pretty convex and mostly respects the assumptions of Danskin's theorem, but not so much for CIFAR-10 (maybe even less so for e.g. ImageNet, which is what Kurakin et al. seem to find). Considering your CIFAR-10 results, I don't think anyone should "suggest that secure neural networks are within reach", because 1) there is still room for improvement 2) it's a safe bet that someone will always just come up with a better attack than whatever defense we have now. It has been this way in many disciplines (crypto, security) for centuries, I don't see why deep learning should be exempt. Simply saying "we believe that our robust models are significant progress on the defense side" was enough, because afaik you did improve on CIFAR-10's SOTA; don't overclaim. You make these kinds of claims in a few other places in this paper, please be careful with that. The contributions in your appendix are interesting. Appendix A somewhat confirms one of the postulates in Goodfellow et al. (2014): "The direction of perturbation, rather than the specific point in space, matters most. Space is not full of pockets of adversarial examples that finely tile the reals like the rational numbers". Appendix B and C are not extremely novel in my mind, but definitely add more evidence. Appendix E is quite nice since it gives an insight into what actually makes the model resistant to adversarial examples. Remarks: - The update for PGD should be using \nabla_{x_t} L(\theta,x_t,y), (rather than only \nabla_x)? - In table 2, attacking a with 20-step PGD is doing better than 7-step. When you say "other hyperparameter choices didn’t offer a significant decrease in accuracy", does that include the number of steps? If not why stop there? What happens for more steps? (or is it too computationally intensive?) - You only seem to consider adversarial examples created from your dataset + adv. noise. What about rubbish class examples? (e.g. rgb noise)
iclr_2018_BJE-4xW0W
CAUSALGAN: LEARNING CAUSAL IMPLICIT GENER- ATIVE MODELS WITH ADVERSARIAL TRAINING We introduce causal implicit generative models (CiGMs): models that allow sampling from not only the true observational but also the true interventional distributions. We show that adversarial training can be used to learn a CiGM, if the generator architecture is structured based on a given causal graph. We consider the application of conditional and interventional sampling of face images with binary feature labels, such as mustache, young. We preserve the dependency structure between the labels with a given causal graph. We devise a two-stage procedure for learning a CiGM over the labels and the image. First we train a CiGM over the binary labels using a Wasserstein GAN where the generator neural network is consistent with the causal graph between the labels. Later, we combine this with a conditional GAN to generate images conditioned on the binary labels. We propose two new conditional GAN architectures: CausalGAN and CausalBEGAN. We show that the optimal generator of the CausalGAN, given the labels, samples from the image distributions conditioned on these labels. The conditional GAN combined with a trained CiGM for the labels is then a CiGM over the labels and the generated image. We show that the proposed architectures can be used to sample from observational and interventional image distributions, even for interventions which do not naturally occur in the dataset.
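A hedged sketch of the causal-controller idea for the labels: one small network per label, wired according to a given causal graph and evaluated in topological order, so that a do-intervention can be simulated by clamping a node and ignoring its parents. The toy graph, the hard Bernoulli sampling, and the network sizes are illustrative; the paper trains this component adversarially (with a Wasserstein GAN) rather than with the naive sampling shown here.

```python
import torch
import torch.nn as nn

graph = {"Male": [], "Mustache": ["Male"], "Smiling": []}   # parents of each label
order = ["Male", "Smiling", "Mustache"]                     # a topological order

nets = nn.ModuleDict({
    v: nn.Sequential(nn.Linear(len(parents) + 4, 16), nn.ReLU(), nn.Linear(16, 1))
    for v, parents in graph.items()
})

def sample_labels(batch=8, interventions=None):
    interventions = interventions or {}
    values = {}
    for v in order:
        if v in interventions:                               # do(v = c): ignore parents
            values[v] = torch.full((batch, 1), float(interventions[v]))
            continue
        z = torch.randn(batch, 4)                            # exogenous noise for v
        parents = [values[p] for p in graph[v]]
        logits = nets[v](torch.cat(parents + [z], dim=1))
        values[v] = torch.bernoulli(torch.sigmoid(logits))
    return values

obs = sample_labels()                                        # observational samples
do_mustache = sample_labels(interventions={"Mustache": 1})   # interventional samples
```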
In their paper "CausalGAN: Learning Causal implicit Generative Models with adv. training" the authors address the following issue: Given a causal structure between "labels" of an image (e.g. gender, mustache, smiling, etc.), one tries to learn a causal model between these variables and the image itself from observational data. Here, the image is considered to be an effect of all the labels. Such a causal model allows us to not only sample from conditional observational distributions, but also from intervention distributions. These tasks are clearly different, as nicely shown by the authors' example of "do(mustache = 1)" versus "given mustache = 1" (a sample from the latter distribution contains only men). The paper does not aim at learning causal structure from data (as clearly stated by the authors). The example images look convincing to me. I like the idea of this paper. IMO, it is a very nice, clean, and useful approach of combining causality and the expressive power of neural networks. The paper has the potential of conveying the message of causality into the ICLR community and thereby trigger other ideas in that area. For me, it is not easy to judge the novelty of the approach, but the authors list related works, none of which seems to solve the same task. The presentation of the paper, however, should be improved significantly before publication. (In fact, because of the presentation of the paper, I was hesitating whether I should suggest acceptance.) Below, I give some examples (and suggest improvements), but there are many others. There is a risk that in its current state the paper will not generate much impact, and that would be a pity. I would therefore like to ask the authors to put a lot of effort into improving the presentation of the paper. - I believe that I understand the authors' intention of the caption of Fig. 1, but "samples outside the dataset" is a misleading formulation. Any reasonable model does more than just reproducing the data points. I find the argumentation the authors give in Figure 6 much sharper. Even better: add the expression "P(male = 1 | mustache = 1) = 1". Then, the difference is crystal clear. - The difference between Figures 1, 4, and 6 could be clarified. - The list of "prior work on learning causal graphs" seems a bit random. I would add Spirtes et al 2000, Heckermann et al 1999, Peters et al 2016, and Chickering et al 2002. - Male -> Bald does not make much sense causally (it should be Gender -> Baldness)... Aha, now I understand: The authors seem to switch between "Gender" and "Male" being random variables. Make this consistent, please. - There are many typos and comma mistakes. - I would introduce the do-notation much earlier. The paragraph on p. 2 is now written without do-notation ("intervening Mustache = 1 would not change the distribution"). But this way, the statements are at least very confusing (which one is "the distribution"?). - I would get rid of the concept of CiGM. To me, it seems that this is a causal model with a neural network (NN) modeling the functions that appear in the SCM. This means, it's "just" using NNs as a model class. Instead, one could just say that one wants to learn a causal model and the proposed procedure is called CausalGAN? (This would also clarify the paper's contribution.) - many realizations = one sample (not samples), I think. - Fig 1: which model is used to generate the conditional sample? - The notation changes between E and N and Z for the noises. 
I believe that N is supposed to be the noise in the SCM, but then maybe it should not be called E at the beginning. - I believe Prop 1 (as it is stated) is wrong. For a reference, see Peters, Janzing, Scholkopf: Elements of Causal Inference: Foundations and Learning Algorithms (available as pdf), Definition 6.32. One requires the strict positivity of the densities (to properly define conditionals). Also, I believe the Z should be a vector, not a set. - Below eq. (1), I am not sure what the V in P_V refers to. - The concept of data probability density function seems weird to me. Either it is referring to the fitted model, then it's a bad name, or it's an empirical distribution, then there is no pdf, but a pmf. - Many subscripts are used without explanation. r -> real? g -> generating? G -> generating? Sometimes, no subscripts are used (e.g., Fig 4 or figures in Sec. 8.13) - I would get rid of Theorem 1 and explain it in words for the following reasons. (1) What is an "informal" theorem? (2) It refers to equations appearing much later. (3) It is stated again later as Theorem 2. - Also: the name P_g does not appear anywhere else in the theorem, I think. - Furthermore, I would reformulate the theorem. The main point is that the intervention distributions are correct (this fact seems to be there, but is "hidden" in the CIGN notation in the corollary). - Re. the formulation in Thm 2: is it clear that there is a unique global optimum (my intuition would say there could be several), thus: better write "_a_ global minimum"? - Fig. 3 was not very clear to me. I suggest to put more information into its caption. - In particular, why is the dataset not used for the causal controller? I thought, that it should model the joint (empirical) distribution over the labels, and this is part of the dataset. Am I missing sth? - IMO, the structure of the paper can be improved. Currently, Section 3 is called "Background" which does not say much. Section 4 contains CIGMs, Section 5 Causal GANs, 5.1. Causal Controller, 5.2. CausalGAN, 5.2.1. Architecture (which the causal controller is part of) etc. An alternative could be: Sec 1: Introduction Sec 1.1: Related Work Sec 2: Causal Models Sec 2.1: Causal Models using Generative Models (old: CIGM) Sec 3: Causal GANs Sec 3.1: Architecture (including controller) Sec 3.2: loss functions ... Sec 4: Empricial Results (old: Sec. 6: Results) - "Causal Graph 1" is not a proper reference (it's Fig 23 I guess). Also, it is quite important for the paper, I think it should be in the main part. - There are different references to the "Appendix", "Suppl. Material", or "Sec. 8" -- please be consistent (and try to avoid ambiguity by being more specific -- the appendix contains ~20 pages). Have I missed the reference to the proof of Thm 2? - 8.1. contains copy-paste from the main text. - "proposition from Goodfellow" -> please be more precise - What is Fig 8 used for? Is it not sufficient to have and discuss Fig 23? - IMO, Section 5.3. should be rewritten (also, maybe include another reference for BEGAN). - There is a reference to Lemma 15. However, I have not found that lemma. - I think it's quite interesting that the framework seems to also allow answering counterfactual questions for realizations that have been sampled from the model, see Fig 16. This is the case since for the generated realizations, the noise values are known. The authors may think about including a comment on that issue. 
- Since this paper's main proposal is a methodological one, I would make the publication conditional on the fact that code is released.
iclr_2018_SJ8M9yup-
Auto-Encoders are unsupervised models that aim to learn patterns from observed data by minimizing a reconstruction cost. The useful representations learned are often found to be sparse and distributed. On the other hand, compressed sensing and sparse coding assume a data generating process, where the observed data is generated from some true latent signal source, and try to recover the corresponding signal from measurements. Looking at auto-encoders from this signal recovery perspective enables us to have a more coherent view of these techniques. In this paper, in particular, we show that the true hidden representation can be approximately recovered if the weight matrices are highly incoherent with unit ℓ2 row length and the bias vectors take values (approximately) equal to the negative of the data mean. The recovery also becomes more and more accurate as the sparsity of the hidden signals increases. Additionally, we also empirically demonstrate that auto-encoders are capable of recovering the data generating dictionary when only data samples are given.
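A small numerical illustration of the recovery claim under stated assumptions (ReLU encoder, random Gaussian rows normalized to unit ℓ2 norm, data generated as x = Wᵀh + mean, and a small denoising threshold): because incoherent unit-norm rows make W Wᵀ close to the identity, a single encoder layer approximately recovers a sparse non-negative h.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 1024, 2048, 5             # data dim, hidden dim, sparsity

W = rng.normal(size=(n, m))
W /= np.linalg.norm(W, axis=1, keepdims=True)    # unit l2 rows -> low coherence

h = np.zeros(n)
h[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 1.0, k)   # sparse true signal
mean = rng.normal(size=m)
x = W.T @ h + mean                               # one observed data point

b = -W @ mean                                    # bias built from the (negative) data mean
h_hat = np.maximum(0.0, W @ x + b)               # ReLU encoder output: W W^T h approx h
h_hat[h_hat < 0.3] = 0.0                         # small threshold to denoise the estimate

print("support recovered:", np.array_equal(h_hat > 0, h > 0))
print("relative l1 error:", np.abs(h - h_hat).sum() / np.abs(h).sum())
```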
This paper proposes to analyze auto-encoders under sparsity constraints on an underlying signal to be recovered. Based on concentration inequalities, the reconstruction provided for a simple class of functions is guaranteed to be accurate in l1 norm with high probability. The proof techniques are classical, but the results seem novel as far as I know. As an open question, could the results be given for other lp norms, in particular for the infinity norm? Indeed, this is a privileged norm for support recovery.

Presentation issues:
- section should be Section when stating for instance "section 1". Idem for eq, equation, assumption...
- bold fonts for vectors are randomly used: some care should be given to harmonizing symbol fonts.
- equations should be cited with brackets.

References issues:
- harmonize citations: if you add the first name for some authors, add it for all references: why write Roland Makhzani and J. Wright?
- Candes -> Cand\`es
- Consider citing "Sparse approximate solutions to linear systems", Natarajan 1995, when mentioning Amaldi and Kann 1998.

Specific comments:
page 1:
- hasn't -> has not.
page 2:
- "activation function": at this stage s_e and s_d are just functions. What does "activation" refer to? Also, the space they act on should be clarified. Idem for b_e and b_d.
- "the identity of h in eq. 1 is only well defined in the presence of l1 regularization due to the over-completeness of the dictionary": this is implicitly stating the uniqueness of the Lasso. Note that it is well known that there are cases where the Lasso is non-unique. Please, clarify your statement accordingly.
- for simplicity b_d could be removed here.
- in (4) it would be more natural to write f_j(h_j) instead of f(h_j)
- "has is that to be bounded" -> is boundedness?
- what is l_max_j here? Moreover, the bold letters seem to represent vectors, but this should be stated explicitly somewhere.
page 3:
- what is the precise meaning of "distributed" when referring to a representation?
- In Remark 1: the font has changed weirdly for W and h.
- "two class" -> two classes
- Definition 1: again, what is a precise definition of an activation function?
- "if we set": bold issue.
- b should be b_e in Theorem 1, right? Also, please recall the definition of the sigmoid function here. Moreover, l_max and mu_h seem useless in this theorem... why refer to them?
- "if the rows of the weight matrix is" -> if the rows of the weight matrix are
page 4:
- Proposition 1 could be stated as a theorem and Th. 1 as a corollary (with e=0). The same is true for Proposition 2, I suspect.
- Again, the influence of l_max and mu_h is none here...
- Please, provide the definition of the ReLU function here. Is this just x -> x_+ ?
page 6:
- R^+m -> font issue again.
- "are maximally incoherent": what is the precise meaning of this statement?
- what is the motivation for Theorem 3? This should be discussed.
- De-noising -> de-noising
- the discussion after (15) should be made more precise.
page 7:
- Figures 1 and 2 should be postponed to page 8.
- in Eq. (16) one needs to know E_h(x) and E_h_i(h_i), but I suspect these quantities are usually unknown to the practitioner. Can the authors comment on that?
page 8:
- "the recovery is denoised through thresholding": where is this step analyzed?
page 9:
- figure 3: sparseness -> sparsity; also, what is the activation function used here?
- "are then generate"->are then generated - "by greedily select"->by greedily selecting - "the the" - "and thus pre-process"-> and thus pre-processing Supplementary: page 1: - please define \sigma, and its simple properties used along the proof. page 2: - g should be g_j (in eq 27 - > 31) - overall this proof relies on ingredients such as the one used for Hoeffding's inequality. Most ingredients could be taken from standard tools on concentration (see for instance Boucheron, Lugosi, Massart: "Concentration Inequalities: A Nonasymptotic Theory of Independence", 2013). Moreover, some elements should be factorized as they are shared among the next proofs. This should reduce the size of the supplementary dramatically. page 7: - Eq. (99): it should be reminded that W_ii=1 here. - the upper bound used on \mu to get equation 105 seems to be in the wrong order.
iclr_2018_ryzm6BATZ
We propose a new, multi-component energy function for energy-based Generative Adversarial Networks (GANs) based on methods from the image quality assessment literature. Our approach expands on the Boundary Equilibrium Generative Adversarial Network (BEGAN) by outlining some of the shortcomings of the original energy and loss functions. We address these shortcomings by incorporating an ℓ1 score, the Gradient Magnitude Similarity score, and a chrominance score into the new energy function. We then provide a set of systematic experiments that explore its hyper-parameters. We show that each of the energy function's components is able to represent a slightly different set of features, which require their own evaluation criteria to assess whether they have been adequately learned. We show that models using the new energy function are able to produce better image representations than the BEGAN model in predicted ways.
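For reference, a small sketch of the Gradient Magnitude Similarity component as it appears in the image-quality-assessment literature: gradient magnitudes from Prewitt filters compared pointwise. The constant c and the choice of reduction (mean here) are assumptions one would need to match to the paper.

```python
import numpy as np
from scipy.ndimage import convolve

PREWITT_X = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0
PREWITT_Y = PREWITT_X.T

def gradient_magnitude(img):
    gx = convolve(img, PREWITT_X, mode="nearest")
    gy = convolve(img, PREWITT_Y, mode="nearest")
    return np.sqrt(gx ** 2 + gy ** 2)

def gms_score(ref, rec, c=0.0026):
    m_ref, m_rec = gradient_magnitude(ref), gradient_magnitude(rec)
    gms_map = (2 * m_ref * m_rec + c) / (m_ref ** 2 + m_rec ** 2 + c)
    return gms_map.mean()            # 1.0 means identical edge structure

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(gms_score(img, img), gms_score(img, rng.random((64, 64))))
```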
This paper proposes a new energy function in the BEGAN (Boundary Equilibrium GAN) framework, including an ℓ1 score, a Gradient Magnitude Similarity score, and a chrominance score, which are motivated by and borrowed from image quality assessment techniques. These energy components in the objective function allow learning of different sets of features and determination of whether the features are adequately represented. Experiments using different hyper-parameters of the energy function, as well as visual inspections of the quality of the learned images, are presented. It appears to me that the novelty of the paper is limited, in that the main approach is built on the existing BEGAN framework with certain modifications. For example, the new energy function in equation (4) largely achieves a goal similar to the original energy (1) proposed by Zhao et al. (2016), except that the margin loss in (1) is changed to a re-weighted linear loss, where the dynamic weighting scheme of k_t is borrowed from the work of Berthelot et al. (2017). It is not very clear why making such changes in the energy would supposedly make the results better, and no further discussion is provided. On the other hand, the several energy components introduced are simply choices of similarity measures motivated by image quality assessment, and there are probably many more in the literature whose application cannot be deemed a significant contribution to either the theory or the algorithm design of GANs. Many results in the experimental section rely on visual evaluation, such as in Figure 4 or 5; from these figures, it is difficult to clearly pick out the winning images. In Figure 5, for a fair evaluation of the performance of model interpolations, the same human model should be used for the competing methods, instead of applying different human models and different interpolation tasks in different methods.
iclr_2018_BkQCGzZ0-
Recurrent models for sequences have been recently successful at many tasks, especially for language modeling and machine translation. Nevertheless, it remains challenging to extract good representations from these models. For instance, even though language has a clear hierarchical structure going from characters through words to sentences, it is not apparent in current language models. We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space. In order to propagate gradients through this discrete representation we introduce an improved semantic hashing technique. We show that this technique performs well on a newly proposed quantitative efficiency measure. We also analyze latent codes produced by the model, showing how they correspond to words and phrases. Finally, we present an application of the autoencoder-augmented model to generating diverse translations.
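A hedged sketch of an improved-semantic-hashing style discretization: Gaussian noise on the pre-activation, a saturating sigmoid, and a straight-through binarization used part of the time. The exact constants and the mixing probability are assumptions.

```python
import torch

def saturating_sigmoid(x):
    return torch.clamp(1.2 * torch.sigmoid(x) - 0.1, 0.0, 1.0)

def discretize(logits, training=True):
    if training:
        logits = logits + torch.randn_like(logits)       # noise pushes values to saturation
    soft = saturating_sigmoid(logits)
    hard = (soft > 0.5).float()
    # straight-through: forward uses `hard`, backward uses the gradient of `soft`
    hard_st = soft + (hard - soft).detach()
    if not training:
        return hard
    use_hard = (torch.rand(logits.shape[0], 1) < 0.5).float()   # mix soft and hard codes
    return use_hard * hard_st + (1 - use_hard) * soft

codes = discretize(torch.randn(4, 16, requires_grad=True))
codes.sum().backward()          # gradients still propagate despite the discretization
```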
This is an interesting paper focusing on building discrete representations of sequences with an autoencoder. However, the experiments are too weak to demonstrate the effectiveness of using discrete representations. The design of the experiments on language modeling is problematic. There are a few interesting points about discretizing the representations with the saturating sigmoid and Gumbel-softmax, but the lack of comparisons to benchmarks is a critical defect of this paper. Generally, continuous vector representations are more powerful than discrete ones, but discreteness corresponds to some inductive biases that might help the learning of deep neural networks, which is the appealing part of discrete representations, especially stochastic discrete representations. However, I didn't see the intuitions behind the model that would result in its superiority over the continuous counterpart. The proposal of DSAE might help evaluate the usage of the 'autoencoding function' c(s), but it is certainly not enough to convince people. How is the performance if c(s) is replaced with representations obtained from an autoencoder, a variational autoencoder, or simply the sentence vectors produced by a language model? The qualitative evaluation in 'Deciphering the Latent Code' is not enough either. In addition, the language model part doesn't sound correct, because the model cheats by seeing the future before predicting the words autoregressively. One suggestion is to change the framework to a variational autoencoder; otherwise anything related to perplexity is not correct in this case. Overall, this paper is more suitable for the workshop track. It also needs a lot more study of related work.
iclr_2018_HyXBcYg0b
Graph-structured data such as social networks, functional brain networks, gene regulatory networks, and communication networks have brought interest in generalizing deep learning techniques to graph domains. In this paper, we are interested in designing neural networks for graphs with variable length in order to solve learning problems such as vertex classification, graph classification, graph regression, and graph generative tasks. Most existing works have focused on recurrent neural networks (RNNs) to learn meaningful representations of graphs, and more recently new convolutional neural networks (ConvNets) have been introduced. In this work, we want to rigorously compare these two fundamental families of architectures for solving graph learning tasks. We review existing graph RNN and ConvNet architectures, and propose natural extensions of LSTM and ConvNet to graphs with arbitrary size. Then, we design a set of analytically controlled experiments on two basic graph problems, i.e. subgraph matching and graph clustering, to test the different architectures. Numerical results show that the proposed graph ConvNets are 3-17% more accurate and 1.5-4x faster than graph RNNs. Graph ConvNets are also 36% more accurate than variational (non-learning) techniques. Finally, the most effective graph ConvNet architecture uses gated edges and residuality. Residuality plays an essential role in learning multi-layer architectures, providing a 10% gain in performance.
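A minimal sketch of a gated graph ConvNet layer with edge gates and a residual connection, in the spirit of the architecture described above; the exact parametrization in the paper may differ.

```python
# h_i' = h_i + ReLU(U h_i + sum_j eta_ij * V h_j),  eta_ij = sigmoid(A h_i + B h_j)
import torch
import torch.nn as nn

class GatedGraphConv(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.U, self.V = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.A, self.B = nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h: (n_nodes, dim); adj: (n_nodes, n_nodes) 0/1 adjacency
        eta = torch.sigmoid(self.A(h)[:, None, :] + self.B(h)[None, :, :])  # (n, n, dim)
        eta = eta * adj[:, :, None]                       # zero out non-edges
        msg = (eta * self.V(h)[None, :, :]).sum(dim=1)    # gated sum over neighbours j
        return h + torch.relu(self.U(h) + msg)            # residual connection

layer = GatedGraphConv(16)
h = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
out = layer(h, adj)                                       # (5, 16)
```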
The paper proposes an adaptation of existing Graph ConvNets and evaluates this formulation on several existing benchmarks of the graph neural network community. In particular, a tree-structured LSTM is taken and modified. The authors describe this as adapting it to general graphs and stacking, followed by adding edge gates and residuality. My biggest concern is novelty, as the modifications are minor. In particular, the formulation can be seen in a different way. As I see it, instead of adapting Tree LSTMs to arbitrary graphs, it can be seen as taking the original formulation by Scarselli and replacing the RNN by a gated version, i.e. adding the known LSTM gates (input, output, forget gate). This is a minor modification. Stacking and residuality are now standard operations in deep learning, and edge gates have also already been introduced in the literature, as described in the paper. A second concern is the presentation of the paper, which can be confusing at some points. A major example is the mathematical description of the methods. When reading the description as given, one would actually infer that Graph ConvNets and Graph RNNs are the same thing, which can be seen from the fact that equations (1) and (6) are equivalent. Another example: after (2), the important point to raise is the difference to classical (sequential) RNNs, namely the fact that the dependence graph of the model is not a DAG anymore, which introduces cyclic dependencies. Generally, a clear introduction of the problem is also missing. What are the inputs, what are the outputs, what kind of problems should be solved? The update equations for the hidden states are given for all models, but how is the output calculated given the hidden states from a variable number of nodes of an irregular graph? The model has been evaluated on standard datasets, with performance that seems to be on par with, or a slight edge over, prior work, which could probably be attributed to the newly introduced residuality.
A couple of details:
- the length of a graph is not defined. The size of the set of nodes might be meant.
- at the beginning of section 2.1 I do not understand the reference to word prediction and natural language processing. RNNs are not restricted to NLP and I think there is no need to introduce an application at this point.
- It is unclear what the following sentence means: "ConvNets are more pruned to deep networks than RNNs".
- What are "heterogeneous graph domains"?
iclr_2018_H113pWZRb
Convolution acts as a local feature extractor in convolutional neural networks (CNNs). However, the convolution operation is not applicable when the input data is supported on an irregular graph such as with social networks, citation networks, or knowledge graphs. This paper proposes the topology adaptive graph convolutional network (TAGCN), a novel graph convolutional network that generalizes CNN architectures to graph-structured data and provides a systematic way to design a set of fixed-size learnable filters to perform convolutions on graphs. The topologies of these filters are adaptive to the topology of the graph when they scan the graph to perform convolution, replacing the square filter for grid-structured data in traditional CNNs. The outputs are the weighted sum of these filters' outputs, which extract both vertex features and the strength of correlation between vertices. It can be used with both directed and undirected graphs. The proposed TAGCN not only inherits the properties of convolutions in CNNs for grid-structured data, but is also consistent with convolution as defined in graph signal processing. Further, as no approximation to the convolution is needed, TAGCN exhibits better performance than existing graph-convolution-approximation methods on a number of data sets. As only polynomials of degree two of the adjacency matrix are used, TAGCN is also computationally simpler than other recent methods.
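A minimal sketch of a TAGCN-style layer: the output is a learned combination of the input features multiplied by powers of a normalized adjacency matrix up to degree K (K = 2 in the paper). The normalization shown is an assumption.

```python
import torch
import torch.nn as nn

class TAGConv(nn.Module):
    def __init__(self, in_dim, out_dim, K=2):
        super().__init__()
        self.weights = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=(k == 0))
                                     for k in range(K + 1))

    def forward(self, x, adj):
        # adj: normalized adjacency, e.g. D^{-1/2} A D^{-1/2}
        out, prop = 0, x
        for lin in self.weights:
            out = out + lin(prop)      # learnable G_k applied to A^k X
            prop = adj @ prop          # next power of the adjacency
        return torch.relu(out)

n, d = 6, 8
A = (torch.rand(n, n) > 0.6).float()
A = ((A + A.t()) > 0).float()
A.fill_diagonal_(1.0)                                 # symmetric with self-loops
deg = A.sum(1)
A_norm = A / torch.sqrt(deg[:, None] * deg[None, :])  # D^{-1/2} A D^{-1/2}
layer = TAGConv(d, 16, K=2)
y = layer(torch.randn(n, d), A_norm)                  # (6, 16)
```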
The paper introduces Topology Adaptive GCN (TAGCN) to generalize convolutional networks to graph-structured data. I find the paper interesting but not very clearly written in some sections; for instance, I would better explain what the main contribution is and devote some more text to the motivation. Why is the proposed approach better than the previously published ones, and when is there an advantage in using it? The main contribution seems to be the use of the "graph shift" operator from Sandryhaila and Moura (2013), which closely resembles the one from Shuman et al. (2013). It is actually not very well explained what the main difference is. Equation (2) shows that the learnable filters g are operating on the k-th power of the normalized adjacency matrix A, so when K=1 this equals the classical GCN from T. Kipf et al. By using K > 1 the method is able to leverage information at a farther distance from the reference node. Section 2.2 requires some polishing, as I found it hard to follow the main story the authors wanted to tell. The definition of the weight of a path seems disconnected from the main text; isn't A^k kind of a diffusion operator or random walk? This makes me wonder what the performance of GCN would be when the k-th power of the adjacency is used. I liked Section 3; however, while it is true that all methods differ in the way they do the filtering, they also differ in the way the input graph is represented (use of the adjacency or not). Experiments are performed on the usual reference benchmarks for the task and show sensible improvements with respect to the state of the art. TAGCN with K=2 has twice the number of parameters of GCN, which makes the comparison not entirely fair. Did the authors experiment with a comparable architecture? Also, how about using A^2 in GCN, or making two GCNs and concatenating them in feature space to make the representational power comparable? It is also known that these benchmarks, while widely used, are small and result in high-variance results. The authors should report statistics over multiple runs. Given the systematic parameter search with reference to the actual validation (or test?) set, I am afraid there could be some overfitting. It is quite easy to probe the test set to get the best performance on these benchmarks. As a minor remark, please make the figures readable also in BW. Overall I found the paper interesting but also not very clear at pointing out the major contribution and the motivation behind it. At the risk of being too reductionist: it looks like learning a set of filters on different coordinate systems given by the various powers of A. GCN looks at the nearest neighbors, and the paper shows that also using the 2-ring improves performance.
iclr_2018_ry4S90l0b
Since the creation of Generative Adversarial Networks (GANs), much work has been done to improve their training stability, their generated image quality, and their range of application, but nearly none of it has explored their self-training potential. Self-training was used before the advent of deep learning in order to allow training on limited labelled training data, and has shown impressive results in semi-supervised learning. In this work, we combine these two ideas and make GANs self-trainable for semi-supervised learning tasks by exploiting their infinite data generation potential. Results show that using even the simplest form of self-training yields an improvement. We also show results for a more complex self-training scheme that performs at least as well as the basic self-training scheme but with significantly less data augmentation.
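A sketch of the simplest self-training scheme the abstract alludes to: the GAN generates samples, the current classifier pseudo-labels the confident ones, and these are added to the labeled pool. The confidence threshold and the generator interface (the z_dim attribute) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def self_training_round(generator, classifier, optimizer, labeled_x, labeled_y,
                        n_generated=512, confidence=0.95):
    # 1. Generate synthetic samples from the GAN.
    with torch.no_grad():
        z = torch.randn(n_generated, generator.z_dim)
        fake_x = generator(z)
        probs = F.softmax(classifier(fake_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
    keep = conf > confidence                    # 2. Keep only confident pseudo-labels.
    x = torch.cat([labeled_x, fake_x[keep]])
    y = torch.cat([labeled_y, pseudo_y[keep]])
    # 3. Retrain the classifier on real + pseudo-labeled data.
    optimizer.zero_grad()
    loss = F.cross_entropy(classifier(x), y)
    loss.backward()
    optimizer.step()
    return keep.sum().item(), loss.item()
```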
The paper proposes to combine self-learning and GANs. The basic idea is to first use a GAN to generate data, then infer pseudo-labels, and finally use the pseudo-labeled data to enhance the learning process. Experiments are conducted on one image dataset. The paper contains several deficiencies.
1. The experiments are weak. Firstly, only one dataset is employed for evaluation, which makes it hard to justify the applicability of the proposed approach. Secondly, the compared methods are too few and do not include many state-of-the-art SSL methods such as graph-based approaches. Thirdly, in these cases, the results in Table 1 contain evident redundancy. Fourthly, the performance improvement over the compared methods is not significant, and the result is based on 3 splits of the dataset, which is obviously not convincing and involves large variance.
2. The paper claims that 'when paired with deep, semi-supervised learning has had a few success'. I do not agree with such a claim. There are many successful SSL deep learning studies on embedding. They are not included in the discussion.
3. The layout of the paper could be improved. For example, there are too many empty spaces in the paper.
4. Overall, the proposed approach is technically a bit straightforward and does not bring much novelty.
5. The format of the references is not consistent. For example, some conferences have short names while some do not.
iclr_2018_ryOG3fWCW
The availability of general-purpose reference and benchmark datasets such as ImageNet has spurred the development of general-purpose popular reference model architectures and pre-trained weights. However, in practice, neural networks are often employed to perform specific, more restrictive tasks that are narrower in scope and complexity. Thus, simply fine-tuning or transfer learning from a general-purpose network inherits a large computational cost that may not be necessary for a given task. In this work, we investigate the potential for model specialization, or reducing a model's computational footprint by leveraging task-specific knowledge, such as a restricted inference distribution. We study three methods for model specialization, (1) task-aware distillation, (2) task-aware pruning, and (3) specialized model cascades, and evaluate their performance on a range of classification tasks. Moreover, for the first time, we investigate how these techniques complement one another, enabling up to 5× speedups with no loss in accuracy and 9.8× speedups while remaining within 2.5% of a highly accurate ResNet on specialized image classification tasks. These results suggest that simple and easy-to-implement specialization procedures may benefit a large number of practical applications in which the representational power of general-purpose networks need not be inherited.
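A sketch of the specialized-model cascade: a cheap task-specific model answers when it is confident and escalates the remaining inputs to the expensive reference model; the threshold would be tuned on held-out data.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def cascade_predict(x, specialized, reference, threshold=0.9):
    probs = F.softmax(specialized(x), dim=1)
    conf, pred = probs.max(dim=1)
    passthrough = conf < threshold                 # low-confidence inputs escalate
    if passthrough.any():
        heavy_pred = reference(x[passthrough]).argmax(dim=1)
        pred[passthrough] = heavy_pred
    return pred, passthrough.float().mean()        # predictions + fraction escalated
```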
The authors review and evaluate several empirical methods to create faster versions of big neural nets for vision without sacrificing accuracy. They show, using the ResNet architecture, that distillation, pruning, and cascades are complementary and can yield pretty nice speedups. This is a great idea and could be a strong paper, but it's really hard to glean useful recommendations from this for several reasons: - The writing of the paper makes it hard to understand exactly what's being compared and evaluated. For a paper like this it's really crucial to be precise. When the authors say "specialization" or "specialized model", they sometimes mean distillation, sometimes filter pruning, and sometimes cascades. The distinction of "task-aware" also seems arbitrary to me and obfuscates the contribution of the paper as well. As far as I can tell, the technique is exactly the same; all that is changing is a slight modification. It's not like any of the intuitions or objectives are changing, so adding this new terminology just complicates things. For example, just say "We distill a parent model to a child model with a subset of the labels." - In terms of substance, the experiments don't really add much value in terms of general lessons. For example, the Cat/Dog from ImageNet distillation only works if the target labels are exactly a subset of the original. Obviously if the parent model was overcomplete before, it is certainly overcomplete now. The proposed cascade method is also fairly trivial -- a cheap distilled model backs off to the reference model. Why not train the whole cascade end-to-end? What about multiple levels of cascades? The only useful conclusion I can draw from the experiments is that (1) distillation still works, (2) cascades also still work, and (3) pruning doesn't seem that useful in comparison. Training a cascade also involves a bunch of non-trivial design choices which are largely ignored -- how to set the pass-through criterion, how to train the model, etc. (a rough sketch of such a cascade is given below). - Nit: where are the blue squares (distill only) in Figure 4? Shouldn't those be the fastest methods (aside from pruning)? An ideal story for a paper like this would be: here are some complementary ideas that we can combine in non-obvious ways for superlinear benefits, e.g. it turns out that by distilling into a cascade in some end-to-end fashion, you can get much better accuracy vs. speed trade-offs. Instead this paper is a grab-bag of tricks. Such a paper can also provide value, but to do that right, the tricks need to be obvious *in retrospect only* and/or the experiments need to show a lot of precise practical lessons. All in all this paper reads like a tech report but not a conference publication.
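For concreteness, here is a minimal sketch of the kind of confidence-thresholded cascade referred to above, in which a cheap specialized model backs off to the large reference model. This is only an illustration, not the authors' implementation; the model stand-ins, the softmax-confidence criterion, and the threshold value are assumptions.

```python
import numpy as np

def cascade_predict(x, small_model, big_model, threshold=0.9):
    """Route an input through a cheap model first; fall back to the expensive
    reference model when the cheap model is not confident. `small_model` and
    `big_model` are any callables returning class probabilities (hypothetical
    stand-ins for a distilled net and a full ResNet)."""
    probs = small_model(x)
    if np.max(probs) >= threshold:      # confident: accept the cheap prediction
        return int(np.argmax(probs)), "small"
    probs = big_model(x)                # pass-through: pay the full cost
    return int(np.argmax(probs)), "big"

# Toy usage with dummy models.
small_model = lambda x: np.array([0.6, 0.3, 0.1])
big_model = lambda x: np.array([0.1, 0.85, 0.05])
print(cascade_predict(np.zeros(8), small_model, big_model, threshold=0.9))
```

The threshold is exactly the "pass-through" design choice the review mentions: raising it sends more inputs to the big model, trading speed for accuracy.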
iclr_2018_ryY4RhkCZ
Building robust online content recommendation systems requires learning complex interactions between user preferences and content features. The field has evolved rapidly in recent years from traditional multi-armed bandit and collaborative filtering techniques, with new methods integrating Deep Learning models that enable capturing non-linear feature interactions. Despite progress, the dynamic nature of online recommendations still poses great challenges, such as finding the delicate balance between exploration and exploitation. In this paper we provide a novel method, Deep Density Networks (DDN), which deconvolves measurement and data uncertainties and predicts the probability density of CTR (Click Through Rate), enabling us to perform more efficient exploration of the feature space. We show the usefulness of using DDN online in a real-world content recommendation system that serves billions of recommendations per day, and present online and offline results to evaluate the benefit of using DDN.
The paper addresses a very interesting question about handling the dynamics of a recommender system at scale (here, for linking to articles). The defended idea is to use the context to fit a mixture of Gaussians with a NN and to assume that the noise can be additively split into two terms: one depends only on the number of observations of the given context and the average reward in this situation, and the second term is the noise. This is equivalent to separating a local estimation error from the noise. The idea is interesting but maybe not pushed far enough in the paper: * At fixed context x, the assumption that the error is a function of the average reward u and of the number of displays r of the context could be a little bit more supported (this is a variance explanation that could be tested statistically, or the shape of this 2D function f(u,r) could be plotted to exhibit its regularity). * None of the experiments is done on public data, which makes the paper impossible to reproduce. * The proposed baselines are not really the state of the art (Factorization Machines, GBDT features, ...) and the loss used is MSE, which is strange in the context of CTR prediction (logistic loss would be a more natural choice). * I'm not confident in the proposed surrogate metrics. In the paper, the work of Lihong Li et al. on offline evaluation of contextual bandits is mentioned and considered infeasible here because of the renewal of the set of recommendations. Actually this work can be adapted to handle these situations (possibly requiring bootstrapping if the set is regenerating too fast). Also note that the Yahoo Research R6a - R6b datasets were used in the ICML'12 Exploration and Exploitation 3 challenge, which was about pushing news in a given context, and could be reused to support the proposed approach. Another option would be to use counterfactual estimates (see Leon Bottou et al. and Thorsten Joachims et al.). * If the claim is about better exploration, I'd like to have an idea of the influence of the tuning parameters and possibly a discussion/comparison with alternative strategies (including an epsilon-n greedy algorithm). Besides these core concerns, the paper suffers from some imprecision in the notation, which should be clarified. * As an example, using O(1000) and O(1M) in Figure 1: everyone understands what is meant, but O notation is made to eliminate constant terms and O(1) = O(1000). * For eqn (1) it would be better to refer to an "optimistic strategy" rather than to UCB, because the name is already taken by an algorithm which is not this one. Moreover, the given strategy would achieve linear regret if used as described in the paper, which is not desirable for a bandit algorithm (smallest counterexample: with two arms following Bernoulli distributions with different parameters, if the best arm generates two zeros in a row at the beginning, the strategy is stuck with a zero mean and zero variance estimate; a small simulation of this failure mode is sketched below). This is why bandit bounds include a term which increases with the total number of plays. I agree that in practice this effect can be mitigated and that the strategy can be correct in the contextual case (but then I'd like the dependencies on x to be clear). * The paper never mentions what is a scalar, a vector, or a matrix.
This creates confusion: as an example, eqn (3) can have several different meanings depending on whether the values are scalars, scalars depending on x, or a diagonal \sigma matrix. * In the paragraph above (2), I am unsure what a "binomial noise error distribution" for epsilon is, but a few lines later epsilon becomes Gaussian; why not just mention that you assume Gaussian noise on the parameters of a Bernoulli distribution?
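To make the linear-regret counterexample above concrete, here is a small simulation sketch (an illustration, not taken from the paper): two Bernoulli arms, an "optimistic" score of empirical mean plus empirical standard deviation as a stand-in for eqn (1), and the unlucky event of the better arm returning two zeros first is forced by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
p = [0.9, 0.5]                      # arm 0 is the best arm
rewards = [[0.0, 0.0], [1.0, 0.0]]  # forced initial pulls: best arm unluckily gives two zeros

T = 10_000
pulls_of_best = 0
for t in range(T):
    # "optimistic" score: empirical mean + empirical std, with no log(t) exploration bonus
    scores = [np.mean(r) + np.std(r) for r in rewards]
    a = int(np.argmax(scores))
    rewards[a].append(float(rng.random() < p[a]))
    pulls_of_best += (a == 0)

print(f"best arm pulled {pulls_of_best}/{T} times")
# Stays at 0: the best arm's score is frozen at zero, so it is never retried
# and the regret grows linearly with T.
```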
iclr_2018_HyxjwgbRZ
The sign stochastic gradient descent method (signSGD) utilises only the sign of the stochastic gradient in its updates. For deep networks, this one-bit quantisation has surprisingly little impact on convergence speed or generalisation performance compared to SGD. Since signSGD is effectively compressing the gradients, it is very relevant for distributed optimisation where gradients need to be aggregated from different processors. What's more, signSGD has close connections to common deep learning algorithms like RMSprop and Adam. We study the base theoretical properties of this simple yet powerful algorithm. For the first time, we establish convergence rates for signSGD on general non-convex functions under transparent conditions. We show that the rate of signSGD to reach first-order critical points matches that of SGD in terms of number of stochastic gradient calls, but loses out by roughly a linear factor in the dimension for general non-convex functions. We carry out simple experiments to explore the behaviour of sign gradient descent (without the stochasticity) close to saddle points and show that it can help to completely avoid certain kinds of saddle points without using either stochasticity or curvature information.
UPDATED REVIEW: I have checked all the reviews and also checked the most recent version. I like the new experiments, but they do not impress me enough to increase my score. The assumption about the variance fixes my concern, but as you have pointed out, it is a bit more tricky :) I would really suggest you work on the paper a bit more and re-submit it. -------------------------------------------------------------------- In this paper, the authors provide a convergence analysis of the signSGD algorithm for the non-convex case. The crucial assumption for the proof is Assumption 3; otherwise, the proof technique follows a standard path in non-convex optimization. In general, the paper is written nicely and is easy to follow. ============================================== "The major issue": Why Assumption 3 can be problematic in practice is given below: Let us assume just a convex case and assume we have just 2 kinds of functions in 2D: f_1(x) = 0.5 x_1^2 and f_2(x) = 0.5 x_2^2. Then define the function f(x) = E [ f_i(x) ], where $i = 1$ with probability 0.5 and $i = 2$ with probability 0.5. We have that g(x) = 0.5 [ x_1, x_2 ]^T. Let us choose $i = 1$ and choose $x = [a, a]^T$, where $a$ is some parameter. Then (4) says that there has to exist a $\sigma$ such that P [ | \bar g_i(x) - g_i(x) | > t ] \leq 2 exp( - t^2 / 2\sigma^2 ) for all x. Plugging our function in, it should be true that P [ | B - 0.5 a | > t ] \leq 2 exp( - t^2 / 2\sigma^2 ) for all x, where B is a random variable which takes the value "a" with probability 0.5 and the value "0" with probability 0.5. If we choose $t = 0.1a$ then it has to be true that 1 = P [ | B - 0.5 a | > 0.1a ] \leq 2 exp( - 0.01 a^2 / 2\sigma^2 ) ----> 0 as $a \to \infty$. Hence, even in this simple example, one can show that this assumption is violated unless $\sigma = \infty$ (see the numerical sketch below). One way to improve this is to add more assumptions and maybe a projection onto a compact set? ============================================== Hence, I think the theory should be improved. In terms of experiments, I like the discussion about escaping saddle points; it is indeed a good discussion. However, it would be nicer to have more numerical experiments. One thing I am also struggling with is the "advantage" of using signSGD: one saves on communication (instead of sending 4*8 bits per dimension, one sends only 1 bit); however, one needs "d" times more iterations, hence the theory shows that it is much worse than SGD (see (11)).
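A quick numerical sketch of the counterexample (an illustration only): for x = [a, a], the first coordinate of the stochastic gradient deviates from the true gradient by exactly 0.5*a, so for t = 0.1*a the left-hand side of (4) equals 1 while any fixed-sigma sub-Gaussian bound shrinks to 0 as a grows.

```python
import numpy as np

def check(a, sigma=1.0, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([a, a], dtype=float)
    g = 0.5 * x                                # true gradient of f(x) = 0.25*(x1^2 + x2^2)
    i = rng.integers(1, 3, size=n_samples)     # pick f_1 or f_2 with probability 0.5 each
    g_bar_1 = np.where(i == 1, x[0], 0.0)      # first coordinate of the stochastic gradient
    t = 0.1 * a
    lhs = np.mean(np.abs(g_bar_1 - g[0]) > t)  # empirical P[ |g_bar_1 - g_1| > t ]
    rhs = 2 * np.exp(-t**2 / (2 * sigma**2))   # claimed sub-Gaussian tail bound
    return lhs, rhs

for a in [1, 10, 100, 1000]:
    lhs, rhs = check(a)
    print(f"a={a:>5}: empirical tail = {lhs:.2f}, bound = {rhs:.2e}")
# The empirical tail stays at 1.00 while the bound goes to 0 as a grows,
# so no single finite sigma can satisfy (4) for all x.
```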
iclr_2018_S1PWi_lC-
We apply multi-task learning to image classification tasks on MNIST-like datasets. The MNIST dataset has been referred to as the drosophila of machine learning and has been the testbed of many learning theories. The NotMNIST dataset and the FashionMNIST dataset have been created with the MNIST dataset as reference. In this work, we exploit these MNIST-like datasets for multi-task learning. The datasets are pooled together for learning the parameters of joint classification networks. Then the learned parameters are used as the initial parameters to retrain disjoint classification networks. The baseline recognition models are all-convolutional neural networks. Without multi-task learning, the recognition accuracies for MNIST, NotMNIST and FashionMNIST are 99.56%, 97.22% and 94.32% respectively. With multi-task learning to pre-train the networks, the recognition accuracies are respectively 99.70%, 97.46% and 95.25%. The results re-affirm that the multi-task learning framework, even with data of different genres, does lead to significant improvement.
The manuscript mainly utilizes the data from all three MNIST-like datasets to pre-train the parameters of joint classification networks, and the pre-trained parameters are utilized to initialize the disjoint classification networks (of the three datasets). The presented idea is quite simple, and the authors only re-affirm that multi-task learning can lead to performance improvement by simultaneously leveraging the information of multiple tasks. There is no technical contribution. Pros: 1. The main idea is clearly presented. 2. It is interesting to visualize the results obtained with/without multi-task learning in Figure 6. Cons: 1. The contribution is quite limited since the authors only apply multi-task learning to the three MNIST-like datasets and there is no technical contribution. 2. There is no difference between the architecture of the single-task learning network and the multi-task learning network. 3. Many unclear points, e.g., there is no description of “zero-padding” and why it can enhance the target labels. What is the “two-stage learning rate decay scheme” and why is it implemented? It is also unclear what we can observe from Figure 4.
iclr_2018_rJJzTyWCZ
The cloze test is widely adopted in language exams to evaluate students' language proficiency. In this paper, we propose the first large-scale human-designed cloze test dataset, CLOTH, in which the questions were used in middle-school and high-school language exams. With the missing blanks carefully created by teachers and candidate choices purposely designed to be confusing, CLOTH requires a deeper language understanding and a wider attention span than previous automatically generated cloze datasets. We show humans outperform dedicatedly designed baseline models by a significant margin, even when the model is trained on sufficiently large external data. We investigate the source of the performance gap, trace model deficiencies to some distinct properties of CLOTH, and identify the limited ability of comprehending a long-term context to be the key bottleneck. In addition, we find that human-designed data leads to a larger gap between the model's performance and human performance when compared to automatically generated data.
This paper presents a new dataset for cloze style question-answering. The paper starts with a very valid premise that many of the automatically generated cloze datasets for testing reading comprehension suffer from many shortcomings. The paper collects data from a novel source: reading comprehension data for English exams in China. The authors collect data for middle school and high school exams and clean it to obtain passages and corresponding questions and candidate answers for each question. The rest of the paper is about analyzing this data and performance of various models on this dataset. 1) The authors divide the questions into various types based on the type of reasoning needed to answer the question, noticeably short-term reasoning and long-term reasoning. 2) The authors then show that human performance on this dataset is much higher than the performance of LSTM-based and language model-based baselines; this is in contrast to existing cloze style datasets where neural models achieve close to human performance. 3) The authors hypothesize that this is partially explained by the fact that neural models do not make use of long-distance information. The authors verify their claim by running human eval where they show annotators only 1 sentence near the empty slot and find that the human performance is basically matched by a language model trained on 1 billion words. This part is very cool. 4) The authors then hypothesize that human-generated data provides more information. They even train an informativeness prediction network to (re-)weight randomly generated examples which can then be used to train a reading comprehension model. Pros of this work: 1) This work contributes a nice dataset that addresses a real problem faced by automatically generated datasets. 2) The breakdown of characteristics of questions is quite nice as well. 3) The paper is clear, well-written, and is easy to read. Cons: 1) Overall, some of the claims made by the paper are not fully supported by the experiments. E.g., the paper claims that neural approaches are much worse than humans on CLOTH data -- however, they do not use state-of-the-art neural reading comprehension techniques but only a standard LSTM baseline. It might be the case that the best available neural techniques are still much worse than humans on CLOTH data, but that remains to be seen. 2) Informativeness prediction: The authors claim that the human-generated data provides more information than automatically/randomly generated data by showing that the models trained on the former achieve better performance than the latter on test data generated by humans. The claim here is problematic for two reasons: a) The notion of "informativeness" is not clearly defined. What does it mean here exactly? b) The claim does not seem fully justified by the experiments -- the results could just as well be explained by distributional mismatch without appealing to the amount of information per se. The authors should show comparisons when evaluating on randomly generated data. Overall, this paper contributes a useful dataset; the analysis can be improved in some places.
iclr_2018_S1v4N2l0-
Published as a conference paper at ICLR 2018 UNSUPERVISED REPRESENTATION LEARNING BY PRE- DICTING IMAGE ROTATIONS Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4% that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: https://github.com/gidariss/FeatureLearningRotNet.
**Paper Summary** This paper proposes a self-supervised method, RotNet, to learn effective image features from images by predicting the rotation, discretized into 4 rotations of 0, 90, 180, and 270 degrees. The authors claim that this task is intuitive because a model must learn to recognize and detect relevant parts of an image (object orientation, object class) in order to determine how much an image has been rotated. They visualize attention maps from the first few conv layers and claim that they attend to parts of the image such as faces, eyes, or mouths. They also visualize filters from the first convolutional layer and show that these learned filters are more diverse than those from training the same model in a supervised manner. They train RotNet to learn features of CIFAR-10 and then train, in a supervised manner, additional layers that use RotNet feature maps to perform object classification. They achieve 91.16% accuracy, outperforming other unsupervised feature learning methods. They also show that in a semi-supervised setting where only a small number of images of each category is available at training time, their method outperforms a supervised method. They next train RotNet on ImageNet and use the learned features for image classification on ImageNet and PASCAL VOC 2007 as well as object detection on PASCAL VOC 2007. They achieve an ImageNet and PASCAL classification score as well as an object detection score higher than other baseline methods. This task requires the ability to understand the types, the locations, and the poses of the objects presented in images and therefore provides a powerful surrogate supervision signal for representation learning. To demonstrate the effectiveness of the proposed method, the authors evaluate it under a variety of tasks with different settings. **Paper Strengths** - The motivation of this work is well-written. - The proposed self-supervised task is simple and intuitive. This simple idea of using image rotation to learn features is easy to implement, and the image rotations can be produced without any artifacts. - Requiring no scale and aspect ratio image transformations, the proposed self-supervised task does not introduce any low-level visual artifacts that will lead the CNN to learn trivial features with no practical value for the visual perception tasks. - Training the proposed model requires the same computational cost as supervised learning, which is much faster than training image reconstruction based representation learning frameworks. - The experiments show that this representation learning task can improve the performance when only a small number of annotated examples is available (the semi-supervised settings). - The implementation details are included, including the way of implementing image rotations, different network architectures evaluated on different datasets, optimizers, learning rates with weight decay, batch sizes, numbers of training epochs, etc. - Outperforms all baselines and achieves performance close to, but still below, fully supervised methods - Plots rotation prediction accuracy and object recognition accuracy over time and shows that they are correlated **Paper Weaknesses** - The proposed method considers a set of different geometric transformations as discrete and independent classes and formulates the task as a classification task. However, the inherent relationships among geometric transformations are ignored.
For example, rotating an image 90 degrees and rotating an image 180 degrees should be closer compared to rotating an image 90 degrees and rotating an image 270 degrees. - The evaluation of low-level perception vision task is missing. In particular, evaluating the learned representations on the task of image semantic segmentation is essential in my opinion. Since we are interested in assigning the label of an object class to each pixel in the image for the task, the ability to encode semantic image feature by learning from performing the self-supervised task can be demonstrated. - The figure presenting the visualization of the first layer filters is not clear to understand nor representative of any finding. - ImageNet Top-1 classification results produced by Split-Brain (Zhang et al., 2016b) and Counting (Noroozi et al., 2017) are missing which are shown to be effective in the paper [Representation Learning by Learning to Count](https://arxiv.org/abs/1708.06734). - An in-depth analysis of the correlation between the rotation prediction accuracy and the object recognition accuracy is missing. Showing both the accuracies are improved over time is not informative. - Not fully convinced on the intuition, some objects may not have a clear direction of what should be “up” or “down” (symmetric objects like balls), in Figure 2, rotated image X^3 could plausibly be believed as 0 rotation as well, do the failure cases of rotation relate to misclassified images? - “remarkably good performance”, “extremely good performance” – vague language choices (abstract, conclusion) - Per class breakdown on CIFAR 10 and/or PASCAL would help understand what exactly is being learned - In Figure 3, it would be better to show attention maps on rotated images as well as activations from other unsupervised learning methods. With this figure, it is hard to tell whether the proposed model effectively focuses on high level objects. - In Figure 4, patterns of the convolutional filters are not clear. It would be better to make the figures clear by using grayscale images and adjusting contrast. - In Equation 2, the objective should be maximizing the sum of losses or minimizing the negative. Also, in Equation 3, the summation should be computed over y = 1 ~ K, not i = 1 ~ N. **Preliminary Evaluation** This paper proposes a self-supervised task which allows a CNN to learn meaningful visual representations without requiring supervision signal. In particular, it proposes to train a CNN to recognize the rotation applied to an image, which requires the understanding the types, the locations, and the poses of the objects presented in images. The experiments demonstrate that the learned representations are meaningful and transferable to other vision tasks including object recognition and object detection. Strong quantitative results outperforming unsupervised representation learning methods, but lacking qualitative results to confirm/interpret the effectiveness of the proposed method.
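As a concrete illustration of the pretext task itself (the 4-way rotation classification described in the abstract), a minimal data-pipeline sketch is given below; it is not the authors' code, and the array layout and use of np.rot90 are assumptions.

```python
import numpy as np

def make_rotation_batch(images):
    """Given a batch of images with shape (B, H, W, C), return the four
    0/90/180/270-degree rotations and the corresponding pretext labels.
    A ConvNet trained to predict these labels needs no human annotation."""
    rotated, labels = [], []
    for k in range(4):                               # k * 90 degrees
        rotated.append(np.rot90(images, k=k, axes=(1, 2)))
        labels.append(np.full(len(images), k, dtype=np.int64))
    return np.concatenate(rotated, axis=0), np.concatenate(labels, axis=0)

# Toy usage: 8 random "images" become 32 training examples for the 4-class task.
x = np.random.rand(8, 32, 32, 3).astype(np.float32)
x_rot, y_rot = make_rotation_batch(x)
print(x_rot.shape, y_rot.shape)   # (32, 32, 32, 3) (32,)
```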
iclr_2018_Sk2u1g-0-
Published as a conference paper at ICLR 2018 CONTINUOUS ADAPTATION VIA META-LEARNING IN NONSTATIONARY AND COMPETITIVE ENVIRONMENTS The ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.
---- Summary ---- This paper addresses the problem of learning to operate in non-stationary environments, represented as a Markov chain of distinct tasks. The goal is to meta-learn updates that are optimal with respect to transitions between pairs of tasks, allowing for few-shot execution time adaptation that does not degrade as the environment diverges ever further from the training time task set. During learning, an inner loop iterates over consecutive task pairs. For each pair (T_i, T_{i+1}), trajectories sampled from T_i are used to construct a local policy that is then used to sample trajectories from T_{i+1}. By calculating the outer-loop policy gradient with respect to expectations of the trajectories sampled from T_i, and the trajectories sampled from T_{i+1} using the locally optimal inner-loop policy, the approach learns updates that are optimal with respect to the Markovian transitions between pairs of consecutive tasks. The training time optimization algorithm requires multiple passes through a given sequence of tasks. Since this is not feasible at execution time, the trajectories calculated while solving task T_i are used to calculate updates for task T_{i+1} and these updates are importance weighted w.r.t. the sampled trajectories' expectation under the final training-time policy. The approach is evaluated on a pair of tasks. In the locomotion task, a six-legged agent has to adapt to deal with an increasing inhibition to a pair of its legs. In the new RoboSumo task, agents have to adapt to effectively compete with increasingly competent opponents that have been trained for longer periods of time via self-play. It is clear that, in the locomotion task, the meta learning strategy maintains performance much more consistently than approaches that adapt through PPO-tracking, or implicitly by maintaining state in the RL^2 approach. This behaviour is less visible in the RoboSumo task (Fig. 5) but it does seem to be present. Further experiments show that when the adaptation approaches are forced to fight against each other in 100-round iterated adaptation games, the meta learning strategy is dominant. However, the authors also do point out that this behaviour is highly dependent on the number of episodes allowed in each game, and when the agent can accumulate a large amount of evidence in a given environment the meta learning approach falls behind adaptation through tracking. The bias that allows the agent to learn effectively from few examples precludes it from effectively using many examples. ---- Questions for author ---- Updates are performed from \theta to \phi_{i+1} rather than from \phi_i to \phi_{i+1}. Footnote 2 states that this was due to empirical observations of instability but it also necessitates the importance weight correction during execution time. I would like to know how the authors expect the sample in Eqn 9 to behave in much longer running scenarios, when \pi_{\phi} starts to diverge drastically from \pi_{\theta} but very few trajectories are available. The spider-spider results in Fig. 6 do not support the argument that meta learning is better than PPO tracking in the few-shot regime. Do you have any idea of why this is? ---- Nits ---- There is a slight muddiness of notation around the use of \tau in lines 7 & 9 of Algorithm 1. I think it should be edited to line up with the definition given in Eqn. 8. The figures in this paper depend excessively and unnecessarily on color. They should be made more printer- and colorblind-friendly.
---- Conclusion ---- I think this paper would be a very worthy contribution to ICLR. Learning to adapt on the basis of few observations is an important prerequisite for real world agents, and this paper presents a reasonable approach backed up by a suite of informative evaluations. The quality of the writing is high, and the contributions are significant. However, this topic is very much outside of my realm of expertise and I am unfamiliar with the related work, so I am assigning my review a low confidence.
iclr_2018_rkPLzgZAZ
MODULAR CONTINUAL LEARNING IN A UNIFIED VISUAL ENVIRONMENT A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework. We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment. We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.
Reading this paper feels like reading at least two closely-related papers compressed into one, with overflow into the appendix (e.g. one about the EMS module, one about the the recurrent voting, etc). There were so many aspects/components, that I am not entirely confident I fully understood how they all work together, and in fact I am pretty confident there was at least some part of this that I definitely did not understand. Reading it 5-20 more times would most likely help. For example, consider the opening example of Section 3. In principle, this kind of example is great, and more of these would be very useful in this paper. This particular one raises a few questions: -Eq 5 makes it so that $(W \Psi)$ and $(a_x)$ need to be positive or negative together. Why use ReLu's here at all? Why not just $sign( (W \Psi) a_x) $? Multiplying them will do the same thing, and is much simpler. I am probably missing something here, would like to know what it is... (Or, if the point of the artificial complexity is to give an example of the 3 basic principles, then perhaps point this out, or point out why the simpler version I just suggested would not scale up, etc) -what exactly, in this example, does $\Psi$ correspond to? In prev discussion, $\Psi$ is always written with subscripts to denote state history (I believe), so this is an opportunity to explain what is different here. -Nitpick: why is a vector written as $W$? (or rather, what is the point of bold vs non-bold here?) -a non-bold version of $Psi$, a few lines below, seems to correspond to the 4096 features of VGG's FC6, so I am still not sure what the bold version represents -The defs/eqns at the beginning of section 3.1 (Sc, CReLu, etc) were slightly hard to follow and I wonder whether there were any typos, e.g. was CReS meant to refer directly to Sc, but used the notation ${ReLu}^2$ instead? Each of these on its own would be easier to overlook, but there is a compounding effect here for me, as a reader, such that by further on in the paper, I am rather confused. I also wonder whether any of the elements described, have more "standard" interpretations/notations. For example, my slight confusion propagated further: after above point, I then did not have a clear intuition about $l_i$ in the EMS module. I get that symmetry has been built in, e.g. by the definitions of CReS and CReLu, etc, but I still don't see how it all works together, e.g. are late bottleneck architectures *exactly* the same as MLPs, but where inputs have simply been symmetrized, squared, etc? Nor do I have intuition about multiplicative symmetric interactions between visual features and actions, although I do get the sense that if I were to spend several hours implementing/writing out toy examples, it would clarify it significantly (in fact, I wouldn't be too surprised if it turns out to be fairly straightforward, as in my above comment indicating a seeming equivalence to simply multiplying two terms and taking the resulting sign). If the paper didn't need to be quite as dense, then I would suggest providing more elucidation for the reader, either with intuitions or examples or clearer relationships to more familiar formulations. Later, I did find that some of the info I *needed* in order to understand the results (e.g. exactly what is meant by a "symmetry ablation", how was that implemented?) was in fact in the appendices (of which there are over 8 pages). I do wonder how sensitive the performance of the overall system is to some of the details, like, e.g. 
the low-temp Boltzmann sampling rather than identity function, as described at the end of S2. My confidence in this review is somewhere between 2 and 3. The problem is an interesting one, the overall approach makes sense, it is clear the authors have done a very substantial amount of work, and very diligently so (well-done!), some of the ideas are interesting and seem creative, but I am not sure I understand the glue of the details, and that might be very important here in order to assess it effectively.
iclr_2018_HJ94fqApW
Published as a conference paper at ICLR 2018 RETHINKING THE SMALLER-NORM-LESS-INFORMATIVE ASSUMPTION IN CHANNEL PRUNING OF CONVOLUTION LAYERS Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions in resource-limited scenarios. A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs) that does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computationally difficult and not-always-useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: first to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels to be constant, and then to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We experimented with our approach on several image learning benchmarks and demonstrate its interesting aspects and competitive performance.
In this paper, the authors propose a data-dependent channel pruning approach to simplify CNNs with batch normalization. The authors view CNNs as a network flow of information and apply sparsity regularization on the batch-normalization scaling parameter \gamma, which is seen as a “gate” to the information flow. Specifically, the approach uses an iterative soft-thresholding algorithm (ISTA) step to induce sparsity in \gamma during the overall training phase of the CNN (with additional rescaling to improve efficiency; a minimal sketch of this update is given below). In the experiments section, the authors apply their pruning approach on a few representative problems and networks. The concept of applying sparsity on \gamma to prune channels is an interesting one, compared to the usual approaches of sparsity on weights. However, ISTA, which is equivalent to an L1 penalty on \gamma, is in spirit the same as the “smaller-norm-less-informative” assumption. Hence, the title seems a bit misleading. The quality and clarity of the paper can be improved in some sections. Some specific comments by section: 3. Rethinking Assumptions: - While both issues outlined here are true in general, the specific examples are either artificial or can be resolved fairly easily. For example: applying L1-norm penalties only on alternate layers is artificial, and applying the penalties on all Ws would fix the issue in this case. Also, the scaling issue of W can be resolved by setting the norm of W to 1, as shown in He et al., 2017. Can the authors provide better examples here? - Can the authors add specific citations of the existing works which claim to use Lasso, group Lasso, or thresholding to enforce parameter sparsity? 4. Channel Pruning - The notation can be improved by defining or replacing “sum_reduced”. - ISTA is only an algorithm; the basic assumption is still L1 -> sparsity, or smaller-norm-less-informative. Can the authors address the earlier comment about “a theoretical gap questioning existing sparsity inducing formulation and actual computational algorithms”? - Can the authors address the earlier comment on “how to set thresholds for weights across different layers”, by providing motivation for the choice of penalty for each layer? - Can the authors address the earlier comment on how their approach provides “guarantees for preserving neural net functionality approximately”? 5. Experiments - CIFAR-10: Since there is a loss of accuracy with channel pruning, it would be useful to compare the accuracy of a pruned model with simpler models of similar parameter size (like pruned-resnet-101 vs. resnet-50 in the ILSVRC subsection). - ILSVRC: The comparisons between similar param-size models are extremely useful in highlighting the contribution of this work. However, the resnet-34/50/101 top-1 error rates from Tables 3/4 in (He et al., 2016) seem to be lower than those reported in Table 3 here. Can the authors clarify? - Fore/Background: Can the authors add citations for the datasets and metrics for this problem? Overall, channel pruning with sparse \gammas is an interesting concept and the numerical results seem promising. The authors have started with the right motivation and the initial section asks the right questions; however, some of those questions are left unanswered in the subsequent work as detailed above.
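For reference, the ISTA step described above boils down to a soft-thresholding update applied to the batch-normalization scales after each gradient step. The sketch below is illustrative only: the learning rate, the penalty value, and the stand-in gradient are assumptions, and the paper's additional rescaling is omitted.

```python
import numpy as np

def ista_step(gamma, grad_gamma, lr, penalty):
    """One ISTA update on a layer's batch-norm scales: a gradient step on the
    data loss followed by soft-thresholding, which is the proximal operator
    of the L1 penalty `penalty * ||gamma||_1`."""
    g = gamma - lr * grad_gamma                           # plain gradient step
    return np.sign(g) * np.maximum(np.abs(g) - lr * penalty, 0.0)

# Toy usage: channels whose scales keep shrinking end up exactly at zero and
# can then be pruned (after folding their constant output into downstream biases).
gamma = np.array([0.9, 0.05, -0.4, 0.01])
for _ in range(50):
    grad = 0.1 * gamma          # stand-in for the data-loss gradient
    gamma = ista_step(gamma, grad, lr=0.1, penalty=0.05)
print(gamma)                    # the small-scale channels have hit exactly 0.0
```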
iclr_2018_H1YynweCb
Our work addresses two important issues with recurrent neural networks: (1) they are over-parametrized, and (2) the recurrent weight matrix is ill-conditioned. The former increases the sample complexity of learning and the training time. The latter causes the vanishing and exploding gradient problem. We present a flexible recurrent neural network model called Kronecker Recurrent Units (KRU). KRU achieves parameter efficiency in RNNs through a Kronecker factored recurrent matrix. It overcomes the ill-conditioning of the recurrent matrix by enforcing soft unitary constraints on the factors. Thanks to the small dimensionality of the factors, maintaining these constraints is computationally efficient. Our experimental results on seven standard data-sets reveal that KRU can reduce the number of parameters by three orders of magnitude in the recurrent weight matrix compared to the existing recurrent models, without trading the statistical performance. These results in particular show that while there are advantages in having a high dimensional recurrent space, the capacity of the recurrent part of the model can be dramatically reduced.
Summary of the paper ------------------------------- This paper proposes to factorize the hidden-to-hidden matrix of RNNs into a Kronecker product of small matrices, thus reducing the number of parameters, without reducing the size of the hidden vector. They also propose to use a soft unitary constraint on those small matrices (which is equivalent to a soft unitary constraint on the Kronecker product of those matrices), that is fast to compute. They evaluate their model on 6 small scale RNN experiments. Clarity, Significance and Correctness -------------------------------------------------- Clarity: The main idea is clearly motivated and presented, but the experiment section failed to convince me (see details below). Significance: The idea of using factorization for RNNs is not particularly novel. However, it is really nice to be able to decouple the hidden size and the number of recurrent parameters in a simple way. Also, the combination of Kronecker product and soft unitary constraint is really interesting. Correctness: There are minor flaws. Some of the baselines seems to perform poorly, and some comparisons with the baselines seems unfair (see the questions below). Questions -------------- 1. Section 3: You say that you can vary 'pf' and 'qf' to set the trade-off between computational budget and performances. Have you run some experiments where you vary those parameters? 2. Section 4: Are you using the soft unitary constraint in your experiments? Do you have an hyper-parameter that sets the amplitude of the constraint? If yes, what is its value? Are you using it also on the vanilla RNN or the LSTM? 3. Section 4.1: You say that you don't train the recurrent matrix in the KRU version. Do you also not train the recurrent matrix in the other models (RNN, LSTM,...)? If yes, how do you explain the differences? If no, I don't see how those curves compare. 4. Section 4.3: Why does your LSTM in pMNIST performs so poorly? There are way better curves reported in the literature (eg in "Unitary Evolution Recurrent Neural Netwkrs" or "Recurrent Batch Normalization"). 5. General: How does your method compares with other factorization approaches, such as in "Factorization Tricks for LSTM Networks"? 6. Section 4: How does the KRU compares to the other parametrizations, in term of wall-clock time? Remarks ------------ The main claim of the paper is that RNN are over-parametrized and take a long time to train (which I both agree with), but you didn't convinced me that your parametrization solve any of those problems. I would suggest to: 1. Compare more clearly setups where you fix the hidden size. 2. Compare more clearly setups where you fix the number of parameters. With systematic comparisons like that, it would be easier to understand where the gains in performances are coming from. 3. Add an experiment where you vary 'pf' and 'qf' (and keep the hidden size fixed) to show how the optimization/generalization performances can be tweaked. 4. Add computation time (wall-clock) for all the experiments, to see how it compares in practice (this could definitively weight in your favor, since you seems to have a nice CUDA implementation). 5. Present results on larger-scale applications (Text8, Teaching Machines to Read and Comprehend, 3 layers LSTM speech recognition setup on TIMIT, DRAW, Machine Translation, ...), especially because your method is really easy to plug in any existing code available online. Typos / Form ------------------ 1. 
sct 1, par 3: "using Householder reflection vectors, it allows a fine-grained" -> "using Householder reflection vectors, which allows a fine-grained" 2. sct 1, par 3: "This work called as Efficient" -> "This work, called Efficient" 5. sct 1, par 5: "At the heart of KRU is the use of Kronecker" -> "At the heart of KRU, we use Kronecker" 6. sct 1, par 5: "Thanks to the properties of Kronecker matrices" -> "Thanks to the properties of the Kronecker product" 7. sct 1, par 5: "vanilla real space RNN" -> "vanilla RNN" 8. sct 2, par 1: "Consider a standard recurrent" -> "Consider a standard vanilla recurrent" 9. sct 2, par 1: "step t RNN" -> "step t, a vanilla RNN" 11. sct 2.1, par 1: "U and V, this is efficient using modern BLAS" -> "U and V, which can be efficiently computed using modern BLAS" 12. sct 2.3, par 2: "matrices have a determinant of 1 or −1, i.e., the set of all rotations and reflections respectively" -> "matrices, i.e., the set of all rotations and reflections, have a determinant of 1 or −1." 13. sct 3, par 1: "are called as Kronecker" -> "are called Kronecker" 14. sct 3, par 3: "used it's spectral" -> "used their spectral" 15. sct 3, par 3: "Kronecker matrices" -> "Kronecker products" 18. sct 4.4, par 3: "parameters are increased" -> "parameters increases" 19. sct 5: There is some more typos in the conclusion ("it's" -> "its") 20. Some plots are hard to read / interpret, mostly because of the round "ticks" you use on the curves. I suggest you remove them everywhere. Also, in the adding problem, it would be cleaner if you down-sampled a bit the curves (as they are super noisy). In pixel by pixel MNIST, some of the legends might have some typos (FC uRNN), and you should use "N" instead of "n" to be consistent with the notation of the paper. 21. Appendix A to E are not necessary, since they are from the literature. 22. sct 3.1, par 2: "is approximately unitary." -> "is approximately unitary (cf Appendix F)." 23. sct 4, par 1: "and backward operations." -> "and backward operations (cf Appendix G and H)." Pros ------ 1. Nice Idea that allows to decouple the hidden size with the number of hidden-to-hidden parameters. 2. Cheap soft unitary constraint 3. Efficient CUDA implementation (not experimentally verified) Cons ------- 1. Some experimental setups are unfair, and some other could be clearer 2. Only small scale experiments (although this factorization has huge potential on larger scale experiments) 3. No wall-clock time that show the speed of the proposed parametrization.
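To make the parameter-count argument concrete, here is a small sketch of how a Kronecker-factored recurrent matrix is assembled and how few parameters it needs; the factor sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np
from functools import reduce

# Build a 256x256 "recurrent" matrix as a Kronecker product of four 4x4 factors.
factors = [np.random.randn(4, 4) for _ in range(4)]
W = reduce(np.kron, factors)

full_params = W.size                          # 256 * 256 = 65536 for a dense matrix
kru_params = sum(f.size for f in factors)     # 4 * 16 = 64 for the Kronecker factors
print(W.shape, full_params, kru_params)       # (256, 256) 65536 64

# The hidden state size (256) is decoupled from the number of recurrent
# parameters (64); varying the factor shapes p_f x q_f trades capacity for
# compression, which is the knob question 1 above asks about.
```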
iclr_2018_SJme6-ZR-
The goal of survival clustering is to map subjects (e.g., users in a social network, patients in a medical study) to K clusters ranging from low-risk to high-risk. Existing survival methods assume the presence of clear end-of-life signals or introduce them artificially using a pre-defined timeout. In this paper, we forego this assumption and introduce a loss function that differentiates between the empirical lifetime distributions of the clusters using a modified Kuiper statistic. We learn a deep neural network by optimizing this loss, that performs a soft clustering of users into survival groups. We apply our method to a social network dataset with over 1M subjects, and show significant improvement in C-index compared to alternatives.
Pros: The paper is a nice read, clearly written, and its originality is well stated by the authors, “addressing the lifetime clustering problem without end-of-life signals for the first time”. I do not feel experienced enough in the field to evaluate the significance of this work. The approach proposed in the manuscript is mainly based on a newly-designed nonparametric loss function using the Kuiper statistic (a short illustration of this statistic is given below) and uses a feed-forward neural network to optimize the loss function. This approach does challenge some traditional assumptions, such as the presence of end-of-life signals or artificially defined timeouts. Instead of giving a clear end-of-life signal, the authors specify a probability of end-of-life that permits us to take into account the associated uncertainty. By analyzing a large-scale social network dataset, it is shown that the proposed method performs better on average than the other two traditional models. Cons: I think that the main drawback of the paper is that the structure of the neural network and the deep learning techniques used for optimizing the loss function are not explained in sufficient detail.
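Since the loss is built around the Kuiper statistic, a short sketch of that statistic on two empirical lifetime samples may help readers unfamiliar with it. This illustrates only the plain statistic; the paper's actual loss is a modified, trainable version, and the sample distributions below are assumptions.

```python
import numpy as np

def kuiper_statistic(a, b):
    """Kuiper statistic between the empirical CDFs of samples `a` and `b`:
    V = max(F_a - F_b) + max(F_b - F_a). Unlike Kolmogorov-Smirnov it is
    equally sensitive in both tails, i.e. to very short and very long lifetimes."""
    grid = np.sort(np.concatenate([a, b]))
    F_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    F_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(F_a - F_b) + np.max(F_b - F_a)

# Toy usage: a short-lifetime cluster vs. a long-lifetime cluster.
rng = np.random.default_rng(0)
short_lived = rng.exponential(scale=1.0, size=1000)
long_lived = rng.exponential(scale=3.0, size=1000)
print(kuiper_statistic(short_lived, long_lived))   # clearly > 0: the clusters differ
```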
iclr_2018_SyOK1Sg0W
ADAPTIVE QUANTIZATION OF NEURAL NETWORKS Despite the state-of-the-art accuracy of Deep Neural Networks (DNN) in various classification problems, their deployment onto resource constrained edge computing devices remains challenging due to their large size and complexity. Several recent studies have reported remarkable results in reducing this complexity through quantization of DNN models. However, these studies usually do not consider the changes in the loss function when performing quantization, nor do they take the different importances of DNN model parameters to the accuracy into account. We address these issues in this paper by proposing a new method, called adaptive quantization, which simplifies a trained DNN model by finding a unique, optimal precision for each network parameter such that the increase in loss is minimized. The optimization problem at the core of this method iteratively uses the loss function gradient to determine an error margin for each parameter and assigns it a precision accordingly. Since this problem uses linear functions, it is computationally cheap and, as we will show, has a closed-form approximate solution. Experiments on MNIST, CIFAR, and SVHN datasets showed that the proposed method can achieve near or better than state-of-the-art reduction in model size with similar error rates. Furthermore, it can achieve compressions close to floating-point model compression methods without loss of accuracy.
Revised Review: The authors have addressed most of my concerns with the revised manuscript. I now think the paper does just enough to warrant acceptance, although I remain a bit concerned that since the benefits are only achievable with customized hardware, the relevance/applicability of the work is somewhat limited. Original Review: The paper proposes a technique for quantizing the weights of a neural network, with bit-depth/precision varying on a per-parameter basis. The main idea is to minimize the number of bits used in the quantization while constraining the loss to remain below a specified upper bound. This is achieved by formulating an upper bound on the number of bits used via a set of "tolerances"; this upper bound is then minimized while estimating any increase in loss using a first order Taylor approximation. I have a number of questions and concerns about the proposed approach. First, at a high level, there are many details that aren't clear from the text. Quantization has some bookkeeping associated with it: in a per-parameter quantization setup it will be necessary to store not just the quantized parameter, but also the number of bits used in the quantization (which takes, e.g., 4-5 extra bits), and there will be some metadata necessary to encode how the quantized value should be converted back to floating point (e.g., for 8-bit quantization of a layer of weights, usually the min and max are stored). From Algorithm 1 it appears the quantization assumes parameters in the range [0, 1]. Don't negative values require another bit? What happens to values larger than 1? How are even bit depths and the associated asymmetries w.r.t. 0 handled (e.g., three levels can represent -1, 0, and 1, but four must choose to either not represent 0 or drop e.g. -1)? None of these details are clearly discussed in the paper, and it's not at all clear that the estimates of compression are correct if these bookkeeping matters aren't taken into account properly. Additionally, the paper implies that this style of quantization has benefits for compute in addition to memory savings. This is highly dubious, since the method will require converting all parameters to a standard bit-depth on the fly (probably back to floating point, since some parameters may have been quantized with bit depth up to 32). Alternatively, custom GEMM/conv routines would be required, which are impossible to make efficient for weights with varying bit depths. So there are likely no runtime compute or memory savings from such an approach. I have a few other specific questions: Are the gradients used to compute \mu computed on the whole dataset or minibatches? How would this scale to larger datasets? I am confused by the equality in Equation 8: What happens for values shared by many different quantization bit depths (e.g., representing 0 presumably requires 1 bit, but may be associated with a much finer tolerance)? Should "minimization in equation 4" refer to equation 3? In the end, while I do like the general idea of utilizing the gradient to identify how sensitive the model might be to quantization of various parameters, there are significant clarity issues in the paper, I am a bit uneasy about some of the compression results claimed without a clearer description of the bookkeeping, and I don't believe an approach of this kind has any significant practical relevance for saving runtime memory or compute resources.
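To make the bookkeeping concern concrete, here is a sketch of plain uniform quantization with the per-tensor metadata it needs (min, max, bit depth); a per-parameter bit depth would additionally need several bits per weight just to record each depth. This is an illustration, not the paper's scheme, and the tensor size and bit depth are arbitrary.

```python
import numpy as np

def quantize(w, bits):
    """Uniform quantization of a weight tensor to `bits` bits, keeping the
    (min, max) range needed to dequantize. Returns integer codes plus metadata."""
    lo, hi = float(w.min()), float(w.max())
    levels = 2 ** bits - 1
    codes = np.round((w - lo) / (hi - lo) * levels).astype(np.int32)
    return codes, (lo, hi, bits)

def dequantize(codes, meta):
    lo, hi, bits = meta
    return lo + codes / (2 ** bits - 1) * (hi - lo)

w = np.random.randn(1000).astype(np.float32)
codes, meta = quantize(w, bits=4)
err = np.abs(dequantize(codes, meta) - w).max()

payload_bits = codes.size * meta[2]   # the quantized weights themselves
overhead_bits = 2 * 32                # per-tensor min/max; per-parameter bit depths
                                      # would add roughly 5 more bits per weight
print(err, payload_bits, overhead_bits)
```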
iclr_2018_By5SY2gA-
Learning word representations from large available corpora relies on the distributional hypothesis that words present in similar contexts tend to have similar meanings. Recent work has shown that word representations learnt in this manner lack sentiment information which, fortunately, can be leveraged using external knowledge. Our work addresses the question: Can affect lexica improve the word representations learnt from a corpus ? In this work, we propose techniques to incorporate affect lexica, which capture fine-grained information about a word's psycholinguistic and emotional orientation, into the training process of Word2Vec SkipGram, Word2Vec CBOW, and GloVe methods using a Joint Learning approach. We use affect scores from Warriner's affect lexicon to regularize the vector representations learnt from an unlabeled corpus. Our proposed method outperforms previously proposed methods on standard tasks for word similarity detection, outlier detection, and sentiment detection. We also show the usefulness of our approach for a new task related to the prediction of formality, frustration, and politeness in corporate communication.
This paper proposes integrating information from a semantic resource that quantifies the affect of different words into a text-based word embedding algorithm. The affect lexicon seems to be a very interesting resource (although I'm not sure what it means to call it 'state of the art'), and I definitely support the endeavour to make language models more reflective of complex semantic and pragmatic phenomena such as affect and sentiment. The justification for why we might want to do this with word embeddings in the manner proposed seems a little unconvincing to me: - The statement that 'delighted' and 'disappointed' will have similar contexts is not evident to me at least (other than them both being participles/adjectives). - Affect in language seems to me to be a very contextual phenomenon. Only a tiny subset of words have intrinsic and context-free affect. Most affect seems to me to come from the use of words in (phrasal, and extra-linguistic) contexts, so a more context-dependent model, in which affect is computed over phrases or sentences, would seem to be more appropriate. Consider words like 'expensive', 'wicked', 'elimination'... The paper considers several applications (sentiment prediction, predicting email tone, word similarity) where the affect-based embeddings yield small improvements. However, in different cases, taking different flavours of affect information (V, A or D) produces the best score, so it is not clear what to conclude about what sort of information is most useful. It is not surprising to me that an algorithm that uses both WordNet and running text to compute word similarity scores improves over one that uses just running text. It is also not surprising that adding information about affect improves the ability to predict sentiment and the tone of emails. To understand the importance of the proposed algorithm (rather than just the addition of additional data), I would like to see comparisons with various different post-processing techniques using WordNet and the affect lexicon (i.e. not just Bollegala et al.), including some much simpler baselines. For instance, what about averaging WordNet path-based distance metrics and distance in word embedding space (for word similarity), and other ways of applying the affect data to email tone prediction?
iclr_2018_rkGZuJb0b
COMPACT NEURAL NETWORKS BASED ON THE MULTISCALE ENTANGLEMENT RENORMALIZATION ANSATZ This paper demonstrates a method for tensorizing neural networks based upon an efficient way of approximating scale invariant quantum states, the Multi-scale Entanglement Renormalization Ansatz (MERA). We employ MERA as a replacement for the fully connected layers in a convolutional neural network and test this implementation on the CIFAR-10 dataset. The proposed method outperforms factorization using tensor trains, providing greater compression for the same level of accuracy and greater accuracy for the same level of compression. We demonstrate MERA layers with 3900 times fewer parameters and a reduction in accuracy of less than 1% compared to the equivalent fully connected layers, scaling like O(N^{log_2 3}).
The paper presents a new parameterization of linear maps for use in neural networks, based on the Multiscale Entanglement Renormalization Ansatz (MERA). The basic idea is to use a hierarchical factorization of the linear map that greatly reduces the number of parameters while still allowing for relatively complex interactions between variables to be modelled. A limited number of experiments on CIFAR10 suggest that the method may work a bit better than related factorizations. The paper contains interesting new ideas and is generally well written. However, a few things are not fully explained, and the experiments are too limited to be convincing. Exposition On a first reading, it is initially unclear why we are talking about higher order tensors at all. Usually, fully connected layers are written as matrix-vector multiplications. It is only on the bottom of page 3 that it is explained that we will reshape the input to a rank-k (k=12) tensor before applying the MERA factored map. It would be helpful to state this sooner. It would also be nice to state that (in the absence of any factorization of the weight tensor) a linear contraction of such a high-rank tensor is no less general than a matrix-vector multiplication. Most ML researchers will not know Haar measure. It would be more reader-friendly to say something like "uniform distribution over orthogonal matrices (i.e. Haar measure)" or something like that. Explaining how to sample orthogonal matrices / tensors (e.g. by SVD) would be helpful as well. The article does not explain what "disentanglers" are. It is very important to explain this, because it will not be generally known by the machine learning audience, and is the main thing that distinguishes this work from earlier tree-based factorizations. On page 5 it is explained that the computational complexity of the proposed method is N^{log_2 D}. For D=2, this is better than a fully connected layer. Although this theoretical speedup may not currently have been realized, it perhaps could be achieved by a custom GPU kernel. It would be nice to highlight this potential benefit in the introduction. Theoretical motivation Although I find the theoretical motivation for the method somewhat compelling, some questions remain that the authors may want to address. For one thing, the paper talks about exploiting "hierarchical / multiscale structure", but this does not refer to the spatial multi-scale structure that is naturally present in images. Instead, the dimensions of a hidden activation vector are arbitrarily ordered, partitioned into pairs, and reshaped into a (2, 2, ..., 2) shape tensor. The pairing of dimensions determines the kinds of interactions the MERA layer can express. Although the earlier layers could learn to produce a representation that can be effectively analyzed by the MERA layer, one is left to wonder if the method could be made to exploit the spatial multi-scale structure that we know is actually present in image data. Another point is that although from a classical statistics perspective it would seem that reducing the number of parameters should be generally beneficial, it has been observed many times that in deep learning, highly overparameterized models are easier to optimize and do not necessarily overfit.
Thus at this point it is not clear whether starting with a highly constrained parameterization would allow us to obtain state of the art accuracy levels, or whether it is better to start with an overparameterized model and gradually constrain it or perform a post-training compression step. Experiments In the introduction it is claimed that the method of Liu et al. cannot capture correlations on different length scales because it lacks disentanglers. Although this may be theoretically correct, the paper does not experimentally verify that the proposed factorization with disentanglers outperforms a similar approach without disentanglers. In my opinion this is a critical omission, because the addition of disentanglers seems to be the main or perhaps only difference to previous work. The experiments show that MERA can drastically reduce the number of parameters of fully connected layers with only a modest drop in accuracy, for a particular ConvNet trained on CIFAR10. Unfortunately this ConvNet is far from state of the art, so it is not clear if the method would also work for better architectures. Furthermore, training deep nets can be tricky, and so the poor performance makes it impossible to tell if the baseline is (unintentionally) crippled. Comparing MERA-2 to TT-3 or MERA-3 to TT-5 (which have an approximately equal number of parameters), the difference in accuracy appears to be less than 1 percentage point. Since only a handful of specific MERA / TT architectures were compared on a single dataset, it is not at all clear that we can expect MERA to outperform TT in many situations. In fact, it is not even clear that the small difference observed is stable under random retraining. Summary An interesting paper with novel theoretical ideas, but insufficient experimental validation. Some expository issues need to be fixed.
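To make the exposition suggestions above concrete, the snippet below illustrates the two steps the review asks the authors to spell out: reshaping a flat activation vector into a (2, 2, ..., 2) tensor, and sampling an (approximately Haar-distributed) random orthogonal matrix via a QR decomposition. This is only an illustrative sketch of those two steps, not the authors' implementation.

import numpy as np

# Reshape a length-4096 activation vector into a rank-12 tensor of shape (2,)*12 (4096 = 2**12).
x = np.random.randn(4096)
x_tensor = x.reshape((2,) * 12)

def random_orthogonal(n, rng=np.random.default_rng(0)):
    # Sample an n x n orthogonal matrix uniformly (Haar measure) by QR-decomposing
    # a Gaussian matrix and fixing the signs using the diagonal of R.
    a = rng.standard_normal((n, n))
    q, r = np.linalg.qr(a)
    return q * np.sign(np.diag(r))

# A 4x4 orthogonal matrix reshaped into a (2, 2, 2, 2) tensor acts on a pair of 2-dimensional indices.
u = random_orthogonal(4).reshape(2, 2, 2, 2)
print(np.allclose(u.reshape(4, 4) @ u.reshape(4, 4).T, np.eye(4)))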
iclr_2018_SkHl6MWC-
We provide a novel perspective on regularizing neural networks. We smooth the objective of neural networks w.r.t. small adversarial perturbations of the inputs. Different from previous works, we assume the adversarial perturbations are caused by the movement field. When the magnitude of the movement field approaches 0, we call it a virtual movement field. By introducing the movement field, we cast the problem of finding adversarial perturbations into the problem of finding an adversarial movement field. By adding proper geometrical constraints to the movement field, such smoothness can be approximated in closed-form by solving a min-max problem and its geometric meaning is clear. We define the approximated smoothness as the regularization term. We derive three regularization terms as running examples which measure the smoothness w.r.t. shift, rotation and scale respectively by adding different constraints. We evaluate our methods on synthetic data, MNIST and CIFAR-10. Experimental results show that our proposed method can significantly improve the baseline neural networks. Compared with state-of-the-art regularization methods, the proposed method achieves a tradeoff between accuracy and geometrical interpretability as well as computational cost.
This paper tackles the overfitting problem when training neural networks based on regularization techniques. More precisely, the authors propose new regularization terms that are related to the underlying virtual geometrical transformations (shift, rotation and scale) of the input data (signal, image and video). By formalizing the geometrical transformation process of a given image, the authors deduce constraints on the objective function which depend on the magnitude of the applied transformation. The proposed method is compared to three methods: one baseline and two methods of the literature (AT and VAT). The comparison is done on three datasets (synthetic data, MNIST and CIFAR10) in terms of test errors (for classification problems) and running time. The paper is well formalized and the idea is interesting. The regularization approach is novel compared to the methods of the literature. Main concerns: 1) The experimental validation of the proposed approach is not consistent: The description of the baseline method is not detailed in the paper. A priori, the baseline should naturally be the method without your regularization terms. But this seems to be contrary to what you displayed in Figure 3. Indeed, in Figure 3, there are three different graphs for the baseline method (i.e., one for each regularization term). It seems that the baseline method depends on the different kinds of regularization term; why? Same question for the AT and VAT methods. In practice, what is the magnitude of the perturbations? Please explain the axes of all the figures. Please explain how you mix your different regularization terms in the method that you call VMT-all. All the following points are related to the experiment for which you presented the results in Table 2: Please provide the results of all your methods on the synthetic dataset (only VMT-shift is provided). What is VMF? Do you mean VMT? For the evaluations, it would be more rigorous to also re-implement the state-of-the-art methods for which you only give the results that they report in their paper. Especially because you re-implemented AT with the L2 constraint, it seems straightforward to also re-implement AT with the L-infinity constraint. Same remark for the dropout regularization technique, which is easy to re-implement on the dense layers of your neural networks within the TensorFlow framework. As you mentioned, your main contribution is related to running time; thus, you should give the running time in all experiments. 2) The method seems to be a tradeoff between accuracy and running time: The VAT method performs better than all your methods on all the datasets. The baseline method is faster than all the methods (Table 3). This being said, the proposed method should be clearly presented in the paper as a tradeoff between accuracy and running time. 3) The positioning of the proposed approach is not so clear: As mentioned above, your method is a tradeoff between accuracy and running time. But you also mentioned (top of page 2) that the contribution of your paper is also related to interpretability in terms of “human perception”. Indeed, you clearly mentioned that the methods of the literature lack interpretability. You also mentioned that your method is more “geometrically” interpretable than methods of the literature. The link between interpretability in terms of “human perception” and “geometry” is not obvious. Anyway, the interpretability point is not sufficiently demonstrated, or at least discussed, in the paper.
4) Many typos in the paper: Section 1: “farward-backward” Section 2.1: “we define the movement field V of as a n+1…” Section 2.2: “lable” - “the another” - “of how it are generated” – Sentence “Since V is normalized.” seems incomplete… - \mathcal{L} not defined - Please make simplifications like \mathcal{L}_{\theta} to \mathcal{L} explicit. Section 3: “DISCUSSTION” Section 4.1: “negtive” Figure 2: “negetive” Table 2: “VMF” Section 4.2: “Tab 2.3” does not exist Section 4.3: “consists 9 convolutional” – “nerual networks”… Please always use the \eqref LaTeX command to refer to equations. There are many other typos in the paper, so please proofread the paper…
iclr_2018_Hy7fDog0b
AMBIENTGAN: GENERATIVE MODELS FROM LOSSY MEASUREMENTS Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fully-observed samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines. (Figure 1 caption: The output of the generator is passed through a simulated random measurement function f_Θ. The discriminator must decide if a measurement is real or generated.) ... models of the data structure. Recent work has shown that generative models can be particularly effective for easier sensing [Bora et al. (2017); Mardani et al. (2017)] -- but if sensing is expensive in the first place, how can we collect enough data to train a generative model to start with? This work solves this chicken-and-egg problem by training a generative model directly from noisy or incomplete samples. We show that our observations can be even projections or more general measurements of different types and the unknown distribution is still provably recoverable. A critical assumption for our framework and theory to work is that the measurement process is known and satisfies certain technical conditions. We present several measurement processes for which it is possible to learn a generative model from a dataset of measured samples, both in theory and in practice. Our approach uses a new way of training GANs, which we call AmbientGAN. The idea is simple: rather than distinguish a real image from a generated image as in a traditional GAN, our discriminator must distinguish a real measurement from a simulated measurement of a generated image; see Figure 1. We empirically demonstrate the effectiveness of our approach on three datasets and a variety of measurement models. Our method is able to construct good generative models from extremely noisy observations and even from low dimensional projections with drastic per-sample information loss. We show this qualitatively by exhibiting samples with good visual quality, and quantitatively by comparing inception scores [Salimans et al. (2016)] to baseline methods. Theoretical results. We first consider measurements that are noisy, blurred versions of the desired images. That is, we consider convolving the original image with a Gaussian kernel and adding independent Gaussian noise to each pixel (our actual theorem applies to more general kernels and noise distributions). Because of the noise, this process is not invertible for a single image. However, we show that the distribution of measured images uniquely determines the distribution of original images. This implies that a pure Nash equilibrium for the GAN game must find a generative model that matches the true distribution.
We show similar results for a dropout measurement model, where each pixel is set to zero with some probability p, and a random projection measurement model, where we observe the inner product of the image with a random Gaussian vector. Empirical results. Our empirical work also considers measurement models for which we do not have provable guarantees. We present results on some of our models now and defer the full exploration to Section 8. In Fig. 2, we consider the celebA dataset of celebrity faces [Liu et al. (2015)] under randomly placed occlusions, where a randomly placed square containing 1/4 of the pixels is set to zero. It is hard to inpaint individual images, so cleaning up the data by inpainting and then learning a GAN on the result yields significant artifacts. By incorporating the measurement process into the GAN training, we can produce much better samples. In Fig. 3a we consider learning from noisy, blurred version of images from the celebA dataset. Each image is convolved with a Gaussian kernel and then IID Gaussian noise is added to each pixel. Learning a GAN on images denoised by Wiener deconvolution leads to poor sample quality while our models are able to produce cleaner samples. In Fig. 3b, we consider learning a generative model on the 2D images in the MNIST handwritten digit dataset [LeCun et al. (1998)] from pairs of 1D projections. That is, measurements consist of picking two random lines and projecting the image onto each line, so the observed value along the line is the sum of all pixels that project to that point. We consider two variants: in the first, the choice of line is forgotten, while in the second the measurement includes the choice of line. We find for both variants that AmbientGAN recovers a lot of the underlying structure, although the first variant cannot identify the distribution up to rotation or reflection.
The paper proposes an approach to train generators within a GAN framework, in the setting where one has access only to degraded / imperfect measurements of real samples, rather than the samples themselves. Broadly, the approach is to have a generator produce the "full" data, pass it through a simulated model of the measurement process, and then train the discriminator to distinguish between these simulated measurements of generated samples, and true measurements of real samples. By this mechanism, the proposed method is able to train GANs to generate high-quality samples from only imperfect measurements. The paper is largely well-written and well-motivated, the overall setup is interesting (I find the authors' practical use cases convincing---where one only has access to imperfect data in the first place), and the empirical results are convincing. The theoretical proofs do make strong assumptions (in particular, the fact that the true distribution must be uniquely constrained by its marginal along the measurement). However, in most theoretical analyses of GANs and neural networks in general, I view proofs as a means of gaining intuition rather than being strong guarantees---and to that end, I found the analysis in this paper to be informative. I would make one suggestion for possible further experimental analysis: it would be nice to see how robust the approach is to systematic mismatches between the true and modeled measurement functions (for instance, slight differences in the blur kernels, noise variance, etc.). Especially in the kind of settings the paper considers, I imagine it may sometimes also be hard to accurately model the measurement function of a device (or it may be necessary to use a computationally cheaper approximation for training). I think a study of how such mismatches affect the training procedure would be instructive (perhaps more so than some of the quantitative evaluations, given that they at best only approximately measure sample quality).
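To make the suggested robustness study concrete, here is a minimal sketch of the measurement step and of the kind of train/test mismatch experiment proposed above. The blur widths and noise level are arbitrary placeholder values, the GAN training loop itself is omitted, and this is not the authors' code.

import numpy as np
from scipy.ndimage import gaussian_filter

def measure(images, blur_sigma, noise_sigma, rng):
    # Simulated lossy measurement: Gaussian blur followed by additive Gaussian noise.
    # `images` has shape (batch, height, width).
    blurred = np.stack([gaussian_filter(img, sigma=blur_sigma) for img in images])
    return blurred + noise_sigma * rng.standard_normal(blurred.shape)

rng = np.random.default_rng(0)
fake_batch = rng.standard_normal((8, 32, 32))   # stand-in for generator output G(z)

# AmbientGAN-style discriminator inputs: simulated measurements of generated samples
# are compared against true measurements of real samples (the latter are given, not simulated).
simulated_measurements = measure(fake_batch, blur_sigma=1.0, noise_sigma=0.1, rng=rng)

# Mismatch study: assume one blur width during training, then check how the measurement
# changes when the data were actually produced with a different (unknown) width.
assumed_blur, true_blur = 1.0, 1.5
train_meas = measure(fake_batch, assumed_blur, 0.1, rng)
true_meas = measure(fake_batch, true_blur, 0.1, rng)
print(float(np.mean((train_meas - true_meas) ** 2)))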
iclr_2018_r1AMITFaW
In this work, we first conduct mathematical analysis on the memory, which is defined as a function that maps an element in a sequence to the current output, of three RNN cells; namely, the simple recurrent neural network (SRN), the long short-term memory (LSTM) and the gated recurrent unit (GRU). Based on the analysis, we propose a new design, called the extended-long short-term memory (ELSTM), to extend the memory length of a cell. Next, we present a multi-task RNN model that is robust to previous erroneous predictions, called the dependent bidirectional recurrent neural network (DBRNN), for the sequence-in-sequence-out (SISO) problem. Finally, the performance of the DBRNN model with the ELSTM cell is demonstrated by experimental results.
The paper proposes a new recurrent cell and a new way to make predictions for sequence tagging. It starts with a theoretical analysis of memory capabilities in different RNN cells and goes on with experiments on POS tagging and dependency parsing. There are serious presentation issues in the paper, which make it hard to understand the ideas and claims. First, I was not able to understand the message of the theoretical analysis from Section 2 and could not see how it is different from similar derivations (i.e. using a linearized version of an RNN and eigenvalue decomposition) that can be found in many other papers, including (Bengio et al, 1994) and (Pascanu et al, 2013). Novelty aside, the analysis has presentation issues. The SRN is introduced without a nonlinearity from the beginning, although normally it should have one. From the classical upper bound with a power of the largest singular value the paper concludes that “Clearly, the memory will explode if \lambda_{max} > 1”, which is not true: the memory *may* explode; having an exponentially growing upper bound does not mean that it *will* explode. The notation chosen for the LSTM is different from the standard in the deep learning community and was very hard to understand (Y_t is used instead of h_t, and h_t is used instead of c_t). This notation also does not seem consistent with the rest of the paper; for example, Equations 28 and 29 suggest that Y_t are discrete outputs and not vectors. The novel cell SLSTM-I is meant to be different from the LSTM by the addition of an “input weight vector c_i”, but it is not explained where the c_i come from. Are they trainable vectors, one for each time step? If yes, then how could such a cell be applied to sequences which are longer than the training ones? Equations 28, 29, 30 describe a very unusual kind of a Bidirectional Recurrent Network. To the best of my knowledge it is much more common to make one prediction based on future and past information, whereas the paper describes an approach in which first predictions are made separately based on the past and on the future. It is also very common to use several BiRNN layers, whereas the paper only uses one. As for the proposed DBRNN method, unfortunately, I was not able to understand it. I also have concerns regarding the experiments. Why is seq2seq without attention used? On such small datasets attention is likely to make a big difference. What’s the point of reporting results of an LSTM without output nonlinearity (Table 5)? To sum up, the paper needs a lot of work on many fronts, but most importantly, presentation should be improved.
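For reference, the notation the review calls standard writes the LSTM with hidden state h_t and cell state c_t as follows; this is the conventional formulation from the literature, not the paper's:

i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o),

c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c), \qquad
h_t = o_t \odot \tanh(c_t).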
iclr_2018_H1dh6Ax0Z
Published as a conference paper at ICLR 2018 TREEQN AND ATREEC: DIFFERENTIABLE TREE-STRUCTURED MODELS FOR DEEP REINFORCEMENT LEARNING Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the tree. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games. Furthermore, we present ablation studies that demonstrate the effect of different auxiliary losses on learning transition models.
# Update after the rebuttal Thank you for the rebuttal. The authors claim that the source of the objective mismatch comes from n-step Q-learning, and that their method is well-justified in 1-step Q-learning. However, there is still a mismatch even with 1-step Q-learning because the bootstrapped target is also computed from the TreeQN. More specifically, there can be a mismatch between the optimal action sequences computed from TreeQN at time t and t+1 if the depth of TreeQN is equal to or greater than 2. Thus, the authors' response is still not convincing to me. I like the overall idea of using a tree-structured neural network which internally performs planning as an abstraction of the Q-function, which makes implementation simpler compared to VPN. However, the particular method (TreeQN) proposed in this paper introduces a mismatch in the model learning, as mentioned above. One could argue that TreeQN is learning "abstract" planning rather than "grounded" planning. However, the fact that a reward prediction loss is used to train TreeQN significantly weakens this claim, and there is no such evidence in the paper. In conclusion, I think the research direction is worth pursuing, but the proposed modification from VPN is not well-justified. # Summary This paper proposes TreeQN and ATreeC, which perform look-ahead planning using neural networks. TreeQN simulates the future by predicting rewards/values of the future states and performs a tree backup to construct Q-values. ATreeC is an actor-critic architecture that uses a softmax over TreeQN. The architecture is trained through n-step Q-learning with a reward prediction loss. The proposed methods outperform the DQN baseline on the 2D box-pushing domain and outperform VPN on Atari games. [Pros] - The paper is easy to follow. - The application to the actor-critic setting (ATreeC) is novel, though the underlying idea was proposed by [O'Donoghue et al., Schulman et al.]. [Cons] - The proposed method has a technical issue. - The proposed idea (TreeQN) and the underlying motivation are almost the same as those of VPN [Oh et al.], but there is no in-depth discussion that shows why TreeQN is potentially better than VPN. - The comparison to VPN on Atari is not very convincing. # Novelty and Significance - The underlying motivation (planning without predicting observations), the architecture (transition/reward/value functions applied to the latent state space), and the algorithm (n-step Q-learning with a reward prediction loss) are the same as those of VPN. But the paper does not provide an in-depth discussion of this. The following are the differences that I found in this paper, so it would be important to discuss why such differences are important. 1) The paper emphasizes the "fully-differentiable tree planning" aspect, in contrast to VPN, which back-propagates only through "non-branching" trajectories during training. However, differentiating TreeQN also amounts to back-propagating through a "single" trajectory in the tree that gives the maximum Q-value. Thus, the only difference between TreeQN and VPN is that TreeQN follows the best (estimated) action sequence, while VPN follows the chosen action sequence in retrospect during back-propagation. Can you justify why following the best estimated action sequence is better than following the chosen action sequence during back-propagation (see the Technical Soundness section for discussion)? 2) TreeQN only sets targets for the final Q-value after the tree backup, whereas VPN sets targets for all intermediate value predictions in the tree.
Why is TreeQN's approach better than VPN's approach? - The application to the actor-critic setting (ATreeC) is novel, though the underlying idea of combining Q-learning with policy gradient was proposed by [O'Donoghue et al.] and [Schulman et al.]. # Technical Soundness - The proposed idea of setting targets for the final Q-value after the tree backup can potentially make temporal credit assignment difficult, because the best estimated actions during tree planning do not necessarily match the chosen actions. Suppose that TreeQN estimated "up-right-right" as the best future action sequence during the 3-step tree planning, while the agent actually ended up choosing "up-left-left" (this is possible because the agent re-plans at every step and follows an epsilon-greedy policy). Following the n-step Q-learning procedure, we end up setting the target Q-value based on the on-policy action sequence "up-left-left", while back-propagating through the "up-right-right" action sequence in TreeQN's plan. This causes incorrect temporal credit assignment, because TreeQN can potentially increase/decrease value estimates in the wrong direction due to the mismatch between the planned actions and the chosen actions. So, it is unclear why the proposed algorithm is technically correct or better than VPN's approach (i.e., back-propagating through the chosen actions in the search tree). # Quality - The comparison to VPN on Atari is not convincing because TreeQN-1 is actually (almost) equivalent to VPN-1, but the results show that TreeQN-1 performs much better than VPN on many games. Since the authors took the numbers from [Oh et al.] rather than replicating VPN, it is possible that the gap comes from implementation details (e.g., hyperparameters). # Clarity - The paper is overall easy to follow and the description of the proposed method is clear.
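To spell out the mismatch described above: as I read the training setup (this is my own rendering, not an equation from the paper), the n-step Q-learning target has the usual form

y_t = \sum_{j=0}^{n-1} \gamma^j r_{t+j} + \gamma^n \max_{a'} Q_{\mathrm{tree}}(s_{t+n}, a'), \qquad
\mathcal{L}(\theta) = \big( y_t - Q_{\mathrm{tree}}(s_t, a_t; \theta) \big)^2,

where the rewards r_{t+j} come from the actions the agent actually executed, while differentiating Q_{\mathrm{tree}}(s_t, a_t; \theta) back-propagates through the maximizing path of the tree rooted at (s_t, a_t), which need not coincide with the executed actions.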
iclr_2018_ryserbZR-
Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard. In the case of digital histopathological analysis, highly trained pathologists must review vast whole-slide images of extreme digital resolution (100,000^2 pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions. The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the generation of ground-truth localized annotations for training interpretable classification and segmentation models. We propose a method for disease localization in the context of weakly supervised learning, where only image-level labels are available during training. Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge. We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.
The authors approach the task of labeling histology images with just a single global label, with promising results on two different data sets. This is of high relevance given the difficulty in obtaining expert-annotated data. At the same time, the key elements of the presented approach remain identical to those in a previous study; the main novelty is to replace the final step of the previous architecture (that averages across a vector) with a multilayer perceptron. As such I feel that this would be interesting to present if there is interest in the overall application (and results of the 2016 CVPR paper), but not necessarily as a novel contribution to MIL and histology image classification. Comments to the authors: * The intro starts from a very high clinical level. An introduction that points out specifics of the technical aspects of this application, the remaining technical challenges, and the contribution of this work might be appreciated by some of your readers. * There is preprocessing that includes feature extraction, and part of the algorithm that includes the same feature extraction. This is somewhat confusing to me and maybe you want to review the structure of the sections. You are telling us you are using the first layer (P=1) of the ResNet50 in the method description, and you mention that you are using the pre-final layer in the preprocessing section. I assume you are using the latter, or is P=1 identical to the pre-final layer in your notation? Tell us. Moreover, not having read Durand 2016, I would appreciate a few more technical details or formal description here and there. Can you give details about the ranking method in Durand 2016, for example? * Would it make sense to discuss Durand 2016 in the baseline methods section? * To some degree this paper evaluates WELDON (Durand 2016) on new data, and compares it against an extended WELDON algorithm called CHOWDER that features the final MLP step. Results in Table 1 suggest that this leads to some 2-5% performance increase, which is a nice result. I would assume that experimental conditions (training data, preprocessing, optimization, size of ensemble) are kept constant between those two comparisons? Or is there anything of relevance that also changed (like size of the ensemble, size of training data) because the WELDON results are essentially previously generated results? Please comment in case there are differences.
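For readers unfamiliar with the aggregation being discussed, here is a rough sketch of top-instance/negative-evidence pooling followed by a small MLP head, which is my understanding of the kind of change described above (replacing an averaging step with an MLP). The tile scores, the value of k, and the layer sizes are placeholders, not the paper's actual configuration.

import numpy as np

def top_bottom_pool(tile_scores, k=5):
    # Keep the k highest (top instances) and k lowest (negative evidence) tile scores.
    s = np.sort(tile_scores)
    return np.concatenate([s[-k:], s[:k]])

def mlp_head(features, w1, b1, w2, b2):
    # Tiny MLP producing a single slide-level logit from the pooled scores.
    h = np.maximum(0.0, features @ w1 + b1)
    return h @ w2 + b2

rng = np.random.default_rng(0)
tile_scores = rng.standard_normal(1000)        # one score per tile of a whole-slide image
features = top_bottom_pool(tile_scores, k=5)   # 2k-dimensional slide descriptor

w1, b1 = rng.standard_normal((10, 16)), np.zeros(16)
w2, b2 = rng.standard_normal(16), 0.0
print(float(mlp_head(features, w1, b1, w2, b2)))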
iclr_2018_Hkbd5xZRb
Published as a conference paper at ICLR 2018 SPHERICAL CNNS Convolutional Neural Networks (CNNs) have become the method of choice for learning problems involving 2D planar images. However, a number of problems of recent interest have created a demand for models that can analyze spherical images. Examples include omnidirectional vision for drones, robots, and autonomous cars, molecular regression problems, and global weather and climate modelling. A naive application of convolutional networks to a planar projection of the spherical signal is destined to fail, because the space-varying distortions introduced by such a projection will make translational weight sharing ineffective. In this paper we introduce the building blocks for constructing spherical CNNs. We propose a definition for the spherical cross-correlation that is both expressive and rotation-equivariant. The spherical correlation satisfies a generalized Fourier theorem, which allows us to compute it efficiently using a generalized (non-commutative) Fast Fourier Transform (FFT) algorithm. We demonstrate the computational efficiency, numerical accuracy, and effectiveness of spherical CNNs applied to 3D model recognition and atomization energy regression.
Summary: The paper proposes a framework for constructing spherical convolutional networks (ConvNets) based on a novel synthesis of several existing concepts. The goal is to detect patterns in spherical signals irrespective of how they are rotated on the sphere. The key is to make the convolutional architecture rotation equivariant. Pros: + novel/original proposal justified both theoretically and empirically + well written, easy to follow + limited evaluation on a classification and regression task is suggestive of the proposed approach's potential + efficient implementation Cons: - related work, in particular the first paragraph, should compare and contrast with the closest extant work rather than merely list them - evaluation is limited; granted this is the nature of the target domain Presentation: While the paper is generally well written, it appears to conflate the definitions of the convolution and correlation operators. This point should be clarified in a revised manuscript. In Section 5 (Experiments), there are several references to S^2CNN. This naming of the proposed approach should be made clear earlier in the manuscript. As an aside, this appears a little confusing since convolution is performed first on S^2 and then SO(3). Evaluation: What are the timings of the forward/backward pass and space considerations for the Spherical ConvNets presented in the evaluation section? Please provide specific numbers for the various tasks presented. How many layers (parameters) are used in the baselines in Table 2? If indeed there are far fewer parameters used in the proposed approach, this would strengthen the argument for the approach. On the other hand, was there an attempt to add additional layers to the proposed approach for the shape recognition experiment in Sec. 5.3 to improve performance? Minor Points: - some references are missing their source, e.g., Maslen 1998 and Kostolec, Rockmore, 2007, and Ravanbakhsh, et al. 2016. - some sources for the references are presented inconsistently, e.g., Cohen and Welling, 2017 and Dieleman, et al. 2017 - some references include the first name of the authors, others use the initial - the use of "et al." in references appears inconsistent - Eqns 4, 5, 6, and 8 require punctuation - Section 4 line 2, period missing before "Since the FFT" - "coulomb matrix" --> "Coulomb matrix" - Figure 5, caption: "The red dot correcpond to" --> "The red dot corresponds to" Final remarks: Based on the novelty of the approach, and the sufficient evaluation, I recommend the paper be accepted.
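On the convolution-versus-correlation point raised above, the operation at issue is, as I understand it, of the following general form (a rotated-filter cross-correlation with output on the rotation group; this is a generic statement for orientation, not a quotation from the paper):

[\psi \star f](R) = \int_{S^2} \overline{\psi(R^{-1} x)}\, f(x)\, \mathrm{d}x, \qquad R \in SO(3).

A convolution in the strict group-theoretic sense places the inverse differently, which is presumably the distinction the review asks the authors to make explicit.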
iclr_2018_rypT3fb0b
Published as a conference paper at ICLR 2018 LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING Deep neural networks (DNNs) may contain millions, even billions, of parameters/weights, making storage and computation very expensive and motivating a large body of work aimed at reducing their complexity by using, e.g., sparsity-inducing regularization. Parameter sharing/tying is another well-known approach for controlling the complexity of DNNs by forcing certain sets of weights to share a common value. Some forms of weight sharing are hard-wired to express certain invariances; a notable example is the shift-invariance of convolutional layers. However, other groups of weights may be tied together during the learning process to further reduce the network complexity. In this paper, we adopt a recently proposed regularizer, GrOWL (group ordered weighted ℓ1), which encourages sparsity and, simultaneously, learns which groups of parameters should share a common value. GrOWL has been proven effective in linear regression, being able to identify and cope with strongly correlated covariates. Unlike standard sparsity-inducing regularizers (e.g., ℓ1, a.k.a. Lasso), GrOWL not only eliminates unimportant neurons by setting all their weights to zero, but also explicitly identifies strongly correlated neurons by tying the corresponding weights to a common value. This ability of GrOWL motivates the following two-stage procedure: (i) use GrOWL regularization during training to simultaneously identify significant neurons and groups of parameters that should be tied together; (ii) retrain the network, enforcing the structure that was unveiled in the previous phase, i.e., keeping only the significant neurons and enforcing the learned tying structure. We evaluate this approach on several benchmark datasets, showing that it can dramatically compress the network with slight or even no loss on generalization accuracy.
SUMMARY The paper proposes to apply GrOWL regularization to the tensors of parameters between each pair of layers. The groups are composed of all coefficients associated with inputs coming from the same neuron in the previous layer. The proposed algorithm is a simple proximal gradient algorithm using the proximal operator of the GrOWL norm. Given that the GrOWL norm tends to empirically reinforce a natural clustering of the vectors of coefficients which occurs in some layers, the paper proposes to cluster the corresponding parameter vectors, to replace them with their centroid, and to retrain with the constraint that some vectors are now equal. Experiments show that some sparsity is obtained by the model and that, together with the clustering, high compression of the model is obtained while maintaining or improving a good level of generalization accuracy. In comparison, plain group Lasso yields compressed versions that are too sparse and tend to degrade performance. The method is also competitive with weight decay, with much better compression. REVIEW Given the well-known issue that the Lasso tends to select correlated variables arbitrarily and in an unstable way, and given that the well-known elastic-net (which is conceptually simpler than GrOWL) was proposed to address that issue already more than 10 years ago, it would seem relevant to compare the proposed method with the group elastic-net. The proposed algorithm is a simple proximal gradient algorithm, but since the objective is non-convex it would be relevant to provide references for convergence guarantees of the algorithm. How should the step size eta be chosen? I don't see that this is discussed in the paper. In the clustering algorithm, how is the threshold value chosen? Is it chosen by cross validation? Is the performance better with clustering or without? Is the same threshold chosen for GrOWL and the Lasso? It would be useful to know which values of p, Lambda_1 and Lambda_2 are selected in the experiments. For Figures 5, 7, 8 and 9: given that the matrices do not have particular structures that need to be visualized, but that the important thing to compare is the distribution of correlations between pairs, these figures, which are hard to read and compare, would be advantageously replaced by histograms of the values of the correlations between pairs (of different variables). Indeed, right now one must rely on comparing shades of colors in the thin lines that display correlation, and it is really difficult to appreciate how much correlation, and at what level, is present in each figure. Histograms would extract exactly the relevant information... A brief description of affinity propagation, if only in the appendix, would be relevant. Why this method as opposed to more classical agglomerative clustering? A brief reminder of what the principle of weight decay is would also be relevant for the paper to be more self-contained. The proposed experiments are compelling, except for the fact that it would be nice to have a comparison with the group elastic-net. I liked figure 6.d and would vote for its inclusion in the main paper. TYPOS etc 3rd last line of sec. 3.2: "can fail at selecting" -> "fail to select" In eq. (5) theta^t should be theta^{(t)} In section 4.1 you say that the network has a single fully connected layer of hidden units -> what you mean is that the network has a single hidden layer, which is furthermore fully connected. You cite Sergey (2015) several times in section 4.2.
It seems you have exchanged the first name and last name, and the corresponding reference is quite strange. Appendix B line 5 ", while." -> incomplete sentence.
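For context on the algorithmic questions above (choice of the step size eta and the proximal operator), the update being discussed is a standard proximal gradient step with a GrOWL penalty on the rows of each layer's weight matrix. The formulas below are the generic form of such a regularizer and update, written from my understanding of GrOWL rather than copied from the paper:

\Omega_{\mathrm{GrOWL}}(\Theta) = \sum_{i=1}^{n} \lambda_i \,\big\|\theta_{[i]}\big\|_2, \qquad \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n \ge 0,

where \theta_{[i]} denotes the row of \Theta with the i-th largest \ell_2 norm, and the iteration is

\Theta^{(t+1)} = \operatorname{prox}_{\eta\,\Omega_{\mathrm{GrOWL}}}\Big( \Theta^{(t)} - \eta\, \nabla L\big(\Theta^{(t)}\big) \Big),

with \eta the step size whose selection the review asks about.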
iclr_2018_r1iuQjxCZ
ON THE IMPORTANCE OF SINGLE DIRECTIONS FOR GENERALIZATION Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activation of a single unit or some linear combination of units in response to some input) have been highlighted, but their importance has not been evaluated. Here, we connect these lines of inquiry to demonstrate that a network's reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyperparameters, and over the course of training. While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units. Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.
article summary: The authors use ablation analyses to evaluate the reliance on single coordinate-aligned directions in activation space (i.e. the activation of single units or feature maps) as a function of memorization. They find that the performance of networks that memorize more is also more affected by ablations. This result holds even for identical networks trained on identical data. The dynamics of this reliance on single directions suggest that it could be used as a criterion for early stopping. The authors discuss this observation in relation to dropout and batch normalization. Although dropout is an effective regularizer to prevent memorization of random labels, it does not prevent over-reliance on single directions. Batch normalization does appear to reduce the reliance on single directions, providing an alternative explanation for the effectiveness of batch normalization. Networks trained without batch normalization also demonstrated a significantly higher amount of class selectivity in individual units compared to networks trained with batch normalization. Highly selective units were found to be no more important than units that were not selective to a particular class. These results suggest that highly selective units may actually be harmful to network performance. * Quality: The paper presents thorough and careful empirical analyses to support their claims. * Clarity: The paper is very clear and well-organized. Sufficient detail is provided to reproduce the results. * Originality: This work is one of many recent papers trying to understand generalization in deep networks. Their description of the activation space of networks that generalize compared to those that memorize is novel. The authors thoroughly relate their findings to related work on generalization, regularization, and pruning. However, the authors may wish to relate their findings to recent reports in neuroscience observing similar phenomena (see below). * Significance: The paper provides valuable insight that helps to relate existing theories about generalization in deep networks. The insights of this paper will have a large impact on regularization, early stopping, generalization, and methods used to explain neural networks. Pros: * Observations are replicated for several network architectures and datasets. * Observations are very clearly contextualized with respect to several active areas of deep learning research. Cons: * The class selectivity measure does not capture all class-related information that a unit may pass on. Comments: * Regarding the class selectivity of single units, there is a growing body of literature in neurophysiology and neuroimaging describing similar observations where the interpretation has been that a primary role of any neural pathway is to “denoise” or cancel out the “distractor” rather than just amplifying the “signal” of interest.
* Untuned But Not Irrelevant: The Role of Untuned Neurons In Sensory Information Coding, https://www.biorxiv.org/content/early/2017/09/21/134379 * Correlated variability modifies working memory fidelity in primate prefrontal neuronal ensembles https://www.ncbi.nlm.nih.gov/pubmed/28275096 * On the interpretation of weight vectors of linear models in multivariate neuroimaging http://www.sciencedirect.com/science/article/pii/S1053811913010914 * see also LEARNING HOW TO EXPLAIN NEURAL NETWORKS https://openreview.net/forum?id=Hkn7CBaTW * Regarding the intuition in section 3.1, "The minimal description length of the model should be larger for the memorizing network than for the structure- finding network. As a result, the memorizing network should use more of its capacity than the structure-finding network, and by extension, more single directions”. Does reliance on single directions not also imply a local encoding scheme? We know that for a fixed number of units, a distributed representation will be able to encode a larger number of unique items than a local one. Therefore if this behaviour was the result of needing to use up more of the capacity of the network, wouldn’t you expect to observe more distributed representations? Minor issues: * In the first sentence of section 2.3, you say you analyzed three models and then you only list two. It seems you forgot to include ResNet trained on ImageNet.
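For concreteness about the selectivity measure questioned in the Cons above, a class selectivity index of the general form used in this line of work, together with a single-unit ablation, can be sketched as follows. The exact definition in the paper may differ, and the array shapes and data here are purely illustrative.

import numpy as np

def class_selectivity(unit_activations, labels, num_classes):
    # Selectivity index: (mu_max - mu_else) / (mu_max + mu_else), where mu_max is the
    # highest class-conditional mean activation and mu_else is the mean over the other classes.
    means = np.array([unit_activations[labels == c].mean() for c in range(num_classes)])
    mu_max = means.max()
    mu_else = np.delete(means, means.argmax()).mean()
    return (mu_max - mu_else) / (mu_max + mu_else + 1e-12)

def ablate_unit(activations, unit_index):
    # Single-direction ablation: clamp one unit's activation to zero for all inputs.
    out = activations.copy()
    out[:, unit_index] = 0.0
    return out

rng = np.random.default_rng(0)
acts = np.abs(rng.standard_normal((1000, 128)))   # (examples, units), e.g. post-ReLU activations
labels = rng.integers(0, 10, size=1000)

print(class_selectivity(acts[:, 0], labels, num_classes=10))
ablated = ablate_unit(acts, unit_index=0)         # feed this forward to measure the accuracy drop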
iclr_2018_SkrHeXbCW
In high dimensions, the performance of nearest neighbor algorithms depends crucially on structure in the data. While traditional nearest neighbor datasets consisted mostly of hand-crafted feature vectors, an increasing number of datasets comes from representations learned with neural networks. We study the interaction between nearest neighbor algorithms and neural networks in more detail. We find that the network architecture can significantly influence the efficacy of nearest neighbor algorithms even when the classification accuracy is unchanged. Based on our experiments, we propose a number of training modifications that lead to significantly better datasets for nearest neighbor algorithms. Our modifications lead to learned representations that can accelerate nearest neighbor queries by 5×.
This paper investigates learning representations for the problem of nearest neighbor (NN) search by exploring various deep learning architectural choices. The crux of the paper is the connection between NN and the angles between the closest neighbors -- the higher this angle, the more data points need to be explored to find the nearest one, and thus the higher the computational expense. Thus, the paper proposes to learn a network that tries to reduce the angles between the inputs and the corresponding class vectors in a supervised framework using softmax cross-entropy loss. Three architectural choices are investigated: (i) controlling the norm of output layers of the CNN (using batch norm essentially), (ii) removing relu so that the outputs are well-distributed in both positive and negative orthants, and (iii) normalizing the class vectors. Experiments are given on multiMNIST and Sports 1M and show improvements. Pros: 1) The paper explores different architectural choices for the deep network to some depth and shows extensive results. 2) The results do clearly demonstrate the advantage of the various choices, which is useful. 3) The theoretical connections between data angles and query times are quite interesting. Cons: 1) Unclear Problem Statement. I find the problem statement a bit vague. Standard NN search finds a data point in the database closest to a query under some distance metric. While the current paper uses cosine similarity as the distance, the deep framework is trained on class vectors using cross-entropy loss. I do not think class labels are usually assumed to be given in the standard definition of NN, and it is not clear to me how the proposed setup can accommodate NN without class labels. As such, I see this paper as perhaps proposing a classification problem and not an NN problem per se. 2) Lacks Focus The paper lacks a good organization in my opinion. Things that are perhaps technically important are moved to the Appendix. For example, I find the theoretical part of the paper (e.g., Theorem 1) quite elegant and perhaps the main innovation in this paper. However, that is moved completely to the Appendix. So it cannot really be considered a contribution. It is also not clear if those theoretical results are novel. 3) Disconnect/Unclear Assumptions There seems to be some disconnect between LSH and the deep learning architectures explored in Sections 2 and 3, respectively. Are the assumptions used in the theoretical results for LSH also assumed in the deep networks? For example, as far as I know, standard LSH works assume the projection hyperplanes are randomly chosen, and the theoretical results are based on such assumptions. It is not clear how a softmax output of a CNN, which is trained in a supervised way, follows such assumptions. It would be important if the paper could clarify such assumptions to make sure the sections are congruent. 4) No Related Work There have been several efforts to adapt deep frameworks to KNN. The paper ignores all such works. Thus, it is not clear how significant the proposed contribution is. There are also no comparisons whatsoever to competitive prior works. 5) Novelty The main contribution of this paper is basically a set of experiments looking into architectural choices. However, the results of this study do not provide any surprises. It appears that batch normalization is essential for good performance, while using ReLU is not, when one wants to use all directions for effective data encoding.
As such, the novelty and contributions of this paper are minor. Overall, while I find there are some interesting theoretical bits in this paper, it lacks focus, the experiments do not offer any surprises, and there are no comparisons with prior literature. Thus, I do not think this paper is ready to be accepted in its present form.
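As a concrete picture of the architectural choices summarized above (bounded-norm outputs, no final ReLU, unit-norm class vectors), here is a small sketch of the kind of angular/cosine softmax objective being described. This is a generic reconstruction for illustration; the dimensions, the scale factor, and the random data are arbitrary, and it is not the paper's code.

import numpy as np

def cosine_logits(embeddings, class_vectors, scale=10.0):
    # Logits proportional to the cosine between each embedding and each class vector.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    c = class_vectors / np.linalg.norm(class_vectors, axis=1, keepdims=True)   # normalized class vectors
    return scale * e @ c.T

def softmax_cross_entropy(logits, labels):
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((32, 64))      # network outputs, no final ReLU applied
class_vectors = rng.standard_normal((10, 64))
labels = rng.integers(0, 10, size=32)

loss = softmax_cross_entropy(cosine_logits(embeddings, class_vectors), labels)
# Angle to the true class vector -- the quantity the training objective implicitly shrinks.
cos_true = cosine_logits(embeddings, class_vectors, scale=1.0)[np.arange(32), labels]
angles = np.degrees(np.arccos(np.clip(cos_true, -1.0, 1.0)))
print(float(loss), float(angles.mean()))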
iclr_2018_HJBhEMbRb
The recent success of deep neural networks stems from their ability to generalize well on real data; however, Zhang et al. (2016) have observed that neural networks can easily overfit random labels. This observation demonstrates that with the existing theory, we cannot adequately explain why gradient methods can find generalizable solutions for neural networks. In this work, we use a Fourier-based approach to study the generalization properties of gradient-based methods over 2-layer neural networks with sinusoidal activation functions. We prove that if the underlying distribution of data has nice spectral properties such as bandlimitedness, then the gradient descent method will converge to generalizable local minima. We also establish a Fourier-based generalization bound for bandlimited spaces, which generalizes to other activation functions. Our generalization bound motivates a grouped version of path norms for measuring the complexity of 2-layer neural networks with ReLU activation functions. We demonstrate numerically that regularization of this group path norm results in neural network solutions that can fit true labels without losing test accuracy while not overfitting random labels.
This work proposes to study the generalization of learning neural networks via a Fourier-based method. It first gives a Fourier-based generalization bound, showing that the Rademacher complexity of functions with small bandwidth and small Fourier l_1 norm will be small. This leads to generalization for 2-layer networks with appropriately bounded size. For 2-layer networks with sine activation functions, assuming that the data distribution has nice spectral properties (i.e., bounded bandwidth), it shows that local minima of the population risk (under the isolated components condition) will have small size, and also shows that the gradient of the empirical risk is close to that of the population risk. Empirical results show that the size of the networks learned on random labels is larger than that of those learned on true labels, and that a regularizer implied by their Fourier-based generalization bound can effectively reduce the generalization gap on random labels. The idea of applying the Fourier-based method to generalization is interesting. However, the theoretical results are not very satisfactory. -- How do the bounds here compare to those obtained by directly applying Rademacher complexity to the neural network functions? -- How to interpret the isolated components condition in Theorem 4? Basically, it means that B(P_X) should be a small constant. What type of distributions of X will be a good example? -- It is not easy to put together the conclusions in Sections 6.1 and 6.2. Suppose SGD leads to a local minimum of the empirical loss. One can claim that this is an approximate local minimum (i.e., small gradient) by Corollary 3. But to apply Theorem 4, one will need a version of Theorem 4 for approximate local minima. Also, one needs to argue that the local minimum obtained by SGD will satisfy the isolated components condition. The argument in Section 8.6 is not convincing, i.e., there is potentially a large approximation error in (41) and one cannot claim that Lemma 1 and Theorem 4 are still valid without the isolated components condition.
iclr_2018_HyfHgI6aW
Published as a conference paper at ICLR 2018 MEMORY AUGMENTED CONTROL NETWORKS Planning problems in partially observable environments cannot be solved directly with convolutional networks and require some form of memory. But, even memory networks with sophisticated addressing schemes are unable to learn intelligent reasoning satisfactorily due to the complexity of simultaneously learning to access memory and plan. To mitigate these challenges we propose the Memory Augmented Control Network (MACN). The network splits planning into a hierarchical process. At a lower level, it learns to plan in a locally observed space. At a higher level, it uses a collection of policies computed on locally observed spaces to learn an optimal plan in the global environment it is operating in. The performance of the network is evaluated on path planning tasks in environments in the presence of simple and complex obstacles and in addition, is tested for its ability to generalize to new environments not seen in the training set.
The paper presents a method for navigating in an unknown and partially observed environment. The proposed approach splits planning into two levels: 1) local planning based on the observed space and 2) a global planner which receives the local plan, observation features, and access to an addressable memory to decide which action to select and what to write into memory. The contribution of this work is the use of value iteration networks (VINs) for local planning on a locally observed map, whose output is fed into a learned global controller that references history through a differentiable neural computer (DNC) and uses the local policy and observation features to select an action and update the memory. The core concept of a learned local planner providing additional cues for a global, memory-based planner is a clever idea and the thorough analysis clearly demonstrates the benefit of the approach. The proposed method is tested against three problems: a gridworld, a graph search, and a robot environment. In each case the proposed method is more performant than the baseline methods. The ablation study of using an LSTM instead of the DNC and the direct comparison to CNN + LSTM support the authors' hypothesis about the benefits of the two components of their method. While the authors compare to DRL methods with limited horizon (length 4), there is no comparison to memory-based RL techniques. Furthermore, a comparison to related memory-based visual navigation techniques on domains for which they are applicable should be considered, as such an analysis would illuminate the relative performance over the overlapping portions of the problem domains. For example, analysis of the metric map approaches on the grid world or of MACN on their tested environments. Prior work in visual navigation in partially observed and unknown environments has used addressable memory (e.g., Oh et al.) and used VINs (e.g., Gupta et al.) to plan, as noted. In discussing these methods, the authors state that these works are not comparable as they operate strictly on discretized 2d spaces. However, it appears to the reviewer that several of these methods can be adapted to higher dimensions and be applicable to at least a subclass of the problems (for the euclidean/metric map approaches) or the full class (for Oh et al., which appears to be capable of solving non-euclidean tasks like the graph search problem). If this assessment is correct, the authors should differentiate between these approaches more thoroughly and consider empirical comparisons. The authors should further consider contrasting their approach with "Neural SLAM" by Zhang et al. A limitation of the presented method is the requirement that the observation "reveals the labeling of nearby states." This assumption holds in each of the examples presented: the neighborhood map in the gridworld and graph examples and the lidar sensor in the robot navigation example. It would be informative for the authors to highlight this limitation and/or identify how to adapt the proposed method under weaker assumptions, such as a sensor that doesn't provide direct metric or connectivity information (e.g., an RGB camera). Many details of the paper are missing and should be included to clarify the approach and ensure reproducible results. The reviewer suggests both providing more details in the main section of the paper and giving the precise architecture, including hyperparameters, in the supplementary materials section.
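To make the local-planning component concrete, below is a minimal numpy sketch of value iteration on a small locally observed occupancy grid. It only illustrates the kind of computation a VIN approximates with convolutions; the grid, rewards, discount, and iteration count are all made up, and this is not the authors' implementation.

```python
import numpy as np

# 5x5 locally observed occupancy grid (True = obstacle) with a goal cell; all values illustrative.
occ = np.zeros((5, 5), dtype=bool)
occ[2, 1:4] = True
reward = np.full((5, 5), -0.01)
reward[4, 4] = 1.0
gamma = 0.9
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]             # up, down, left, right

V = np.zeros((5, 5))
for _ in range(50):                                      # Bellman backups restricted to the observed patch
    Q = np.empty((4, 5, 5))
    for a, (dy, dx) in enumerate(actions):
        ny = np.clip(np.arange(5)[:, None] + dy, 0, 4)   # next-row index for every cell
        nx = np.clip(np.arange(5)[None, :] + dx, 0, 4)   # next-col index for every cell
        nxt = np.where(occ[ny, nx], V, V[ny, nx])        # bumping into an obstacle means staying put
        Q[a] = reward + gamma * nxt
    V = Q.max(axis=0)

local_policy = Q.argmax(axis=0)   # greedy action per cell of the local map; MACN would feed
print(np.round(V, 2))             # this local plan (plus observation features) to the DNC-based controller
```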
iclr_2018_r111KtCp-
We study the precise mechanisms which allow autoencoders to encode and decode a simple geometric shape, the disk. In this carefully controlled setting, we are able to describe the specific form of the optimal solution to the minimisation problem of the training step. We show that the autoencoder indeed approximates this solution during training. Secondly, we identify a clear failure in the generalisation capacity of the autoencoder, namely its inability to interpolate data. Finally, we explore several regularisation schemes to resolve the generalisation problem. Given the great attention that has been recently given to the generative capacity of neural networks, we believe that studying in depth simple geometric cases sheds some light on the generation process and can provide a minimal requirement experimental setup for more complex architectures.
The paper considers a toy problem: the space of images of discs of variable radius - a one-dimensional manifold. An autoencoder based on convolutional layers with ReLU and a 1D embedding is experimented with. It is shown that 1) if the bias is not included, the resulting function is homogeneous (meaning f(ax)=af(x)), and so it fails because the 1D representation should be the radius, and the relationship from radius to image is more complex than a homogeneous function; and 2) if we include the bias and L2-regularise only the encoder weights, it works better in terms of interpolation for a limited data sample. The thing is that 1) is trivial (the composition of homogeneous functions is homogeneous... so their proof is overly messy, by the way). Then, they continue by further analysing (see proposition 2) the solution for this case. Such analysis does not seem to shed much light on anything relevant, given that we know the autoencoder fails in this case due to the trivial proposition 1. Another point: since the homogeneous-function problem will not arise for other non-linearities (such as the sigmoid), the focus on the bias as the culprit seems arbitrary. Then, the story about interpolation and regularisation is kind of orthogonal, and is solved by an arbitrary regularisation scheme. The lesson learned from this case is basically the second-to-last paragraph of section 3.2. In other words, it just works. Since it's a toy problem anyway, the insights seem somewhat trivial. On the plus side, such a toy problem seems like it might lead somewhere interesting. I'd like to see a similar setup but with a suite of toy problems, e.g. vary the aspect ratio of an oval (rather than a disc), vary the position, intensity, etc.
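To see why point 1) is immediate, here is a minimal numpy check (made-up layer sizes, random weights) that a bias-free ReLU network is positively homogeneous:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8))     # made-up layer sizes; no bias terms anywhere
W2 = rng.standard_normal((16, 16))
W3 = rng.standard_normal((1, 16))

def f(x):
    h = np.maximum(W1 @ x, 0.0)       # ReLU(W1 x)
    h = np.maximum(W2 @ h, 0.0)       # ReLU(W2 h)
    return W3 @ h                     # linear output

x = rng.standard_normal(8)
a = 3.7                               # any positive scale
print(np.allclose(f(a * x), a * f(x)))   # True: positive homogeneity, f(ax) = a f(x)
# With biases (or sigmoids) this identity breaks, which is why the no-bias failure case
# follows directly from composing homogeneous maps.
```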
iclr_2018_r1nzLmWAb
Action segmentation as a milestone towards building automatic systems to understand untrimmed videos has received considerable attention in recent years. It is typically modeled as a sequence labeling problem but contains intrinsic and substantial differences from text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet), which has an encoder-decoder architecture: the encoder consists of a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that are able to learn and memorize long-term action dependencies after the encoding stage. Our model is simple but extremely effective in terms of video sequence labeling. The experimental results on three public action segmentation datasets have shown that the proposed model achieves superior performance over the state of the art.
I will be upfront: I already reviewed this paper when it was submitted to NIPS 2017, so this review is based heavily on the NIPS submission. I am quite concerned that this paper has been resubmitted as is, word by word, character by character. The authors could have benefited from the feedback they obtained from the reviewers of their last submission to improve their paper, but nothing has been done. Even points that were very easy to address, like the bolding errors (see below), have been left uncorrected in the paper. The proposed paper describes a method for video action segmentation, a task where the video must be temporally densely labeled by assigning an action (sub-)class to each frame. The method proceeds by extracting frame-level features using convolutional networks and then passing a temporal encoder-decoder in 1D over the video, using fully supervised training. On the positive side, the method has been tested on 3 different datasets, outperforming the baselines (recent methods from 2016) on 2 of them. My biggest concern with the paper is novelty. A significant part of the paper is based on reference [Lea et al. 2017], the differences being quite incremental. The frame-level features are the same as in [Lea et al. 2017], and the basic encoder-decoder strategy is also taken from [Lea et al. 2017]. The encoder is also the same. Even details are reproduced, such as the choice of normalized ReLU activations. The main difference seems to be that the decoder is not convolutional, but a recurrent network. The encoder-decoder architecture seems to be surprisingly shallow, with only K=2 layers on each side. The paper is well written and can be easily understood. However, a quite large amount of space is wasted on obvious and known content, as for example the basic equation for a convolutional layer (equation (1)) and the following half page of text and equations for LSTM and bi-directional LSTM networks. This is very well known and the space could be used for more details on the paper's contributions. While the paper is generally well written, there are a couple of exceptions in the form of ambiguous sentences, for example the lines before section 3. There is a bolding error in table 2, where the proposed method is not state of the art (as indicated) w.r.t. the accuracy metric. To sum it up, the positive aspect of nicely executed experiments is contrasted by the low novelty of the method. To be honest, I am not totally sure whether the contribution of the paper should be considered a new method or architectural optimizations of an existing one. This is corroborated by the experimental results on the first two datasets (tables 2 and 3): on 50 Salads, where ref. [Lea et al. 2017] currently seems to obtain state of the art performance, the improvement obtained by the proposed method allows it to get state of the art performance. On GTEA, where [Lea et al. 2017] does not currently deliver state of the art performance, the proposed method performs (slightly) better than [Lea et al. 2017] but does not obtain state of the art performance. On the third dataset, JIGSAWS, reference [Lea et al. 2017] has not been tested, which is peculiar given the closeness.
iclr_2018_SJOl4DlCZ
Suppose a deep classification model is trained with samples that need to be kept private for privacy or confidentiality reasons. In this setting, can an adversary obtain the private samples if the classification model is given to the adversary? We call this reverse engineering against the classification model the Classifier-to-Generator (C2G) Attack. This situation arises when the classification model is embedded into mobile devices for offline prediction (e.g., object recognition for the automatic driving car and face recognition for mobile phone authentication). For the C2G attack, we introduce a novel GAN, PreImageGAN. In PreImageGAN, the generator is designed to estimate the sample distribution conditioned by the preimage of classification model f, P(X|f(X) = y), where X is the random variable on the sample space and y is the probability vector representing the target label arbitrarily specified by the adversary. In experiments, we demonstrate PreImageGAN works successfully with hand-written character recognition and face recognition. In character recognition, we show that, given a recognition model of hand-written digits, PreImageGAN allows the adversary to extract alphabet letter images without knowing that the model is built for alphabet letter images. In face recognition, we show that, when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of the face of the individuals.
The paper proposes the use of a GAN to learn the distribution of image classes from an existing classifier, which is a nice and straightforward idea. From the point of view of forensic analysis of a classifier, it represents a more principled strategy than a brute force attack based on the classification of a database and some conditional density estimation of some intermediate image features. Unfortunately, the experiments are inconclusive. Quality: The key question of the proposed scheme is the role of the auxiliary dataset. In the EMNIST experiment, the results for the "exact same" and "partly same" situations are good, but it seems that for the "mutually exclusive" situation the generated samples look like letters, not numbers, which raises questions about the interpolation ability of the generator. In the FaceScrub experiment it is even more difficult to interpret the results, basically because we do not even know the full list of person identities. It seems that generated images contain only parts of the auxiliary images related to the most discriminative features of the given classifier. Does this imply that the GAN models a biased probability distribution of the image class? What is the result when the auxiliary dataset comes from a different kind of image? Due to the difficulty of evaluating GAN results, more experiments are needed to determine the quality and significance of this work. Clarity: The paper is well structured and written, but Sections 1-4 could be significantly shorter to leave more space for additional and more conclusive experiments. Some typos in Appendix A should be corrected. Originality: the paper is based on a very smart and interesting idea and a straightforward use of GANs. Significance: If additional simulations confirm the authors' claims, this work can represent a significant contribution to the forensic analysis of discriminative classifiers.
iclr_2018_Bk9zbyZCZ
Published as a conference paper at ICLR 2018 NEURAL MAP: STRUCTURED MEMORY FOR DEEP REINFORCEMENT LEARNING A critical component to enabling intelligent reasoning in partially observable environments is memory. Despite this importance, Deep Reinforcement Learning (DRL) agents have so far used relatively simple memory architectures, with the main methods to overcome partial observability being either a temporal convolution over the past k frames or an LSTM layer. More recent work (Oh et al., 2016) has gone beyond these architectures by using memory networks which can allow more sophisticated addressing schemes over the past k frames. But even these architectures are unsatisfactory due to the reason that they are limited to only remembering information from the last k frames. In this paper, we develop a memory system with an adaptable write operator that is customized to the sorts of 3D environments that DRL agents typically interact with. This architecture, called the Neural Map, uses a spatially structured 2D memory image to learn to store arbitrary information about the environment over long time lags. We demonstrate empirically that the Neural Map surpasses previous DRL memories on a set of challenging 2D and 3D maze environments and show that it is capable of generalizing to environments that were not seen during training.
This paper presents a fully differentiable neural architecture for mapping and path planning for navigation in previously unseen environments, assuming near perfect* relative localization provided by velocity. The model is more general than the cognitive maps (Gupta et al, 2017) and builds on the NTM/DNC or related architectures (Graves et al, 2014, 2016, Rae et al, 2017) thanks to the 2D spatial structure of the associative memory. Basically, it consists of a 2D-indexed grid of features (the map) M_t that can be summarized at each time point into a read vector r_t, which is used to extract a context c_t for the current agent state s_t, to compute (thanks to an LSTM/GRU) an updated write vector w_{t+1}^{x,y} at the current position, and to update the map using that write vector. The position {x,y} is a binned representation of discrete or continuous coordinates. The absolute coordinate map can be replaced by a relative ego-centric map that is shifted (just like in Gupta et al, 2017) as the agent moves. The experiments are exhaustive and include remembering the goal location with or without cues (similarly to Mirowski et al, 2017, not cited) in simple mazes of size 4x4 up to 8x8 in the 3D Doom environment. The most important aspect is the capability to build a feature map of previously unseen environments. This paper, showing excellent and important work, was already published on arXiv 9 months ago and is widely cited. It has been improved since, through different sets of experiments and apparently a clearer presentation, but the ideas are the same. I wonder how it is possible that the paper has not been accepted at ICML or NIPS (assuming that it was actually submitted there). What are the motivations of the reviewers who rejected the paper - are they trying to slow down competing research, or are they ignorant, and is the peer review system broken? I quite like the formulation of the NIPS ratings: "if this paper does not get accepted, I am considering boycotting the conference". * The noise model experiment in Appendix D is commendable, but the noise model is somewhat unrealistic (very small variance, zero-mean Gaussian) and assumes only drift in x and y, not along the orientation. While this makes sense in grid world environments or rectilinear mazes, it does not correspond to realistic robotic navigation scenarios with wheel skid, missing measurements, etc... Perhaps showing examples of trajectories with drift added would help convince the reader (there is no space restriction in the appendix).
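As a rough illustration of the read / context / write cycle summarized above, here is a numpy sketch; the shapes, the mean-pooled read, and the plain linear/tanh maps are my own simplifications and only stand in for the paper's learned operators:

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, S = 8, 15, 15, 16                 # map channels, map height/width, state feature size
M = np.zeros((C, H, W))                    # the 2D spatially structured memory M_t
W_q = rng.standard_normal((C, S + C))      # stand-ins for the learned parameters
W_w = rng.standard_normal((C, S + 3 * C))

def neural_map_step(M, s, pos):
    x, y = pos
    r = M.mean(axis=(1, 2))                              # global read vector r_t
    q = W_q @ np.concatenate([s, r])                     # query built from state and read
    scores = np.einsum('c,chw->hw', q, M)                # dot product with every map cell
    att = np.exp(scores - scores.max()); att /= att.sum()
    c = np.einsum('hw,chw->c', att, M)                   # context vector c_t
    w_new = np.tanh(W_w @ np.concatenate([s, r, c, M[:, y, x]]))   # write vector w_{t+1}^{x,y}
    M = M.copy(); M[:, y, x] = w_new                     # write only at the agent's (x, y) bin
    return M, np.concatenate([r, c, w_new])              # features the policy would consume

M, feats = neural_map_step(M, rng.standard_normal(S), (7, 7))
print(feats.shape)
```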
iclr_2018_HyydRMZC-
Published as a conference paper at ICLR 2018 SPATIALLY TRANSFORMED ADVERSARIAL EXAMPLES Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the L_p distance for penalizing perturbations. Researchers have explored different defense methods to defend against such adversarial attacks. While the effectiveness of L_p distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works. Perturbations generated through spatial transformation could result in large L_p distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems. This potentially provides a new direction in adversarial example generation and the design of corresponding defenses. We visualize the spatial transformation based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation. Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.
This paper creates adversarial images by imposing a flow field on an image such that the new spatially transformed image fools the classifier. They minimize a total variation loss in addition to the adversarial loss to create perceptually plausible adversarial images; this is claimed to be better than the normal L2 loss functions. Experiments were done on MNIST, CIFAR-10, and ImageNet, which is very useful to see that the attack works with high dimensional images. However, some numbers on ImageNet would be helpful, as its high resolution makes it potentially different from the low-resolution MNIST and CIFAR. It is a bit concerning to see some parts of Fig. 2. Some of Fig. 2 (especially (b)) became so dotted that it no longer looks like an adversarial example that a human eye cannot detect. And model B in the appendix looks pretty much like a normal model. It might need some experiments, either human studies, or a test against an adversarial detector, to ensure that the resulting adversarials are still indeed adversarials to the human eye. Another good thing to run would be the 3x3 average pooling restoration mechanism in the following paper: Xin Li, Fuxin Li. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics. ICCV 2017, to see whether this new type of adversarial example can still be restored by 3x3 average pooling of the image (I suspect that this is harder to restore by such a simple method than the previous FGSM or OPT-type, but we need some numbers). I also don't think FGSM and OPT are this bad in Fig. 4. Are the authors sure that if more regularization is used these 2 methods no longer fool the corresponding classifiers? I like the experiment showing the attention heat maps for different attacks. This experiment shows that the spatial transforming attack (stAdv) changes the attention of the classifier for each target class, and is robust to adversarially trained Inception v3 unlike other attacks like FGSM and CW. I would likely upgrade to a 7 if those concerns are addressed. After rebuttal: I am happy with the additional experiments and would like to upgrade to an accept.
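For reference, the 3x3 average-pooling restoration suggested above is essentially a one-liner; a sketch of the check that could be run (the classifier, x_adv, and labels are placeholders, not results from the paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def restore(images):
    """Smooth each image with a 3x3 local average (per channel), the restoration step of
    Li & Li (ICCV 2017); images: (N, H, W, C) floats in [0, 1]."""
    return uniform_filter(images, size=(1, 3, 3, 1), mode='nearest')

# Hypothetical usage: compare classifier accuracy on stAdv examples before/after smoothing.
# acc_adv      = (classifier(x_adv).argmax(1) == y_true).mean()
# acc_restored = (classifier(restore(x_adv)).argmax(1) == y_true).mean()
x_adv = np.random.default_rng(0).random((4, 32, 32, 3))
print(restore(x_adv).shape)
```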
iclr_2018_SJyEH91A-
Published as a conference paper at ICLR 2018 LEARNING WASSERSTEIN EMBEDDINGS The Wasserstein distance received a lot of attention recently in the community of machine learning, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models. However, its use is still limited by a heavy computational cost. Our goal is to alleviate this problem by providing an approximation mechanism that allows to break its inherent complexity. It relies on the search of an embedding where the Euclidean distance mimics the Wasserstein distance. We show that such an embedding can be found with a siamese architecture associated with a decoder network that allows to move from the embedding space back to the original input space. Once this embedding has been found, computing optimization problems in the Wasserstein space (e.g. barycenters, principal directions or even archetypes) can be conducted extremely fast. Numerical experiments supporting this idea are conducted on image datasets, and show the wide potential benefits of our method.
This paper proposes approximating the Wasserstein distance between normalized greyscale images based on a learnable approximately isometric embedding of images into Euclidean space. The paper is well written with clear and generally thorough prose. It presents a novel, straightforward and practical solution to efficiently computing Wasserstein distances and performing related image manipulations. Major comments: It sounds like the same image may be present in the training set and eval set. This is methodologically suspect, since the embedding may well work better for images seen during training. This affects all experimental results. I was pleased to see a comparison between using exact and approximate Wasserstein distances for image manipulation in Figure 5, since that's a crucial aspect of whether the method is useful in practice. However the exact computation (OT LP) appears to be quite poor. Please explain why the approximation is better than the exact Wasserstein distance for interpolation. Relatedly, please summarize the argument in Cuturi and Peyre that is cited ("as already explained in"). Minor comments: In sections 3.1 and 4.1, "histogram" is used to mean normalized-to-sum-to-1 images, which is not the conventional meaning. It would help to pick one of "Wasserstein Deep Learning" and "Deep Wasserstein Embedding" and use it and the acronym consistently throughout. "Disposing of a decoder network" in section 3.1 should be "using a decoder network"? In section 4.1, the architectural details could be clarified. What size are the input images? What type of padding for the convolutions? Was there any reason behind the chosen architecture? In particular the use of dense layers followed by convolutional layers seems peculiar. It would be helpful to say explicitly what "quadratic ground metric" means (i.e. W_2, I presume) in section 4.2 and elsewhere. It would be helpful to give a sense of scale for the numbers in Table 1, e.g. give the 95th percentile Wasserstein distance. Perhaps use the L2 distance passed through a 1D-to-1D learned warping as a baseline. Mention that OT stands for optimal transport in section 4.3. Suggest mentioning "there is no reason for a Wasserstein barycenter to be a realistic sample" in the main text when first discussing barycenters.
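For concreteness, the core training signal of the siamese embedding can be sketched as follows; the linear encoder is only a placeholder for the paper's network and the ground-truth Wasserstein distances are assumed to be precomputed by an exact OT solver:

```python
import numpy as np

rng = np.random.default_rng(0)
params = 0.01 * rng.standard_normal((50, 28 * 28))   # toy linear encoder standing in for phi

def embed(x):
    """Placeholder encoder phi: flatten the normalized image and apply one linear map."""
    return params @ x.reshape(-1)

def pair_loss(x1, x2, w_true):
    """Squared error between the Euclidean distance in embedding space and the precomputed
    Wasserstein distance W(x1, x2) - the quantity the siamese branch is trained to match."""
    d = np.linalg.norm(embed(x1) - embed(x2))
    return (d - w_true) ** 2

x1, x2 = rng.random((28, 28)), rng.random((28, 28))
x1, x2 = x1 / x1.sum(), x2 / x2.sum()        # "histograms": images normalized to sum to 1
print(pair_loss(x1, x2, w_true=1.3))         # w_true would come from an exact OT computation
# The paper additionally trains a decoder with reconstruction terms; those are omitted here.
```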
iclr_2018_H135uzZ0-
MIXED PRECISION TRAINING OF CONVOLUTIONAL NEURAL NETWORKS USING INTEGER OPERATIONS The state-of-the-art (SOTA) for mixed precision training is dominated by variants of low precision floating point operations, and in particular FP16 accumulating into FP32. On the other hand, while a lot of research has also happened in the domain of low and mixed-precision Integer training, these works either present results for non-SOTA networks (for instance only AlexNet for ImageNet-1K), or relatively small datasets (like CIFAR-10). In this work, we train state-of-the-art visual understanding neural networks on the ImageNet-1K dataset, with Integer operations on General Purpose (GP) hardware. In particular, we focus on Integer Fused-Multiply-and-Accumulate (FMA) operations which take two pairs of INT16 operands and accumulate results into an INT32 output. We propose a shared exponent representation of tensors, and develop a Dynamic Fixed Point (DFP) scheme suitable for common neural network operations. The nuances of developing an efficient integer convolution kernel are examined, including methods to handle overflow of the INT32 accumulator. We implement CNN training for ResNet-50, GoogLeNet-v1, VGG-16 and AlexNet; and these networks achieve or exceed SOTA accuracy within the same number of iterations as their FP32 counterparts without any change in hyper-parameters and with a 1.8X improvement in end-to-end training throughput. To the best of our knowledge these results represent the first INT16 training results on GP hardware for the ImageNet-1K dataset using SOTA CNNs and achieve the highest reported accuracy using half precision representation.
This paper describes an implementation of reduced precision deep learning using a 16-bit integer representation. This field has recently seen a lot of publications proposing various methods to reduce the precision of weights and activations. These schemes have generally achieved close-to-SOTA accuracy for small networks on datasets such as MNIST and CIFAR-10. However, for larger networks (ResNet, VGG, etc.) on large datasets such as ImageNet, a significant accuracy drop is reported. In this work, the authors show that a careful implementation of mixed-precision dynamic fixed point computation can achieve SOTA on 4 large networks on the ImageNet-1K dataset. Using INT16 (as opposed to FP16) has the advantage of enabling the use of new SIMD mul-acc instructions such as QVNNI16. The reported accuracy numbers show convincingly that INT16 weights and activations can be used without loss of accuracy in large CNNs. However, I was hoping to see a direct comparison between FP16 and INT16. The paper is written clearly and the English is fine.
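To illustrate what a shared-exponent dynamic fixed point representation amounts to, here is a minimal numpy sketch of quantizing a tensor to INT16 with one shared power-of-two exponent and dequantizing it; this is my own simplification for illustration, not the authors' kernel:

```python
import numpy as np

def to_dfp16(x):
    """Quantize a float tensor to INT16 with one shared power-of-two exponent."""
    max_abs = np.abs(x).max()
    # Smallest exponent e such that x / 2**e fits in [-32768, 32767].
    e = int(np.ceil(np.log2(max_abs / 32767.0))) if max_abs > 0 else 0
    q = np.clip(np.round(x / 2.0 ** e), -32768, 32767).astype(np.int16)
    return q, e

def from_dfp16(q, e):
    return q.astype(np.float32) * 2.0 ** e

x = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
q, e = to_dfp16(x)
print(np.max(np.abs(x - from_dfp16(q, e))))   # quantization error, bounded by 2**(e-1)
# Products of two such INT16 tensors accumulate into INT32, which is where the overflow
# handling discussed in the paper becomes necessary.
```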
iclr_2018_SJVHY9lCb
We propose a "Learning to Select" problem that selects the best among the flexible size candidates. This makes decisions based not only on the properties of the candidate, but also on the environment in which they belong to. For example, job dispatching in the manufacturing factory is a typical "Learning to Select" problem. We propose Variable-Length CNN which combines the classification power using hidden features from CNN and the idea of flexible input from Learning to Rank algorithms. This not only can handles flexible candidates using Dynamic Computation Graph, but also is computationally efficient because it only builds a network with the necessary sizes to fit the situation. We applied the algorithm to the job dispatching problem which uses the dispatching log data obtained from the virtual fine-tuned factory. Our proposed algorithm shows considerably better performance than other comparable algorithms.
The paper proposes a new framework called `Learning to select', in which the best candidate needs to be identified in a decision-making process such as job dispatching. A CNN architecture, called `Variable-Length CNN', is designed to solve this problem. My major concern is about the definition of the proposed concept of `learning-to-select'. Essentially, I have not seen its key difference from the classification problem. While `even in the case of completely identical candidates, the label can be 1 in some situations, and in some other situations the label can be 0', why not include such `situations' in your feature vector (i.e., x)? Once you do that, the gap between learning to select and classification will vanish. If this is not doable, more discussion is needed, especially on what the so-called `situations' are. Furthermore, the application scope of the proposed framework is not very well discussed. If it is restricted to job dispatching scenarios, why do we need a new concept "learning to select"? The proposed model looks quite straightforward. A standard CNN is able to handle variable-length input, as is done in many NLP tasks. Dynamic computation graphs are not new either. In this sense, the technical novelty of this work is somewhat limited. The experiments are weak in that the data are simulated and the baselines are not strong. I have not gained enough insight into why the proposed model could outperform the alternative approaches. More discussion and case studies are sorely needed.
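The reduction-to-classification argument above can be made concrete with a small sketch; the dimensions are made up and a single linear scorer stands in for the proposed CNN:

```python
import numpy as np

def select(candidates, situation, w):
    """Score each candidate on [candidate_features ; situation_features] and pick the best.
    candidates: (n, d_c) for a variable n; situation: (d_s,); w: (d_c + d_s,)."""
    feats = np.hstack([candidates, np.tile(situation, (len(candidates), 1))])
    scores = feats @ w
    probs = np.exp(scores - scores.max()); probs /= probs.sum()   # softmax over the n candidates
    return probs.argmax(), probs

rng = np.random.default_rng(0)
idx, probs = select(rng.standard_normal((7, 5)), rng.standard_normal(3), rng.standard_normal(8))
print(idx, probs.round(2))   # identical candidates can only differ through the situation part
```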
iclr_2018_HyIFzx-0b
In this work we present BinaryFlex, a neural network architecture that learns weighting coefficients of a predefined orthogonal binary basis instead of the conventional approach of learning the convolutional filters directly. We have demonstrated the feasibility of our approach for complex computer vision datasets such as ImageNet. Our architecture trained on ImageNet is able to achieve top-5 accuracy of 65.7% while being around 2x smaller than binary networks capable of achieving similar accuracy levels. By using a deterministic basis that can be generated on-the-fly very efficiently, our architecture offers a great deal of flexibility in memory footprint when deploying on constrained microcontroller devices.
This paper proposes using a set of orthogonal bases and their combination to represent convolutional kernels. To learn the set of bases, the paper uses an existing algorithm (OSVF). -- Related Work Related work suggests there is redundancy in the number of parameters (according to Denil et al.) but that training can be done by learning a subset directly without a drop in accuracy. I am not really sure this is strictly correct, as many approaches (including Denil et al.) suggest the additional parameters are needed to help the optimization process (and it is therefore hard to directly learn a small model). As in the clarity point below, please be consistent. Acronyms are not properly defined. -- Method / Clarity It is nice to read section 3.1 but it is at the same time probably redundant, as it does not add any value (at least the first two paragraphs including Eq. 1). Reading the text, it is not clear to me why there is a lower number of parameters to be updated. To the best of my understanding so far in the explanation, the number of parameters is potentially the same but represented using a single bit. Rephrasing this section would probably improve readability. Runtime is potentially reduced, but it is not clear this holds on current hardware. Section 3.2 is nice as a short overview but takes up more space than the actual proposal (so I get lost). Figures 2 and 3: I am surprised the FlexModule (a building block of BinaryFlex) is not mentioned in the BinaryFlex architecture, and sparse blocks are not defined anywhere. It would be nice to be consistent here. Also note that filter banks, among other details, are not defined. Now, w and b in eq 2 are meant to be binary, is that correct? The text defines them as real valued, so this is confusing. - From the explanations in the text, it is not clear to me how the basis and the weights are learned (except using backprop). How do we actually generate the filter bank: from scratch, or after some pretraining / from a preloaded model? What is the difference between BinaryFlex models and how do I generate them when replicating these results? Is it correct to assume f_k is a pretrained kernel that is going to be approximated? -- more on clarity I would also appreciate rephrasing some parts of the paper. For instance, the paragraph under section 4.1 is confusing. There is no consistency in naming / acronyms, and the paragraph seems not to be in the right order. Note that the paragraph starts talking about ImageNet and then suggests different schedules for different datasets. The naming of state-of-the-art methods is not consistent. Also note that acronyms are later used (such as BWN) but not defined here. This should be easy to improve. I guess Figure 4 needs clarification. What are the axes? Why squares and circles? Same for Figure 5. Overall, the text needs reviewing. There are typos all over the text. I think ImageNet is not a task but classification using ImageNet. -- Results I find it hard to follow the results. Section 4.1.1 suggests accuracy is comparable when constraints are relaxed and then only a 7% drop in accuracy for a 4.5x model reduction. I have not been able to match these numbers with those in table 2. How do I get to see 7% lower accuracy for BinaryFlex-1.6? Results suggest a model under 2MB is convenient for use on ARM; is this actually a fact (was it tested on an ARM device?) or just a guess? This is also a point made in the introduction and I would expect at least an example of running time there (showing the benefit compared to competitors).
It is also interesting that, in the text, the ARM device is said to have 512KB, while in the experiments there is no model reaching that lower bound. I would like to see an experiment on ImageNet where the proposed BinaryFlex leads to a model of approximately 7.5MB and see what the performance is for that model (so comparable in size with the state of the art). I missed details of the exact implementation for the other datasets (as said in the paper). There are modifications that are obscure, and the benefits in model size (at least compared to a baseline) are not mentioned. Why?
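To illustrate the basic idea under review (a convolution filter represented as a weighted combination of fixed binary bases), here is a small numpy sketch; it uses a random +-1 basis and a least-squares fit purely for illustration, whereas the paper generates orthogonal bases on the fly and learns the weights by backprop:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_basis = 3, 16                                       # 3x3 filter, 16 binary basis filters
basis = rng.choice([-1.0, 1.0], size=(n_basis, k * k))   # fixed, never-learned +-1 patterns
f = rng.standard_normal((k, k))                          # a "target" convolution filter

# Weights w such that sum_i w_i * basis_i approximates f (least squares here for illustration;
# in BinaryFlex only these scalars would be learned/stored, the basis is regenerated on demand).
w, *_ = np.linalg.lstsq(basis.T, f.reshape(-1), rcond=None)
f_hat = (w @ basis).reshape(k, k)
print(np.abs(f - f_hat).max())   # reconstruction error; with >= k*k independent bases it is ~0
```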
iclr_2018_rkQsMCJCb
Most existing GAN architectures that generate images use transposed convolution or resize-convolution as their upsampling algorithm from lower to higher resolution feature maps in the generator. We argue that this kind of fixed operation is problematic for GANs to model objects that have very different visual appearances. We propose a novel adaptive convolution method that learns the upsampling algorithm based on the local context at each location to address this problem. We modify a baseline GAN architecture by replacing normal convolutions with adaptive convolutions in the generator. Experiments on the CIFAR-10 dataset show that our modified models improve the baseline model by a large margin. Furthermore, our models achieve state-of-the-art performance on the CIFAR-10 and STL-10 datasets in the unsupervised setting.
The paper operates under the hypothesis that the rigidity of the convolution operator is responsible in part for the poor performance of GANs on diverse visual datasets. The authors propose to replace convolutions in the generator with an Adaptive Convolution Block, which learns to generate the convolution weights and biases of the upsampling operation adaptively for each pixel location. State-of-the-art Inception scores are presented for the CIFAR-10 and STL-10 datasets. I think the idea of leveraging adaptive convolutions in decoder-based models is compelling, especially given its success in video frame interpolation, which makes me wonder why the authors chose to restrict themselves to GANs. Wouldn't the arguments used to justify replacing regular convolutions in the generator with adaptive convolution blocks apply equally well to any other decoder-based generative model, like a VAE, for instance? I find the paper lacking on the evaluation front. The evaluation of GANs is still very much an open research problem, which means that making a compelling case for the effectiveness of a proposed method requires nuance and contextualization. The authors claim a state-of-the-art Inception score but fail to explain what argument this claim supports. This is important, because the Inception score is not a universal measure of GAN performance: it provides a specific view on the ability of a generator to cover human-defined modes in the data distribution, but it does not inform on intra-class mode coverage and is blind to things like the generator collapsing on one or a few template samples per class. I am also surprised that the relationship with HyperNetworks [1] is not outlined, given that both papers leverage the idea of factoring network parameters through a second neural network. Some additional comments: - Figure 1 should be placed much earlier in the paper, preferably above Section 3. In its current state, the paper provides a lot of mathematical notation to digest without any visual support. - "[...] a transposed convolution is equivalent to a convolution [...]": This is inaccurate. A convolution's backward pass is a transposed convolution and vice versa, but they are not equivalent (especially when non-unit strides are involved). - "The difficulties of training GANs is well known": There is a grammatical error in this sentence. - "If [the discriminator] is too strong, log(1 - D(G(z))) will be close to zero and there would be almost no gradient [...]": This is only true for the minimax GAN objective, which is almost never used in practice. The non-saturating GAN objective does not exhibit this issue, as [2] re-iterated recently. - "Several works have been done [...]": There is a grammatical error here. - The WGAN-GP citation is wrong (Danihelka et al. rather than Gulrajani et al.). Overall, the paper's lack of sufficient convincing empirical support prevents me from recommending its acceptance. References: [1] Ha, D., Dai, A., and Le, Q. V. (2016). HyperNetworks. arXiv:1609.09106. [2] Fedus, W., Rosca, M., Lakshminarayanan, B., Dai, A. M., Mohamed, S., and Goodfellow, I. (2017). Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step. arXiv:1710.08446.
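For readers unfamiliar with the operation under discussion, adaptive convolution is roughly the following (a single-channel numpy sketch with the per-pixel filters taken as given; in the paper they are produced by a weight-generating branch from the local feature context):

```python
import numpy as np

def adaptive_conv2d(x, filters):
    """x: (H, W) input; filters: (H, W, k, k), a separate kxk kernel for every output pixel."""
    H, W = x.shape
    k = filters.shape[-1]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.empty_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * filters[i, j])   # location-specific kernel, unlike a normal conv
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
per_pixel_filters = rng.standard_normal((8, 8, 3, 3))   # in an AdaConv block these would be generated, not random
print(adaptive_conv2d(x, per_pixel_filters).shape)
```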
iclr_2018_B1nLkl-0Z
State-action value functions (i.e., Q-values) are ubiquitous in reinforcement learning (RL), giving rise to popular algorithms such as SARSA and Q-learning. We propose a new notion of action value defined by a Gaussian smoothed version of the expected Q-value. We show that such smoothed Q-values still satisfy a Bellman equation, making them learnable from experience sampled from an environment. Moreover, the gradients of expected reward with respect to the mean and covariance of a parameterized Gaussian policy can be recovered from the gradient and Hessian of the smoothed Q-value function. Based on these relationships we develop new algorithms for training a Gaussian policy directly from a learned smoothed Q-value approximator. Our approach is amenable to proximal optimization techniques by augmenting the objective with a penalty on KLdivergence from a previous policy. We find that the ability to learn both a mean and covariance during training allows this approach to achieve much better results on standard continuous control benchmarks.
This paper explores the idea of using policy gradients to learn a stochastic policy on complex control problems. The central idea is to frame learning in terms of a new kind of Q-value that attempts to smooth out Q-values by framing them in terms of expectations over Gaussian policies. To be honest, I didn't really "get" this paper. * As far as I understand, all of the original work on policy gradients involved stochastic policies. Many are/were Gaussian. * All Q-value estimators are designed to marginalize out the randomness in these stochastic policies. * As far as I can tell, this is equivalent to a slightly different formulation, where the agent emits a deterministic action (\mu,\Sigma) and the environment samples an action from that distribution. In other words, it seems that if we just draw the box a bit differently, the environment soaks up the nondeterminism, instead of needing to define a new type of Q-value. Ultimately, I couldn't discern /why/ this was a significant advance for RL, or even a meaningful new perspective on classic ideas. I thought the little 2-mode MOG was a nice example of the premise of the model. While I may or may not have understood the core technical contribution, I think the experiments can be critiqued: they didn't really seem to work out. Figures 2 & 3 are unconvincing - the differences do not appear to be statistically significant. Also, I was disappointed to see that the authors only compared to DDPG; they could have at least compared to TRPO, which they mention. They dismiss it by saying that it takes 10 times as long, but gets a better answer - to which I respond, "Very well, run your algorithm 10x longer and see where you end up!" I think we need to see a more compelling demonstration of why this is a useful idea before it's ready to be published. The idea of penalizing a policy based on KL-divergence from a reference policy was explored at length by Bert Kappen's work on KL-MDPs. Perhaps you should cite that?
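For reference, the quantity the paper builds on is just the expectation of Q under the Gaussian policy, which is also exactly the Q-value of the reformulation sketched in the third bullet; a toy Monte Carlo estimate:

```python
import numpy as np

def smoothed_q(q_fn, s, mu, sigma, n_samples=10000, seed=0):
    """Monte Carlo estimate of Q_tilde(s, mu, sigma) = E_{a ~ N(mu, sigma^2)}[Q(s, a)]."""
    a = np.random.default_rng(seed).normal(mu, sigma, size=n_samples)
    return q_fn(s, a).mean()

q_toy = lambda s, a: -(a - s) ** 2          # toy Q with its peak at a = s (not the paper's learned Q)
print(smoothed_q(q_toy, s=1.0, mu=1.0, sigma=0.5))   # approx -sigma^2 = -0.25: smoothing flattens the peak
```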
iclr_2018_S1cZsf-RW
WHAI: WEIBULL HYBRID AUTOENCODING INFERENCE FOR DEEP TOPIC MODELING To train an inference network jointly with a deep generative topic model, making it both scalable to big corpora and fast in out-of-sample prediction, we develop Weibull hybrid autoencoding inference (WHAI) for deep latent Dirichlet allocation, which infers posterior samples via a hybrid of stochastic-gradient MCMC and autoencoding variational Bayes. The generative network of WHAI has a hierarchy of gamma distributions, while the inference network of WHAI is a Weibull upward-downward variational autoencoder, which integrates a deterministic-upward deep neural network, and a stochastic-downward deep generative model based on a hierarchy of Weibull distributions. The Weibull distribution can be used to well approximate a gamma distribution with an analytic Kullback-Leibler divergence, and has a simple reparameterization via the uniform noise, which help efficiently compute the gradients of the evidence lower bound with respect to the parameters of the inference network. The effectiveness and efficiency of WHAI are illustrated with experiments on big corpora.
The authors propose a hybrid Bayesian inference approach for deep topic models that integrates stochastic gradient MCMC for global parameters and Weibull-based multilayer variational autoencoders (VAEs) for local parameters. The decoding arm of the VAE consists of deep latent Dirichlet allocation, while the encoder has an upward-downward structure. Gamma distributions are approximated as Weibull distributions since the Kullback-Leibler divergence is known and samples can be efficiently drawn from a transformation of samples from a uniform distribution. The results in Table 1 are concerning for several reasons: i) the proposed approach underperforms DLDA-Gibbs and DLDA-TLASGR. ii) The authors point to the scalability of the mini-batch-based algorithms; however, although more expensive, DLDA-Gibbs is not prohibitive, given that results for Wikipedia are provided. iii) The proposed approach is certainly faster at test time; however, it is not clear to me in which settings such speed (compared to Gibbs) would be needed, given the unsupervised nature of the task at hand. iv) It is not clear to me why there is no test-time difference between WAI and WHAI, considering that in the latter, global parameters are sampled via stochastic-gradient MCMC. One possible explanation is that at test time the approach does not use samples of W but rather a summary of them, say posterior means, in which case it defeats the purpose of sampling the global parameters, which may explain why WAI and WHAI perform about the same on the 3 datasets considered. - \Phi is in a subset of R_+, in fact, columns of \Phi are in the P_0-dimensional simplex. - \Phi should have K_1 columns, not K. - The first paragraph on page 5 is very confusing because h is introduced before explicitly connecting it to k and \lambda. Also, if k = \lambda, why introduce different notations?
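For readers less familiar with the trick being discussed, the Weibull reparameterization that replaces the gamma draw is the inverse-CDF transform of a uniform sample; a minimal sketch:

```python
import numpy as np

def sample_weibull(k, lam, size, rng):
    """Reparameterized Weibull(k, lam) draw: x = lam * (-log(1 - u))**(1/k), u ~ Uniform(0, 1).
    The sample is a differentiable function of (k, lam), which is what makes the
    autoencoding-variational-Bayes part of the hybrid scheme trainable by backprop."""
    u = rng.random(size)
    return lam * (-np.log1p(-u)) ** (1.0 / k)

rng = np.random.default_rng(0)
x = sample_weibull(k=2.0, lam=1.5, size=100000, rng=rng)
print(x.mean())   # should be close to lam * Gamma(1 + 1/k) = 1.5 * Gamma(1.5) ~= 1.33
```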
iclr_2018_Sy5OAyZC-
To construct representations for natural language sequences, information from two main sources needs to be captured: (i) semantic meaning of individual words, and (ii) their compositionality. These two types of information are usually represented in the form of word embeddings and compositional functions, respectively. For the latter, Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have been considered. There has not been a rigorous evaluation regarding the relative importance of each component to different text-representation-based tasks; i.e., how important is the modeling capacity of word embeddings alone, relative to the added value of a compositional function? In this paper, we conduct an extensive comparative study between Simple Word Embeddings-based Models (SWEMs), with no compositional parameters, relative to employing word embeddings within RNN/CNN-based models. Surprisingly, SWEMs exhibit comparable or even superior performance in the majority of cases considered. Moreover, in a new SWEM setup, we propose to employ a max-pooling operation over the learned word-embedding matrix of a given sentence. This approach is demonstrated to extract complementary features relative to the averaging operation standard to SWEMs, while endowing our model with better interpretability. To further validate our observations, we examine the information utilized by different models to make predictions, revealing interesting properties of word embeddings.
This paper empirically investigates the differences realized by using compositional functions over word embeddings as compared to operating directly on the word embeddings. That is, the authors seek to explore the advantages afforded by RNN/CNN based models that induce intermediate semantic representations of texts, as opposed to simpler (parameter-free) approaches to composing these, like addition. In sum, I think this exploration is interesting, and suggests that we should perhaps experiment more regularly with simple aggregation methods like SWEM. On the other hand, the differences across the models are relatively modest, and the data resists clear conclusions, so I'm not sure that the work will be very impactful. In my view, then, this work does constitute a contribution, albeit a modest one. I do think the general notion of attempting to simplify models until performance begins to degrade is a fruitful path to explore, as models continue to increase in complexity despite compelling evidence that this is not always needed. Strengths --- + This paper does highlight a gap in existing work, as far as I am aware: namely, I am not sure that there are generally known trade-offs associated with different compositional models over token embeddings for NLP. However, it is not clear that we should expect there to be a consistent result to this question across all NLP tasks. + The results are marginally surprising, insofar as I would have expected the CNN/RNN (particularly the former) to dominate the simpler aggregation approaches, and this does not seem borne out by the data. Although this trend is seemingly reversed on the short text data, muddying the story. Weaknesses --- - There are a number of important limitations here, many of which the authors themselves note, which mitigate the implications of the reported results. First, this is a small set of tasks, and results may not hold more generally. It would have been nice to see some work on Seq2Seq tasks, or sequence tagging tasks at least. - I was surprised to see no mention of the "Fixed-Size Ordinally-Forgetting Encoding Method" (FOFE) proposed by Zhang et al. in 2015, which would seem to be a natural point of comparison here, given that it sits in a sweet spot of being simple and efficient while still expressive enough to preserve word-order information. This actually seems like a pretty glaring omission given that it meets many of the desiderata the authors put forward. - The interpretability angle discussed seems underdeveloped. I'm not sure that being able to identify individual words (as the authors have listed) meaningfully constitutes "interpretability" -- standard CNNs, e.g., lend themselves to this as well by tracing back through the filter activations. - Some of the questions addressed seem tangential to the main question of the paper -- e.g., word vector dimensionality seems an orthogonal issue to the composition function, and would influence performance for the more complex architectures as well. Smaller comments --- - On page 1, the authors write "By representing each word as a fixed-length vector, these embeddings can group semantically similar words, while explicitly encoding rich linguistic regularities and patterns", but actually I would say that these *implicitly* encode such regularities, rather than explicitly. - "architecture in Kim 2014; Collobert et al. 2011; Gan et al. 2017" -- citation formatting a bit weird here.
*** Update based on author response *** I have read the authors' response and thank them for the additional details. Regarding the limited set of problems: of course any given work can only explore so many tasks, but for this to have general implications in NLP I would maintain that a standard (structured) sequence tagging task/dataset should have been considered. This is not about the number of datasets, but rather the diversity of the output spaces therein. I appreciated the additional details regarding FOFE, which, as the authors themselves note in their response, is essentially a generalization of SWEM. Overall, the response has not changed my opinion on this paper: I think this (exploring simple representations and baselines) is an important direction in NLP, but feel that the paper would greatly benefit from additional work.
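For context, the entire "compositional function" being compared against CNNs/RNNs here fits in a couple of lines; a sketch with a made-up embedding table:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.standard_normal((10000, 300))     # vocabulary x embedding-dim lookup table (illustrative sizes)

def swem_features(token_ids):
    """Parameter-free composition: average and per-dimension max over the word embeddings."""
    E = emb[token_ids]                      # (sentence_length, 300)
    return E.mean(axis=0), E.max(axis=0)    # the SWEM-aver and SWEM-max variants of the paper

avg_feat, max_feat = swem_features(np.array([12, 845, 3, 3077, 9]))
# The positions attaining each max point back to individual words, which is the basis of the
# interpretability claim questioned in the review above.
print(avg_feat.shape, max_feat.shape)
```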
iclr_2018_rkYgAJWCZ
Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily. By contrast, humans have an incredible ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us. Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data. This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.
Paper Summary From just seeing a word used in a sentence, humans can infer a lot about this word by leveraging the surrounding words. Based on this idea, this work tries to obtain a better understanding of words in the one-shot or few-shot setting by leveraging surrounding words. They do this by language modeling sentences which contain rarely seen or never seen words. They evaluated their model using the percent change in perplexity on test sentences containing the new word, while varying the number of training sentences containing this word. 3 Proposed Methods to model few-shot words: (1) beginning with a random embedding, (2) beginning with a zero embedding, (3) beginning with the centroid of other words in the sentence. They compare to 2 Baseline Methods: (1) centroid of other words in the sentence, and (2) full training including the sparse words. Their results show that learning from centroids of other words can outperform full training on the new words. Explanation The paper is well written, and the experiments are well explained. It is an interesting paper, and a research topic which is not well studied. The experiments are reasonable. The method seems to work well. However, the method provides a very marginal difference from the previous method of Lazaridou et al. (2017). They just use backprop to learn from this starting position. The main contribution of this work is the evaluation section. Why only use the PTB language modeling task? Why not use the task in Gauthier & Mordatch or Hermann et al.? The one task of language modeling shows promising results, but it's not totally convincing. One of the biggest caveats is that the experiments are only done on a few words. I'm not sure why more couldn't have been done. This is discussed in section 4.1, but I think some of these differences could have been alleviated if there were more experiments done. Regardless, the experiments on the 8 words that they did choose were well done. I don't think that section 3.3 (embedding similarity) is particularly useful.
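The centroid starting point discussed above (the Lazaridou-style initialization that the paper then fine-tunes with backprop) is simply the mean of the surrounding words' embeddings; a minimal sketch with made-up sizes:

```python
import numpy as np

def centroid_init(context_ids, embedding_matrix):
    """Initialize a new word's vector as the mean of the embeddings of the other words
    in the sentence it appeared in - the starting point the paper then fine-tunes."""
    return embedding_matrix[context_ids].mean(axis=0)

rng = np.random.default_rng(0)
emb = rng.standard_normal((5000, 200))          # pretrained embeddings (illustrative sizes)
new_vec = centroid_init(np.array([17, 902, 4, 88]), emb)
emb = np.vstack([emb, new_vec])                 # append as the row for the new word, then
                                                # continue language-model training with backprop
print(emb.shape)
```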
iclr_2018_Bk-ofQZRb
Temporal Difference Learning with function approximation is known to be unstable. Previous work like Sutton et al. (2009b) and Sutton et al. (2009a) has presented alternative objectives that are stable to minimize. However, in practice, TD-learning with neural networks requires various tricks like using a target network that updates slowly (Mnih et al., 2015). In this work we propose a constraint on the TD update that minimizes change to the target values. This constraint can be applied to the gradients of any TD objective, and can be easily applied to nonlinear function approximation. We validate this update by applying our technique to deep Q-learning, and training without a target network. We also show that adding this constraint on Baird's counterexample keeps Q-learning from diverging.
This paper proposes adding a constraint to the temporal difference update to minimize the effect of the update on the next state's value. The constraint is added by projecting the original gradient onto the orthogonal of the maximal direction of change of the next state's value. It is shown empirically that the constrained update does not diverge on Baird's counterexample and improves performance over DQN in a grid world domain and cart pole. This paper is reasonably readable. The derivation for the constraint is easy to understand and seems to be an interesting line of inquiry that might show potential. The key issue is that the justification for the constrained gradients is lacking. What is the effect, in terms of convergence, of modifying the gradient in this way? It seems highly problematic to simply remove a whole part of the gradient to reduce the effect on the next state. For example, if we are minimizing the changes our update will make to the value of the next state, what would happen if the next state is equivalent to the current state (or equivalent in our feature space)? In general, when we project our update to be orthogonal to the maximal change of the next state's value, how do we know it is a valid direction in which to update? I would have liked some analysis of the convergence results for TD learning with this constraint, or some better intuition on how this affects learning. At the very least, a mention of how the convergence proof would follow other common proofs in RL. This is particularly important, since GTD provides convergent TD updates under nonlinear function approximation; the role for a heuristic constrained TD algorithm given convergent alternatives is not clear. For the experiments, other baselines should be included, particularly just regular Q-learning. The primary motivation comes from the use of a separate target network in DQN, which seems to be needed in Atari (though I am not aware of any clear result that demonstrates why, rather just from informal discussions). Since you are not running experiments on Atari here, it is invalid to simply assume that such a second network is needed. A baseline of regular Q-learning should be included for these simpler domains. The results on Baird's counterexample are discouraging for the new constraints. Because we already have algorithms which better solve this domain, why is your method advantageous? The point of showing that your algorithm does not solve Baird's counterexample is unclear. There are also quite a few correctness errors in the paper, and the polish of the plots and language needs work, as outlined below. There are several mistakes in the notation and background section. 1. "If we consider TD-learning using function approximation, the loss that is minimized is the squared TD error." This is not true; rather, TD minimizes the mean-squared projected Bellman error. Further, L_TD is strangely defined: why a squared norm, for a scalar value? 2. The definition of v and delta_TD w.r.t. v seems unnecessary, since you only use Q. As an additional (somewhat unimportant) point, the TD-error is usually defined as the negative of what you have. 3. In the function approximation case the value function and q functions parameterized by \theta are only approximations of the expected return. 4. Defining the loss w.r.t. the state, and taking the derivative of the state w.r.t. theta, is a bit odd. Likely what you meant is the q function, at state s_t? Also, are you ignoring the gradient of the value at the next step?
If so, this further means that this is not a true gradient.

There is a lot of white space around the plots, which could be used for larger, clearer figures. The lack of labels on the plots makes them hard to understand at a glance, and the overlapping lines make finding a particular algorithm's performance much more difficult. I would recommend combining the plots into one figure with a drawing program so you have more control over the size and position of the plots.

Examples of odd language choices:
- "The idea also does not immediately scale to nonlinear function approximation. Bhatnagar et al. (2009) propose a solution by projecting the error on the tangent plane to the function at the point at which it is evaluated." - The paper you cite exactly solves the nonlinear function approximation case. What do you mean by "does not scale to nonlinear function approximation"? Also, Maei is the first author of this paper.
- "Though they do not point out this insight as we have" - This seems to be a bit overreaching.
- "the gradient at s_{t+1} that will change the value the most" - This is too colloquial. I think you simply mean the gradient of the value function, for the given s_t, but it's not clear.
iclr_2018_BJ6anzb0Z
We propose a novel approach to multimodal sentiment analysis using deep neural networks combining visual recognition and natural language processing. Our goal is different than the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment; instead, we aim to infer the latent emotional state of the user. Thus, we focus on predicting the emotion word tags attached by users to their Tumblr posts, treating these as "self-reported emotions." We demonstrate that our multimodal model combining both text and image features outperforms separate models based solely on either images or text. Our model's results are interpretable, automatically yielding sensible word lists associated with emotions. We explore the structure of emotions implied by our model and compare it to what has been posited in the psychology literature, and validate our model on a set of images that have been used in psychology studies. Finally, our work also provides a useful tool for the growing academic study of images, both photographs and memes, on social networks.
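As a rough, hypothetical sketch of the kind of late-fusion model described above (the dimensions, the single linear layer, and the 15-way output are our assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class LateFusionEmotionClassifier(nn.Module):
    """Concatenate precomputed text features (e.g. pooled GloVe/RNN output) with
    precomputed image features (e.g. an Inception pooling layer) and classify."""
    def __init__(self, text_dim=300, image_dim=2048, num_emotions=15):
        super().__init__()
        self.classifier = nn.Linear(text_dim + image_dim, num_emotions)

    def forward(self, text_feat, image_feat):
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.classifier(fused)  # logits over emotion-word tags
```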
This paper presents a method for classifying Tumblr posts with associated images according to associated single emotion word hashtags. The method relies on sentiment pre-processing from GloVe and image pre-processing from Inception. My strongest criticism of this paper is against the claim that Tumblr posts represent self-reported emotions and that this method sheds new insight on emotion representation; my secondary criticism is a lack of novelty in the method, which seems to be simply a combination of a previously published sentiment analysis module and a previously published image analysis module, fused in an output layer.

The authors claim that the hashtags represent self-reported emotions, but this is not true in the way that psychologists query participants regarding emotion words in psychology studies. Instead, these are emotion words that a person chooses to broadcast along with an associated announcement. As the authors point out, hashtags and words may be used sarcastically or in different ways from what is understood in emotion theory. It is quite common for everyday people to use emotion words this way, e.g. using #love to express strong approval rather than an actual feeling of love.

In their analysis the authors claim: "The 15 emotions retained were those with high relative frequencies on Tumblr among the PANAS-X scale (Watson & Clark, 1999)". However, five of the words the authors retain (bored, annoyed, love, optimistic, and pensive) are not in fact found in the PANAS-X scale.
Reference: The PANAS-X Scale: https://wiki.aalto.fi/download/attachments/50102838/PANAS-X-scale_spec.pdf
Also the longer version that the authors cited: https://www2.psychology.uiowa.edu/faculty/clark/panas-x.pdf

It should also be noted that the PANAS (Positive and Negative Affect Scale) scale and the PANAS-X (the "X" is for eXtended) scale are questionnaires used to elicit from participants feelings of positive and negative affect; they are not collections of "core" emotion words, but rather words that are colloquially attached to either positive or negative sentiment. For example, PANAS-X includes words like "strong", "active", "healthy", and "sleepy", which are not considered emotion words in psychology. If the authors' stated goal is "different than the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment", they should be aware that this is exactly what PANAS is designed to do: not to infer the latent emotional state of a person, except to the extent that their affect is positive or negative.

The work of representing emotions has been a field of study in psychology for over a hundred years and it is still continuing. https://en.wikipedia.org/wiki/Contrasting_and_categorization_of_emotions. One of the most popular theories of emotion is the theory that there exist "basic" emotions: Anger, Disgust, Fear, Happiness (enjoyment), Sadness and Surprise (Paul Ekman, cited by the authors). These are short-duration states lasting only seconds. They are also fairly specific; for example, "surprise" is a sudden reaction to something unexpected, which is not exactly the same as seeing a flower on your car and expressing "what a nice surprise." The surprise would be the initial reaction of "what's that on my car? Is it dangerous?", but after identifying the object as non-threatening, the emotion of "surprise" would likely pass and be replaced with appreciation.
The Circumplex Model of Emotions (Posner et al., 2005) that the authors refer to actually stands in opposition to the theories of Ekman. From the cited paper by Posner et al.: "The circumplex model of affect proposes that all affective states arise from cognitive interpretations of core neural sensations that are the product of two independent neurophysiological systems. This model stands in contrast to theories of basic emotions, which posit that a discrete and independent neural system subserves every emotion."

From my reading of this paper, it is clear to me that the authors do not have a clear understanding of the current state of psychology's view of emotion representation, and this work would not likely contribute to a new understanding of the latent structure of people's emotions. In the PCA result, it is not "clear" that the first axis represents valence, as "sad" has a slightly positive value on this axis, and "sad" is one of the emotions most clearly associated with negative valence.

With respect to the rest of the paper, the level of novelty and impact is "ok, but not good enough." This analysis does not seem very different from Twitter analysis, because although Tumblr posts are allowed to be longer than Twitter posts, the authors truncate the posts to 50 characters. Additionally, the images do not seem to add very much to the classification. The authors' algorithm also seems to be essentially a combination of two other, previously published algorithms. For me the novelty of this paper was in its application to the realm of emotion theory, but I do not feel there is a contribution here. This paper is more about classifying Tumblr posts according to emotion word hashtags than one that generates new insights into emotion representation or that can infer latent emotional state.
iclr_2018_B1hYRMbCW
ON THE REGULARIZATION OF WASSERSTEIN GANS Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data. Convergence problems during training are overcome by Wasserstein GANs which minimize the distance between the model and the empirical distribution in terms of a different metric, but thereby introduce a Lipschitz constraint into the optimization problem. A simple way to enforce the Lipschitz constraint on the class of functions, which can be modeled by the neural network, is weight clipping. Augmenting the loss by a regularization term that penalizes the deviation of the gradient norm of the critic (as a function of the network's input) from one, was proposed as an alternative that improves training. We present theoretical arguments why using a weaker regularization term enforcing the Lipschitz constraint is preferable. These arguments are supported by experimental results on several data sets.
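For reference, a minimal sketch of the two penalty variants at issue: the two-sided gradient penalty of Gulrajani et al. versus a one-sided, squared-hinge relaxation that only punishes gradient norms above 1. The choice of penalty points, all hyperparameters, and the function names are our own simplifications; this is not the paper's code.

```python
import torch

def gradient_penalty(critic, x_hat, one_sided=True):
    """Penalty on the critic's gradient norm at points x_hat.

    one_sided=True: squared hinge, penalizes only norms exceeding 1 (a relaxation
    of the Lipschitz constraint). one_sided=False: the usual (||grad|| - 1)^2 term.
    """
    x_hat = x_hat.clone().requires_grad_(True)
    scores = critic(x_hat)
    grads, = torch.autograd.grad(scores.sum(), x_hat, create_graph=True)
    norms = grads.view(grads.size(0), -1).norm(2, dim=1)
    if one_sided:
        return torch.clamp(norms - 1.0, min=0.0).pow(2).mean()
    return (norms - 1.0).pow(2).mean()
```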
This paper proposes a novel regularization scheme for Wasserstein GANs based on a relaxation of the constraint that the critic's Lipschitz constant be at most 1. The proposed regularization penalizes the critic function only when its gradient has a norm larger than one, using a kind of squared hinge loss. The reasons for this choice are discussed and linked to theoretical properties of OT. Numerical experiments suggest that the proposed regularization leads to a better-posed optimization problem and even a slight advantage in terms of inception score on the CIFAR-10 dataset. The paper is interesting and well written, the proposed regularization makes sense since it is basically a relaxation of the constraint, and the numerical experiments also suggest it's a good idea. Still, as discussed below, the justification does not address a lot of interesting developments and implications of the method and should better discuss the relation with regularized optimal transport.

Discussion:
+ The paper spends a lot of time justifying the proposed method by discussing the limits of "Improved Training of Wasserstein GANs" from Gulrajani et al. (2017). The two limits (sampling from marginals instead of the optimal coupling, and differentiability of the critic) are interesting and indeed suggest that one can do better, but the examples and observations are well known in OT and do not require proofs in the appendix. The reviewer believes that this space could be better spent discussing the theoretical implications of the proposed regularization (see next).
+ The proposed approach is a relaxation of the constraint on the dual variable for the OT problem. As a matter of fact, we can clearly recognize a squared hinge loss in the proposed loss. This approach (relaxing a strong constraint) has been used for years when learning support vector machines and ranking, and a small discussion or at least a reference to those venerable methods would position the paper in a bigger picture.
+ The paper is rather vague on the reason to go from Eq. (6) to Eq. (7) (from a gradient approximation between samples to gradients at samples). Does it lead to better stability to choose one or the other? How is it implemented in practice? Recent NN toolboxes can easily compute the exact gradient and use it for the penalization, but this is not clearly discussed, even in the appendix. Numerical experiments comparing the two implementations, or at least a discussion, are necessary.
+ The proposed approach has a very strong relation to the recently proposed regularized OT (see [1] for a long list of regularizations) and more precisely to the Euclidean regularization. I understand that the GAN (and Wasserstein GAN) community is relatively young and that reference lists can be short, but there is a large number of papers discussing regularized optimal transport and how the resulting problems are easier to solve. A discussion of the links is necessary and will clearly bring more theoretical grounding to the method. Note that a squared Euclidean regularization leads to a regularization term in the dual of the form max(0, f(x) + f(y) - |x-y|)^2 that is very similar to the proposed regularization. In other words, the authors propose to do regularized OT (possibly with a new regularization term) and should discuss that.
+ The numerical experiments are encouraging but a bit short. The 2D example seems to work very well and the convergence curves are far better with the proposed regularization.
But the real-data CIFAR-10 experiments are much less detailed, with only a final inception score (very similar to the competing method) and no images, even in the appendix. The authors should also define (maybe in the appendix) the conditional and unconditional inception scores and why they are important (and why only some of them are computed in Table 1).
+ This is more of a suggestion: the comparison of the dual critic to the true Wasserstein distance is very interesting. It would be nice to see the behavior for different values of lambda.

[1] Dessein, A., Papadakis, N., & Rouas, J. L. (2016). Regularized Optimal Transport and the Rot Mover's Distance. arXiv preprint arXiv:1610.06447.

Review update after reply: The authors have responded to most of my concerns, and I think the paper is much stronger now and discusses the relation with regularized OT. I am changing my rating to Accept.
iclr_2018_BJOFETxR-
Published as a conference paper at ICLR 2018 LEARNING TO REPRESENT PROGRAMS WITH GRAPHS Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known semantics. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures. In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VARNAMING, in which a network attempts to predict the name of a variable given its usage, and VARMISUSE, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VARMISUSE task in many cases. Additionally, our testing showed that VARMISUSE identifies a number of bugs in mature open-source projects.
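For readers unfamiliar with GGNNs, here is a minimal sketch of one propagation step in a common formulation (dense adjacency matrices, one linear message function per edge type, GRU state update); the sizes and the dense-matrix form are our simplifications, not necessarily the exact variant used in the paper.

```python
import torch
import torch.nn as nn

class GGNNStep(nn.Module):
    """One gated graph neural network propagation step over typed edges."""
    def __init__(self, hidden_size, num_edge_types):
        super().__init__()
        self.message = nn.ModuleList(
            nn.Linear(hidden_size, hidden_size) for _ in range(num_edge_types))
        self.update = nn.GRUCell(hidden_size, hidden_size)

    def forward(self, h, adjacencies):
        # h: (num_nodes, hidden_size); adjacencies: one (num_nodes, num_nodes)
        # matrix per edge type, with A[i, j] = 1 if there is an edge j -> i of that type.
        msgs = sum(A @ f(h) for A, f in zip(adjacencies, self.message))
        return self.update(msgs, h)
```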
Summary: The paper applies graph convolutions with deep neural networks to the problem of "variable misuse" (putting the wrong variable name in a program statement) in graphs created deterministically from source code. Graph structure is determined by the program's abstract syntax tree (AST) and next-token edges, as well as variable/function name identity, assignment, and other deterministic semantic relations. Initial node embeddings come from both type and tokenized name information. Gated Graph Neural Networks (GGNNs, trained with a maximum likelihood objective) are then run for 8 iterations at test time.

The evaluation is extensive and mostly very good: a substantial data set of 29M lines of code, reasonable baselines, and nice ablation studies. I would have liked to see separate precision and recall rather than accuracy. The current 82.1% accuracy is nice to see, but if 18% of my program variables were erroneously flagged as errors, the tool would be useless. I'd like to know if you can tune the threshold to get a precision/recall tradeoff that has very few false warnings, but still catches some errors. Nice work creating an implementation of fast GGNNs with large, diverse graphs. Glad to see that the code will be released. Great to see that the method is fast; it seems fast enough to use in practice in a real IDE.

The model (GGNN) is not particularly novel, but I'm not much bothered by that. I'm very happy to see good application papers at ICLR. I agree with your pair of sentences in the conclusion: "Although source code is well understood and studied within other disciplines such as programming language research, it is a relatively new domain for deep learning. It presents novel opportunities compared to textual or perceptual data, as its (local) semantics are well-defined and rich additional information can be extracted using well-known, efficient program analyses." I'd like to see work in this area encouraged. So I recommend acceptance. If it had better (e.g. ROC curve) evaluation and some modeling novelty, I would rate it higher still.

Small notes:
- The paper uses the term "data flow structure" without defining it.
- Your data set consisted of C# code. Perhaps future work will see if the results are much different in other languages.
iclr_2018_rJssAZ-0-
Deep reinforcement learning algorithms have proven successful in a variety of domains. However, tasks with sparse rewards remain challenging when the state space is large. Goal-oriented tasks are among the most typical problems in this domain, where a reward can only be received when the final goal is accomplished. In this work, we propose a potential solution to such problems with the introduction of an experience-based tendency reward mechanism, which provides the agent with additional hints based on discriminative learning on past experiences during an automated reverse curriculum. This mechanism not only provides dense additional learning signals on what states lead to success, but also allows the agent to retain only this tendency reward instead of the whole history of experience during multi-phase curriculum learning. We extensively study the advantages of our method on standard sparse-reward domains like Maze and Super Mario Bros and show that our method performs more efficiently and robustly than prior approaches in tasks with long time horizons and large state spaces. In addition, we demonstrate that using an optional keyframe scheme with a very small number of key states, our approach can solve difficult robot manipulation challenges directly from perception and sparse rewards.
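To illustrate the flavor of an "experience-based tendency reward", here is a hypothetical sketch under our own reading of the abstract: a discriminator is fit on states from successful versus failed rollouts, and its predicted success probability is added to the sparse reward as a dense bonus. The classifier choice, the additive bonus form, and the coefficient are assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_tendency_model(success_states, failure_states):
    """Fit a discriminator on states from successful vs. failed rollouts."""
    X = np.vstack([success_states, failure_states])
    y = np.concatenate([np.ones(len(success_states)), np.zeros(len(failure_states))])
    return LogisticRegression(max_iter=1000).fit(X, y)

def shaped_reward(sparse_reward, state, tendency_model, beta=0.1):
    """Add a dense bonus proportional to the predicted probability of success."""
    bonus = tendency_model.predict_proba(state.reshape(1, -1))[0, 1]
    return sparse_reward + beta * bonus
```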
This paper proposes a new method for reverse curriculum generation by gradually resetting the environment in phases and classifying states that tend to lead to success. It additionally proposes a mechanism for learning from human-provided "key states". The ideas in this paper are quite nice, but the paper has significant issues with regard to clarity and applicability to real-world problems:

First, it is unclear whether the proposed method requires access only to high-dimensional observations (e.g. images) during training or if it additionally requires low-dimensional states (e.g. sufficient information to reset the environment). In most compelling problem settings where a low-dimensional representation that sufficiently explains the current state of the world is available during training, it is also likely that one can write down a nicely shaped reward function using that state information during training, in which case it makes sense to use such a reward function. This paper seems to require access to low-dimensional states, and specifically considers the sparse-reward setting, which seems contrived.

Second, the paper states that the assumption "when resetting, the agent can be reset to any state" can be satisfied in problems such as real-world robotic manipulation. This is not correct. If the robot could autonomously reset to any state, then we would have largely solved robotic manipulation. Further, it is not always realistic to assume access to low-dimensional state information during training on a real robotic system (e.g. knowing the poses of all of the objects in the world).

Third, the experiments section lacks crucial information needed to understand the experiments. What is the state, observation, and action space for each problem setting? What is the reward function for each problem setting? What reinforcement learning algorithm is used in combination with the curriculum and tendency rewards? Are the states and actions continuous or discrete? Without this information, it is difficult to judge the merit of the experimental setting.

Fourth, the proposed method seems to lack motivation, making the proposed scheme seem a bit ad hoc. Could each of the components be motivated further through more discussion and/or ablative studies?

Finally, the main text of the paper is substantially longer than the recommended page limit. It should be shortened by making the writing more concise.

Beyond my feedback on clarity and significance, here are further pieces of feedback with regard to the technical content, experiments, and related work. I'm wondering: can the reward shaping in Equation 2 be made to satisfy the property of not affecting the final policy? (see Ng et al. '99) If so, such a reward shaping would make the method even more appealing. How do the experiments in section 5.4 compare to prior methods and ablations? Without such a comparison, it is impossible to judge the performance of the proposed method and the level of difficulty of these tasks. At the very least, the paper should compare the performance of the proposed method to the performance of a random policy.

The paper is missing some highly relevant references. First, how does the proposed method compare to hindsight experience replay? [1] Second, learning from keyframes (rather than demonstrations) has been explored in the past [2]. It would be preferable to use the standard terminology of "keyframe".
[1] Andrychowicz et al. Hindsight Experience Replay. 2017
[2] Akgun et al. Keyframe-based Learning from Demonstration. 2012
In summary, I think this paper has a number of promising ideas and experimental results, but given the significant issues in clarity and significance to real-world problems, I don't think that the current version of this paper is suitable for publication at ICLR.

More minor feedback on clarity and correctness:
- Abstract: "Deep RL algorithms have proven successful in a vast variety of domains" -- This is an overstatement.
- The introduction should be more clear with regard to the assumptions. In particular, it would be helpful to see discussion of requiring human-provided keyframes. As is, it is unclear what is meant by "checkpoint scheme", which is not commonly used terminology.
- "This kind of spare reward, goal-oriented tasks are considered the most difficult challenges" -- This is also an overstatement. Long-horizon tasks and high-dimensional observations are also very difficult. Also, the sentence is not grammatically correct.
- "That is, environment" -> "That is, the environment"
- In the last paragraph of the intro, it would be helpful to more clearly state what the experiments can accomplish. Can they handle raw pixel inputs?
- "diverse domains" -> "diverse simulated domains"
- "a robotic grasping task" -> "a simulated robotic grasping task"
- There are a number of issues and errors in citations, e.g. missing the year, including the first name, incorrect reference
- Assumption 1: \mathcal{P} has not yet been defined.
- The last two paragraphs of section 3.2 are very difficult to understand without having read the method yet
- "conventional RL solver tend" -> "conventional RL tend"; this sentence should also mention sparse rewards.
- Algorithm 1 and Figure 1 are not referenced in the text anywhere, and should be
- The text in Figure 1 and Figure 3 is extremely small
- The text in Figure 3 is extremely small
iclr_2018_rJlMAAeC-
Published as a conference paper at ICLR 2018 IMPROVING THE UNIVERSALITY AND LEARNABILITY OF NEURAL PROGRAMMER-INTERPRETERS WITH COMBINATOR ABSTRACTION To overcome the limitations of Neural Programmer-Interpreters (NPI) in its universality and learnability, we propose the incorporation of combinator abstraction into neural programming and a new NPI architecture to support this abstraction, which we call Combinatory Neural Programmer-Interpreter (CNPI). Combinator abstraction dramatically reduces the number and complexity of programs that need to be interpreted by the core controller of CNPI, while still allowing the CNPI to represent and interpret arbitrarily complex programs by the collaboration of the core with the other components. We propose a small set of four combinators to capture the most pervasive programming patterns. Due to the finiteness and simplicity of this combinator set and the offloading of some burden of interpretation from the core, we are able to construct a CNPI that is universal with respect to the set of all combinatorizable programs, which is adequate for solving most algorithmic tasks. Moreover, besides supervised training on execution traces, CNPI can be trained by policy gradient reinforcement learning with appropriately designed curricula.
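As a loose illustration (our own, not the paper's notation or combinator set) of what combinator abstraction buys: a single generic control-flow pattern, parameterized by callables, can stand in for many concrete programs, so an interpreter core only ever has to execute a small fixed set of such patterns.

```python
def linrec(cond, step, advance):
    """A generic linear-recursion combinator: apply step/advance until cond holds."""
    def program(state):
        while not cond(state):
            state = advance(step(state))
        return state
    return program

# Different algorithmic tasks (e.g. a carry-propagating ADD or one bubble-sort pass)
# can then be obtained by plugging task-specific callables into the same combinator,
# rather than teaching the core a new program for each task.
```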
Quality
The paper is very interesting and clearly motivated. The idea of importing concepts from functional programming into neural programming looks very promising, helping to address a bit the somewhat naive approach taken so far in the deep learning community towards program induction. However, I found the model description difficult to fully understand and have significant unresolved questions - especially *why* exactly the model should be expected to have better universality compared to NPI and RNPI, given that the applier memory is unbounded just like the NPI/RNPI program memories are unbounded.

Clarity
The paper does a good job of summarizing NPI and motivating the universality property of the core module. I had a lot of questions while reading:
What is the purpose of detectors? It is not clear what is being detected. From the context it seems to be encoding observations from the environment, which can vary according to the task and change during program execution. The detector memory is also confusing. In the original NPI, it is assumed that the caller knows which encoder is needed for each program. In CNPI, is this part learned or more general in some way?
Appliers - is it the case that *every* program apart from the four combinators must be written as an applier? For example, ADD1, BSTEP, BUBBLESORT, etc. all must be implemented as appliers, and programs that cannot be implemented as appliers are not expressible by CNPI?
Memory - combinator memory looks like a 4-way softmax over the four combinators, right? The previous NPI program memory is then analogous to the applier memory.
Eqn 3 - binarizing the detector output introduces a non-differentiable operation. How is the detector then trained, e.g. from execution traces? Later I see that there is a notion of a "correct condition" for the detector to regress on, which makes me confused again about what exactly the output of a detector means.
Computing the next subprogram - since the size of the applier memory is unbounded, the core still needs to be aware of an unlimited number of subprograms. I must be missing something here - how does the proposed model therefore achieve better universality than the original NPI and RNPI models?
Analysis - for the claim of perfect generalization, I think this will not generally hold true for perceptual inputs. Will the proposed model only be useful in discrete domains for algorithmic tasks, or could it be more broadly applicable, e.g. to robotics tasks?

Originality
The methods proposed in this paper are quite novel and start to bridge an important gap between neural program induction and functional programming, by importing the concept of combinator abstraction into NPI.

Significance
The paper will be significant to people interested in NPI-related models and neural program induction generally, but on the other hand, there is currently not yet a "killer application" for this line of work. The experiments appear to show significant new capabilities of CNPI compared to NPI and RNPI in terms of better generalization and universality, as well as being trainable by reinforcement learning.

Pros
- Learns new programs without catastrophic forgetting in the NPI core, in particular where previous NPI models fail.
- Detector training is decoupled from core and memory training, so that perfect generalization does not have to be re-verified after learning new behaviors.

Cons
- So far lacking useful applications in the real world. Could the techniques in this paper help in robotics extensions to NPI? (see e.g. https://arxiv.org/abs/1710.01813)
- Adds a significant amount of further structure into the NPI framework, which could potentially make broader applications more complex to implement. Do the proposed modifications reduce generality in any way?
iclr_2018_SylJ1D1C-
Partial differential equations (PDEs) play a prominent role in many disciplines such as applied mathematics, physics, chemistry, material science, computer science, etc. PDEs are commonly derived based on physical laws or empirical observations. However, the governing equations for many complex systems in modern applications are still not fully known. With the rapid development of sensors, computational power, and data storage in the past decade, huge quantities of data can be easily collected and efficiently stored. Such vast quantities of data offer new opportunities for data-driven discovery of hidden physical laws. Inspired by the latest development of neural network designs in deep learning, we propose a new feed-forward deep network, called PDE-Net, to fulfill two objectives at the same time: to accurately predict dynamics of complex systems and to uncover the underlying hidden PDE models. The basic idea of the proposed PDE-Net is to learn differential operators by learning convolution kernels (filters), and apply neural networks or other machine learning methods to approximate the unknown nonlinear responses. Compared with existing approaches, which either assume the form of the nonlinear response is known or fix certain finite difference approximations of differential operators, our approach has the most flexibility by learning both the differential operators and the nonlinear responses. A special feature of the proposed PDE-Net is that all filters are properly constrained, which enables us to easily identify the governing PDE models while still maintaining the expressive and predictive power of the network. These constraints are carefully designed by fully exploiting the relation between the orders of differential operators and the orders of sum rules of filters (an important concept originating from wavelet theory). We also discuss relations of the PDE-Net with some existing networks in computer vision such as Network-In-Network (NIN) and Residual Neural Network (ResNet). Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and to predict the dynamical behavior for a relatively long time, even in a noisy environment.
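A minimal toy sketch of the underlying idea (ours, not the paper's trainable, moment-constrained filters): convolution stencils act as finite-difference approximations of differential operators, and one network block is a forward-Euler step u_{t+1} = u_t + dt * F(u, u_x, u_y, ...). The fixed stencils below are the standard central differences that a learnable filter would be constrained towards.

```python
import numpy as np

KX = np.array([[0.0, 0.0, 0.0],
               [-0.5, 0.0, 0.5],
               [0.0, 0.0, 0.0]])   # approximates the derivative along the second array axis
KY = KX.T                          # approximates the derivative along the first array axis

def apply_stencil(u, k):
    """3x3 cross-correlation with edge padding (illustrative, not optimized)."""
    p = np.pad(u, 1, mode="edge")
    out = np.zeros_like(u)
    for i in range(u.shape[0]):
        for j in range(u.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def euler_step(u, response, dt=0.01):
    """One forward-Euler step with a (possibly learned) nonlinear response F."""
    return u + dt * response(u, apply_stencil(u, KX), apply_stencil(u, KY))
```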
The authors propose a neural-network-based algorithm for learning from data that arises from dynamical systems whose governing equations can be written as partial differential equations. The network architecture is constrained such that, regardless of the parameters, it always implements a discretization of an arbitrary PDE. Through learning, the network adapts itself to solve a specific PDE. The discretization is finite differences in space and forward Euler in time.

The article is quite novel in my opinion. To the best of my knowledge, it is the first article that implements a generic method for learning arbitrary PDE models from data. In using networks, the method differs from previously proposed approaches for learning PDEs. Experiments are only presented with synthetic data, but given the potential of the method and its novelty, I believe this can be accepted. However, it would have been a stronger article if the authors had applied it to a real-life model with real initial and boundary conditions, and real observations.

I have three main criticisms about the article:
1. The authors do not cite Chen and Pock's article on learning diffusion filters with networks, which was first published in CVPR 2015 and then as a PAMI article this year. To the best of my knowledge, they are the first to show the connection between ResNet-type architectures and numerical solutions of PDEs. I think proper credit should be given. [I need to emphasize that I am not an author of that article.]
2. The authors emphasize the importance of interpretability; however, the constraint on the moment matrices might cripple this aspect. The frozen filters have clear interpretations. They are in the end finite difference approximations with some level of accuracy depending on the size and the number of zeros. When the M(q) matrix is free to change, it is unclear what the effect will be on the filters. Will the numbers that replace the stars in Equation 6, for instance, be absorbed in the O(\epsilon) term? Can one really interpret the final c_{ij} for filters whose M(q) have many non-zeros?
3. The introduction and results sections are well written. The method section, on the other hand, needs improvement. The notation is not easy to follow due to missing definitions. I believe that with proper definitions (amounting to small modifications) the readability of the article can substantially improve.

In addition to the main criticisms, I have some other questions and concerns:
1. How sensitive is the model? In real life, one cannot expect to get observations every delta t. Data is most often very sparse. Can the model learn in that regime? Can it differentiate between different PDEs and find the correct one with sparse data?
2. The averaging operation decreases the interpretability of the proposed model as a PDE. Depending on the filter size, D_{0}u can deviate from u, which should be the term that is used in the residual block. Why do the authors need this? How does the model behave without it?
3. The statement "Thus, the PDE-Net with bigger n owns a longer time stability." is very vague. I understand that with larger n, training would be easier since more data would be used to estimate parameters. However, it is not clear how this relates to "time stability", which is also not defined in the article.
4. How is the relative error computed? Values in the relative error plots go as high as 10^2. That would be a huge error if it is relative.
iclr_2018_SkFEGHx0Z
We present a radial basis function solver for convolutional neural networks that can be directly applied to both distance metric learning and classification problems. Our method treats all training features from a deep neural network as radial basis function centres and computes loss by summing the influence of a feature's nearby centres in the embedding space. Having a radial basis function centred on each training feature is made scalable by treating it as an approximate nearest neighbour search problem. End-to-end learning of the network and solver is carried out, mapping high dimensional features into clusters of the same class. This results in a well formed embedding space, where semantically related instances are likely to be located near one another, regardless of whether or not the network was trained on those classes. The same loss function is used for both the metric learning and classification problems. We show that our radial basis function solver outperforms state-of-the-art embedding approaches on the Stanford Cars196 and CUB-200-2011 datasets. Additionally, we show that when used as a classifier, our method outperforms a conventional softmax classifier on the CUB-200-2011, Stanford Cars196, Oxford 102 Flowers and Leafsnap fine-grained classification datasets.
The authors propose a loss that is based on an RBF loss for metric learning and incorporates additional per-exemplar weights in the index for classification. Significant improvements over softmax are shown on several datasets. IMHO, this could be a worthwhile paper, but the framing of the paper within the existing literature is lacking and thus it appears as if the authors are re-inventing the wheel (NCA loss) under a different name (RBF solver).

The specific problems are:
- The authors completely miss the connection to the NCA loss (https://papers.nips.cc/paper/2566-neighbourhood-components-analysis.pdf) and thus appear to be re-inventing the wheel.
- The proposed metric learning scenario is exactly as proposed in the NCA loss works, while the classification approach adds an interesting twist by learning per-exemplar weights. I haven't encountered this before and it could make an interesting proposal. Of course, the benefit of this should be evaluated in ablation studies (Tab. 3 shows one experiment with marginal improvements).
- The authors' use of 'solver' seems uncommon and confusing. What is proposed is a loss, in addition to building a weighted index in the case of classification.
- In the metric learning comparison with softmax (end of page 9) the authors mention that a Gaussian standard deviation for softmax is learned. It appears as if the authors use the softmax logits as the embedding, whereas the more common approach is to use the bottleneck layer. This is also indicated by the discussion at the end of page 10, where the authors mention that softmax is restricted to axis-aligned embeddings. All softmax metric learning experiments should be carried out on appropriately sized bottleneck layers.
- Some of the motivations of what the various methods learn seem flawed, e.g. triplet loss CAN learn multiple modes per class, and there is nothing in the softmax loss that encourages the classes to fill a large region of the space.
- Why don't the authors compare on ImageNet?

Some positive points:
- The authors mention in Sec 3.3 that updating the RBF centres is not required. This is a crucial point that should be made a centerpiece of this work, as there are many metric learning works that struggle with this. Additional experiments that investigate this point would greatly contribute to a well-rounded paper.
- The numbers reported in Tab. 1 show very significant improvements.

If the paper were re-framed and built on top of the already existing NCA loss, there could be valuable contributions in this paper. The experimental comparisons are lacking in some respects, as the comparison with softmax as a metric learning method seems uncommon, i.e. using the logits instead of the bottleneck layer. I encourage the authors to extend the paper and flesh out some of the experiments and then submit it again.
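For reference, a compact sketch of the (batch) NCA-style objective the review points to, with a Gaussian/RBF kernel on squared distances; the batch formulation and the unit kernel bandwidth are our simplifications.

```python
import torch

def nca_loss(embeddings, labels, eps=1e-8):
    """Maximize the RBF-weighted probability of same-class neighbours within a batch."""
    d2 = torch.cdist(embeddings, embeddings).pow(2)      # pairwise squared distances
    sim = torch.exp(-d2)
    sim = sim - torch.diag(torch.diagonal(sim))          # a point is not its own neighbour
    same_class = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    p_correct = (sim * same_class).sum(dim=1) / (sim.sum(dim=1) + eps)
    return -torch.log(p_correct + eps).mean()
```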
iclr_2018_Hko85plCW
Published as a conference paper at ICLR 2018 MONOTONIC CHUNKWISE ATTENTION Sequence-to-sequence models with soft attention have been successfully applied to a wide variety of problems, but their decoding process incurs a quadratic time and space cost and is inapplicable to real-time sequence transduction. To address these issues, we propose Monotonic Chunkwise Attention (MoChA), which adaptively splits the input sequence into small chunks over which soft attention is computed. We show that models utilizing MoChA can be trained efficiently with standard backpropagation while allowing online and linear-time decoding at test time. When applied to online speech recognition, we obtain state-of-the-art results and match the performance of a model using an offline soft attention mechanism. In document summarization experiments where we do not expect monotonic alignments, we show significantly improved performance compared to a baseline monotonic attention-based model.
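To fix ideas, a small sketch of the test-time behaviour as we understand it: once the hard monotonic mechanism has committed to encoder position t_i, soft attention is computed only over the w-frame chunk ending there. The NumPy formulation and variable names are ours.

```python
import numpy as np

def chunkwise_context(memory, energies, t_i, w):
    """Soft attention restricted to the w encoder states ending at position t_i.

    memory: (T, d) encoder states; energies: (T,) chunk energies; w: chunk width.
    """
    lo = max(0, t_i - w + 1)
    e = energies[lo:t_i + 1]
    weights = np.exp(e - e.max())
    weights /= weights.sum()
    return weights @ memory[lo:t_i + 1]   # (d,) context vector
```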
The paper proposes an extension of a previous monotonic attention model (Raffel et al., 2017) to attend to a fixed-size window up to the alignment position. Both the soft attention approximation used for training the monotonic attention model and the online decoding algorithm are extended to the chunkwise model. In terms of the model, this is a relatively small extension of Raffel et al. 2017. Results show that for online speech recognition the model matches the performance of an offline soft attention baseline, doing significantly better than the monotonic attention model.

Is the offline attention baseline unidirectional or bidirectional? In case it is unidirectional, it cannot really be claimed that the proposed model's performance is competitive with an offline model. My concern with the statement that all hyper-parameters are kept the same as the monotonic model is that the improvement might partly be due to the increase in the total number of parameters in the model. Especially given that w=2 works best for speech recognition, it is not clear that the model extension is actually helping. My other concern is that in speech recognition the time-scale of the encoding is somewhat arbitrary, so possibly a similar effect could be obtained by doubling the time frame through the convolutional layer. While the empirical result is strong, it is not clear that the proposed model is the best way to obtain the improvement.

For document summarization the paper presents a strong result for an online model, but the fact that it is still less accurate than the soft attention baseline makes it hard to see the real significance of this. If the contribution is in terms of speed (as shown with the synthetic benchmark in appendix B), more emphasis should be placed on this in the paper. Sentence summarization tasks do exhibit mostly monotonic alignment, and most previous models with monotonic structure were evaluated on that, so why not test that here? I like the fact that the model is truly online, but that contribution was made by Raffel et al. 2017, and this paper at best proposes a slightly better way to train and apply that model.

---
The additional experiments in the new version give stronger support in favour of the proposed model architecture (vs. the effect of hyperparameter choices). While I'm still on the fence on whether this paper is strong enough to be accepted at ICLR, this version certainly improves the quality of the paper.
iclr_2018_r1AoGNlC-
PROGRAM SYNTHESIS WITH PRIORITY QUEUE TRAINING We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards. We introduce an iterative optimization scheme, where we train an RNN on a dataset of K best programs from a priority queue of the generated programs so far. Then, we synthesize new programs and add them to the priority queue by sampling from the RNN. We benchmark our algorithm, called priority queue training (or PQT), against genetic algorithm and reinforcement learning baselines on a simple but expressive Turing complete programming language called BF. Our experimental results show that our simple PQT algorithm significantly outperforms the baselines. By adding a program length penalty to the reward function, we are able to synthesize short, human readable programs.
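A compact sketch of the training loop as described in the abstract; the function names, batching, and the value of K are placeholders, and the RNN sampling and the top-K maximum-likelihood update are left abstract.

```python
import heapq

def priority_queue_training(sample_programs, reward_fn, fit_topk, K=10, num_iters=1000):
    """Keep the K best-scoring programs seen so far and repeatedly train the RNN on them."""
    queue = []  # min-heap of (reward, program)
    for _ in range(num_iters):
        for prog in sample_programs():            # sample code strings from the RNN policy
            r = reward_fn(prog)
            if len(queue) < K:
                heapq.heappush(queue, (r, prog))
            elif r > queue[0][0]:
                heapq.heapreplace(queue, (r, prog))
        fit_topk([p for _, p in queue])           # maximize log-likelihood of the top-K programs
    return max(queue)[1] if queue else None
```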
This paper focuses on using RNNs to generate straight-line computer programs (i.e. code strings) using reinforcement learning. The basic setup assumes a setting where we do not have access to input/output samples, but instead only have access to a separate reward function for each desired program that indicates how close a predicted program is to the correct one. This reward function is used to train a separate RNN for each desired program. The general space of generating straight-line programs of this form has been explored before, and their main contribution is the use of a priority queue of highest-scoring programs during training. This queue contains the highest-scoring programs which have been observed at any point in the training so far, and they consider two different objectives: (1) the standard policy-gradient objective which tries to maximize the expected reward and (2) a supervised learning objective which tries to maximize the average probability of the top-K samples. They show that this priority queue algorithm significantly improves the stability of the resulting synthesis procedure, such that when synthesis succeeds at all, it succeeds for most of the random seeds used.

This is a nice result, but I did not feel as though their algorithm was sufficiently different from the algorithm used by Liang et al. 2017. In Liang et al. they keep around the best observed program for each input sample. They argue that their work is different from Liang et al. because they show that they can learn effectively using only objective (2) while completely dropping objective (1). However, I'm quite worried these results only apply in very specific setups. It seems that if the policy gradient objective is not used, and there are not K different programs which generate the correct output, then the top-K objective alone will encourage the model to continue to put equal probability on the programs in the top-K which do not generate the correct output.

I also found the setup itself to be poorly motivated. I was not able to imagine a reasonable setting where we would have access to a reward function of this form without input/output examples. The paper did not provide any such examples, and in their experiments they implement the proposed reward function by assuming access to a set of input/output examples. I feel as though the restriction to the reward function in this case makes the problem unnecessarily hard, and does not represent an important use-case.

In addition I had the following more minor concerns:
1. At the end of section 4.3 the paper is inconsistent about whether the test cases are randomly generated or hand picked, and whether they use 5 test cases for all problems, or sometimes up to 20 test cases. If they are hand picked (and the number of test cases is hand chosen for each problem), then how dependent are the results on an appropriate choice of test cases?
2. They argue that they don't need to separate train and test, but I think it is important to be sure that the generated programs work on test cases that are not a part of the reward function. They say that "almost always" the synthesizer does not overfit, but I would have liked them to be clear about whether their reported results include any cases of overfitting (i.e. did they ensure that the final generated program always generalized)?
3. It is worth noting that while their technique succeeds much more consistently than the baseline genetic algorithm, the genetic algorithm actually succeeds at least once on more tasks (19 vs. 17).
The success rate is probably a good indicator of whether the technique will scale to more complex problems, but I would have preferred to see this in the results, rather than just hoping it will be true (i.e. by including more complicated problems where the genetic algorithm never succeeds).
iclr_2018_HJOQ7MgAW
Long short-term memory networks (LSTMs) were introduced to combat vanishing gradients in simple recurrent neural networks (S-RNNs) by augmenting them with additive recurrent connections controlled by gates. We present an alternate view to explain the success of LSTMs: the gates themselves are powerful recurrent models that provide more representational power than previously appreciated. We do this by showing that the LSTM's gates can be decoupled from the embedded S-RNN, producing a restricted class of RNNs where the main recurrence computes an element-wise weighted sum of context-independent functions of the inputs. Experiments on a range of challenging NLP problems demonstrate that the simplified gate-based models work substantially better than S-RNNs, and often just as well as the original LSTMs, strongly suggesting that the gates are doing much more in practice than just alleviating vanishing gradients.
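For concreteness, the weighted-sum view referred to above can be written out as follows (assuming c_0 = 0 and element-wise products throughout; this is our transcription of the standard unrolling, and the weight w_j^t matches the quantity discussed in the review that follows):

```latex
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
    = \sum_{j=1}^{t} \bigg( \prod_{k=j+1}^{t} f_k \bigg) \odot i_j \odot \tilde{c}_j
    = \sum_{j=1}^{t} w_j^t \odot \tilde{c}_j,
\qquad
w_j^t := i_j \odot \prod_{k=j+1}^{t} f_k .
```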
This paper presents an analysis of LSTMs showing that they have a form where the memory cell contents at each step are a weighted combination of the "content update" values computed at each time step. The weightings are defined in terms of an exponential decay on each dimension at each time step (given by the forget gate), which lets the cell be computed sequentially in linear time rather than in the exhaustive quadratic time that would apparently be necessary for this definition. Second, the paper offers a simplification of LSTMs that computes the value by which the memory cell is updated at each time step as a deterministic function of the input rather than a function of the input and the current context. This reduced form of the LSTM is shown to perform comparably to "full" LSTMs.

The decomposition of the LSTM in terms of these weights is useful, and suggests new strategies for comparing existing quadratic-time attention-based extensions to RNNs. The proposed model variations (which replace the "content update", which contains a recurrent network, with a context-independent update) and their evaluations seem rather more arbitrary. First, there are two RNNs present in the LSTM: one controls the gates, one controls the content update. You get rid of one, not the other. You can make an argument for why the one that was ablated was "more interesting", but really this is an obvious empirical question that should be addressed. The second problem of what tasks to evaluate on is a general problem with comparing RNNs. One non-language task (e.g., some RL agent with an LSTM, or learning to execute or something) and one synthetic task (copying or something) might be sensible, although I don't think this is the responsibility of this paper (although it is something that should be considered). Finally, there are many further simplifications of LSTMs that have been explored in the literature: coupled input-forget gates (Greff et al, 2015), diagonal matrices for gates, GRUs. When proposing yet another simplification, some sense of how these different reductions compare would be useful, so I would recommend comparison to those.

Notes on clarity:
Before Eq 1 it's hard to know what the antecedent of "which" is without reading ahead.
For component-wise multiplication, you have been using \circ, but then for the iterated component-wise product, \prod is used. To be consistent, notation like \odot and \bigodot might be a bit clearer.
The discussion of dynamic programming: the dynamic program is also only available because the attention pattern is limited in a way that self-attention is not. This might be worth mentioning.
When presenting Eq 11, the definition of w_j^t elides a lot of complexity. Indeed, w_j^t is only ever implicitly defined in Eq 8, whereas things like the input and forget gates are defined multiple times in the text. Since w_j^t can be defined iteratively and recursively (as a dynamic program), it's probably worth writing both out, for expository clarity. Eq 11 might be clearer if you show that Eq 8 can also be rewritten in the same way, provided you make h_{t-1} an argument to output and content.
Table 4 is unclear. In a language model, the figure looks like it is attending to the word that is being generated, which is clearly not what you want to convey, since language models don't condition on the word they are predicting. Presumably the strong diagonal attention is attending to the previous word when computing the representation used to generate the subsequent word?
In any case, this figure should be corrected to reflect this. This objection also concerns the right-hand figure, and the meaning of the upper vs. lower triangles should be clarified in the caption (rather than just in the text).
iclr_2018_Hk9Xc_lR-
Published as a conference paper at ICLR 2018 ON THE DISCRIMINATION-GENERALIZATION TRADE-OFF IN GANS Generative adversarial training can be generally understood as minimizing a certain moment-matching loss defined by a set of discriminator functions, typically neural networks. The discriminator set should be large enough to be able to uniquely identify the true distribution (discriminative), and also be small enough to go beyond memorizing samples (generalizable). In this paper, we show that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions. This is a very mild condition satisfied even by neural networks with a single neuron. Further, we develop generalization bounds between the learned distribution and the true distribution under different evaluation metrics. When evaluated with neural distance, our bounds show that generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. When evaluated with KL divergence, our bound provides an explanation of the counter-intuitive behaviors of testing likelihood in GAN training. Our analysis sheds light on understanding the practical performance of GANs.
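For readers less familiar with the setup, the moment-matching loss in question is the integral probability metric (IPM) induced by a discriminator set F; this is the standard definition, not a verbatim quote from the paper.

```latex
d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \;
\mathbb{E}_{x \sim \mu}[f(x)] \;-\; \mathbb{E}_{x \sim \nu}[f(x)],
\qquad
\mathcal{F} = \{ f : \|f\|_{\mathrm{Lip}} \le 1 \} \;\Rightarrow\; d_{\mathcal{F}} = W_1 .
```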
In more detail, the analysis of the paper is as follows. Firstly, it primarily focuses on GAN objective functions which are "integral probability metrics (IPMs)"; one way to define these is by way of similarity to the W-GAN, namely IPMs replace the 1-Lipschitz functions in W-GAN with a generic set of functions F. The paper overall avoids computational issues and treats the suprema as though exactly solved by SGD or a related heuristic (the results of the paper simply state supremum, but some of the prose seems to touch on this issue).

The key arguments of the paper are as follows.
1. It argues that the discriminator set should not simply be large, it should be dense in all bounded continuous functions; as a consequence of this, the IPM is 0 iff the distributions are equal (in the weak sense). Due to this assertion, it says that it suffices to use two-layer neural networks as the discriminator set (as a consequence of the "universal approximation" results well known in the neural network literature).
2. It argues the discriminator set should be small in order to mitigate small-sample effects. (Together, points 1 and 2 mimic a standard bias-variance tradeoff in statistics.) For this step, the paper relies upon standard Rademacher results plus a little bit of algebraic glue. Curiously, the paper chooses to argue (and forms as a key tenet, indeed in the abstract) that the size of the generator set is irrelevant for this; only the size of the discriminator matters.

Unfortunately, I find significant problems with the paper, in order from most severe to least severe.
A. The calculation ruling out the impact of the generator in the generalization calculations in 2 above is flawed. Before pointing out the concrete bug, I note that this assertion runs completely counter to intuition, and thus should be made with more explanation (as opposed to the fortunate magic it is presented as). Moreover, I'll say that if the authors choose to "fix" this bug by adding a generator generalization term, the bound is still a remedial application of Rademacher complexity, so I'm not exactly blown away. Anyway, the bug is as follows. The equation which drops the role of the generator in the generalization calculation is equation (10). The proof of this inequality is at the start of appendix E. Looking at the derivation in that appendix, everything is correct up to the second-to-last display, the one with a supremum over nu in G. First of all, this right-hand side should set off alarm bells; e.g., if we make the generator class big, we can make this right-hand side essentially as big as the IPM allows, even when mu = mu_m. Now the bug itself appears when going to the next display: if the definition of d_F is expanded, one obtains two suprema, each over _its own_ optimization variable (in this case the variables are discriminator functions). When going to the next equation, the authors accidentally made the two suprema share the same variable and invoked a fortuitous but incorrect cancellation. As stated a few sentences back, one can construct trivial counterexamples to these inequalities, for instance by making mu and mu_m arbitrarily close (even exactly equal if you wish) and then making nu arbitrarily far away and the discriminator set large enough to identify this.
B. The assertions in 1, regarding the sizes of discriminator sets needed to achieve the goal of the IPM being 0 iff the distributions are equal (in the weak sense), are nothing more than immediate corollaries of approximation results well known for decades in the neural network literature. It is thus hard to consider this a serious contribution.
C. I will add, on a non-technical note, that the paper's assertion on what a good IPM "should be" is arguably misled. There is not only a meaning to specific function classes (as with Lip_1 in Wasserstein_1) beyond simply "many functions", but moreover there is an interplay between the size of the generator set and the size of the discriminator set. If the generator set is simple, then the discriminator set can also get away with being simple (this is discussed in the Arora et al. 2017 ICML paper, amongst other places). Perhaps I am the one that is misled, but even so the paper does not appear to give a good justification of its standpoint.

I will conclude with typos and minor remarks. I found the paper to contain a vast number of small errors, to the point that I doubt it received even a single proofread.
Abstract, first line: "a minimizing"? General grammar issue in this sentence; this sort of issue appears throughout the paper.
Abstract, "this is a mild condition": optimizing over a function class which is dense in all bounded measurable functions is not a mild assumption. In the particular case under discussion, the size of the network cannot be bounded (even though it has just two layers, or, as the authors say, is the span of single neurons).
Abstract, "...regardless of the size of the generator or hypothesis set": this really needs explanation in the abstract, it is such a bold claim. For instance, I wrote "no" in the margin while reading the abstract the first time.
Intro, first line: its -> their.
Intro, #3 "energy-based GANs": 'm' clashes with the sample size.
Intro, bottom of page 1, the sentence with "irrelenvant": I can't make any sense of this sentence.
Intro, bottom of page 1, "is a much smaller discriminator set": no, the Lip_1 functions are in general incomparable to arbitrary sets of neural nets. From here on I'll comment less on typos.
Middle of page 2, point (i): is this the only place it is argued/asserted that the discriminator set should contain essentially everything? I think this needs a much more serious justification.
Section 1.1: Lebegure -> Lebesgue.
Page 4, vicinity of equation 5: there should really be a mention that none of these universal approximation results give a meaningful bound on the size of the network (the bound given by Barron's work, while nice, is still massive).
Start of section 3: to be clear, while one can argue that the Lipschitz-1 constraint has a regularization effect, the reason it was originally imposed is to match the Kantorovich duality for Wasserstein_1. Moreover, I'll say this is another instance of the paper treating the discriminator set as irrelevant other than how close it is to being dense in all bounded measurable functions.