Dataset columns: id (string, 11–20 chars), paper_text (string, 29–163k chars), review (string, 666–24.3k chars).
iclr_2020_Bkx4AJSFvB
Neural networks are known to be sensitive to adversarial perturbations. To investigate this undesired behavior we consider the problem of computing the distance to the decision boundary (DtDB) from a given sample for a deep neural net classifier. In this work we present an iterative procedure where in each step we solve a convex quadratic programming (QP) task. Solving the single initial QP already results in a lower bound on the DtDB and can be used as a robustness certificate of the classifier around a given sample. In contrast to currently known approaches our method also provides upper bounds used as a measure of quality for the certificate. We show that our approach provides better or competitive results in comparison with a wide range of existing techniques.
This paper proposes to use a convex QP relaxed formulation to solve the neural network verification problem, and demonstrates its effectiveness on a few small networks (1-2 hidden layers) on the MNIST and Fashion-MNIST datasets. There are several benefits to the proposed method: it is a technically tighter relaxation of ReLU neurons, and empirically the authors show it performs well in L2 norm (but not L infinity norm, unfortunately); solving this formulation does not require knowing pre-activation bounds of hidden neurons; also, the convexity of the QP problem needs to be determined only once for a model, rather than once per example. Although the QP relaxation of ReLU neurons is not new and has been used in Raghunathan et al., 2018b, they solve the problem as an SDP rather than a convex QP. The SDP is tighter than the convex QP formulation used in this paper, but it is much slower.
Issues and Questions:
1. The concept of "bi-directional verification" is not new, since finding an upper bound is basically finding adversarial examples. Many previous papers have used PGD-based attacks to obtain the upper bound. Convex relaxation based verification methods like CROWN can also be used for generating adversarial examples; this is called the "Interval attack" and is demonstrated in [1][2]. Claiming this is the first "bi-directional robustness verification technique" is not accurate.
2. The use of FGSM as an upper bound is inappropriate, as FGSM is known to be a very weak attack. Replacing it with a multi-step PGD attack is necessary. Using a stronger attack will also close the gap between the upper and lower bounds. Also, compare the upper bound found by PGD with QPRel-LB and update Figure 3(a). If a stronger attack like PGD is used, I think for larger norms CROWN+PGD in Figure 3(c) should be able to verify almost all examples.
3. The models used in Table 1 are trained using an L2 perturbation of epsilon=0.1. This epsilon value is too small for the L2 norm. On page 22 (the last page of the appendix in the arXiv version) of [3], you can find that they conduct L2 robustness training but at a much larger epsilon value (eps=1.58). Since the authors did not use this standard epsilon setting, my concern is whether the proposed method works at a larger L2 epsilon.
4. Some experiments on larger and deeper networks are necessary; in particular, it is interesting to see how CROWN and the proposed method scale to deeper networks. The presented experiments only include networks with 1 and 2 hidden layers, which is insufficient. A new experiment with the number of hidden neurons per layer kept fixed (say 50) while increasing the depth from 2 to 10 would be very helpful.
5. The main claim of the paper in the Introduction needs to be made clearer; in particular, the primary strength of the proposed algorithm is in the L2 norm, and it does not seem to outperform CROWN in the L infinity norm setting.
Further improvements and potential directions:
1. In the proposed method, the authors relax ReLU neurons using quadratic programming. This relaxation does not require computing bounds for the neuron activation values. However, I think it is possible to include per-neuron activation upper and lower bounds as constraints of the QP problem (adding them as constraints like l <= x <= u in Eq. QPRel). This would make the bounds tighter. The per-neuron lower and upper bounds can be obtained efficiently using CROWN, so there is not too much computational cost.
2. Improving the scalability of the QP relaxation is another challenge. CROWN can be implemented efficiently on GPUs [4].
For QP relaxations, this can possibly be done by transforming QP solving into a computation graph that can be executed efficiently on GPUs (this is a potential future work direction and I do not expect the authors to address it during the discussion period). Overall I am positive about this paper; however, before accepting it I think the authors should at least make their claims clearer (the relaxation performs well mainly in the L2 norm, and the concept of "bi-directional verification" is also not entirely new), replace FGSM with a 200-step PGD and compare the upper bound found by PGD with QPRel, and test the proposed algorithm on models trained with a larger epsilon (eps=1.58 to align with previous works, if possible) and on deeper models.
[1] Wang, S., Chen, Y., Abdou, A., & Jana, S. (2019). Enhancing Gradient-based Attacks with Symbolic Intervals. arXiv preprint arXiv:1906.02282.
[2] Wang, S., Chen, Y., Abdou, A., & Jana, S. (2018). MixTrain: Scalable Training of Verifiably Robust Neural Networks. arXiv preprint.
[3] Wong, E., Schmidt, F., Metzen, J. H., & Kolter, J. Z. (2018). Scaling provable adversarial defenses. In Advances in Neural Information Processing Systems (pp. 8400-8409).
[4] https://github.com/huanzhang12/RecurJac-and-CROWN
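To make the PGD suggestion above concrete, here is a minimal sketch of the kind of multi-step L2 attack that could replace FGSM for the upper bound. This is generic PyTorch code written under my own assumptions about the model interface (a classifier `model`, an input `x` of shape [1, d], and an integer `label` of shape [1]); it is not the authors' implementation.

```python
import torch

def pgd_l2_upper_bound(model, x, label, eps=2.0, alpha=0.1, steps=200):
    """Run a multi-step L2 PGD attack; if it succeeds, ||delta||_2 upper-bounds the DtDB."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model(x + delta)
        loss = torch.nn.functional.cross_entropy(logits, label)
        loss.backward()
        with torch.no_grad():
            g = delta.grad / (delta.grad.norm() + 1e-12)  # normalized gradient step
            delta += alpha * g
            if delta.norm() > eps:                        # project back onto the L2 ball
                delta *= eps / delta.norm()
        delta.grad.zero_()
    adv = x + delta.detach()
    success = model(adv).argmax(dim=1) != label
    # A successful attack certifies an upper bound on the distance to the boundary.
    return delta.norm().item() if success.item() else float("inf")
```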
iclr_2020_H1laeJrKDB
Recent deep generative models are able to provide photo-realistic images as well as visual or textual content embeddings useful to address various tasks of computer vision and natural language processing. Their usefulness is nevertheless often limited by the lack of control over the generative process or the poor understanding of the learned representation. To overcome these major issues, very recent work has shown the interest of studying the semantics of the latent space of generative models. In this paper, we propose to advance on the interpretability of the latent space of generative models by introducing a new method to find meaningful directions in the latent space of any generative model along which we can move to control precisely specific properties of the generated image like the position or scale of the object in the image. Our method does not require human annotations and is particularly well suited for the search of directions encoding simple transformations of the generated image, such as translation, zoom or color variations. We demonstrate the effectiveness of our method qualitatively and quantitatively, both for GANs and variational auto-encoders. The method is illustrated on BigGAN (Brock et al., 2018), showing that the position of the object can be controlled within the image.
This paper proposes a method to learn and control continuous factors of variations within generative models by finding meaningful directions in the latent space which correspond to specified properties. A new method is proposed for inverting generative models and embedding images in the latent space when an encoder is not available. Specifically, reconstruction error is defined in the Fourier domain such that the weighting on high frequency image components can be reduced. Results are evaluated with qualitative comparison to previous embedding methods. Using this image embedding technique, a dataset of latent space trajectories is created by manipulating a desired property in images (such as position or scale) via affine transformations and recording the latent space vectors of the original and new images. The dataset is then used to learn a simple model of the latent space transformation corresponding to changes in the desired image property, which in turn can be used to manipulate images accordingly. To evaluate the effectiveness of this image manipulation approach, a saliency detector is used to measure the change in position or scale of objects in generated images as the latent codes are changed. Overall, I would tend towards accepting this work. The goal of being able to manipulate continuous factors of variation within generative models is useful for controllable image synthesis, and the proposed method clearly achieves the desired result. Things to improve the paper: 1) The paper proposes a new reconstruction error metric which is optimized to embed images into the latent space of the generative models. While this new metric is compared qualitatively to existing methods, quantitative evaluation is lacking. It would be useful to also include quantitative comparison of methods measuring the perceptual distance between the original image and the embedded image, perhaps by using Learned Perceptual Image Patch Similarity (LPIPS) [1]. Minor things to improve the paper that did not impact the score: 2) In the abstract: "Our method is weakly supervised...". I am not sure if this method would be considered weakly supervised. I might tend more towards calling it self-supervised, since we have exact labels that are derived from transformations applied to the images themselves. 3) In the first paragraph of the introduction: "an increasing number of applications are emerging such as image in-painting, dataset-synthesis, deep-fakes... ". I find the use of the ellipses here to be a bit strange, since it seems like the sentence is trailing off mid-thought. I would recommend the use of "etc." over "...". 4) In Section 2.2, second paragraph, the dSprite dataset is mentioned but not cited. The reference is not given until Section 3. Should the citation be paired with the first mention of the dataset? Or even just in both places. 5) In Section 3, Implementation details: "The first part is injected at the bottom layer while next parts are used to modify the style of the generated image thanks to AdaIN layers (Huang & Belongie, 2017)". BigGAN uses conditional BatchNorm instead of AdaIN, although they are both very similar. I think the proper citation here is [2], which first introduced conditional BatchNorm. Questions: 6) I am not fully convinced of the argument that using a saliency detector makes the method more general purpose than a dedicated object detector. 
The majority of high quality generative models are class conditional, hence requiring a labelled dataset, and therefore an object detector can easily be trained on the same dataset. Additionally, Section 3.2 mentions that "We performed quantitative analysis on ten chosen categories for which the object can be easily segmented by using saliency detection approach", which seems to indicate that the saliency detector struggles with some objects. How does the saliency detector perform on more complicated objects?
References:
[1] Zhang, Richard, et al. "The unreasonable effectiveness of deep features as a perceptual metric." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
[2] De Vries, Harm, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C. Courville. "Modulating early visual processing by language." In Advances in Neural Information Processing Systems, pp. 6594-6604. 2017.
### Post-Rebuttal Comments ###
Thank you for addressing my concerns and for adding the quantitative reconstruction measures. Appendix C looks much more complete now. My overall opinion of the paper remains about the same, so I will leave my score unchanged.
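As a concrete illustration of the LPIPS suggestion in point 1), here is a hedged sketch using the publicly available `lpips` package (Zhang et al., 2018); `original` and `embedded_recon` are assumed to be image batches of shape (N, 3, H, W) scaled to [-1, 1], not variables from the paper.

```python
import lpips
import torch

loss_fn = lpips.LPIPS(net='alex')          # AlexNet-based perceptual metric
with torch.no_grad():
    d = loss_fn(original, embedded_recon)  # one perceptual distance per image pair
print("mean LPIPS:", d.mean().item())
```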
iclr_2020_HJgySxSKvB
Factorization Machines (FMs) are an important supervised learning approach due to their unique ability to capture feature interactions when dealing with high-dimensional sparse data. However, FMs assume each sample is independently observed and are hence incapable of exploiting the interactions among samples. On the contrary, Graph Neural Networks (GNNs) have become increasingly popular due to their strength at capturing the dependencies among samples. But unfortunately, they cannot efficiently handle high-dimensional sparse data, which is quite common in modern machine learning tasks. In this work, to leverage their complementary advantages and yet overcome their issues, we propose a novel approach, namely Deep Relational Factorization Machines, which can capture both the feature interaction and the sample interaction. In particular, we disclose the relationship between the feature interaction and the graph, which opens a brand new avenue to deal with high-dimensional features. Finally, we demonstrate the effectiveness of the proposed approach with experiments on several real-world datasets.
In this paper, the authors propose to generalize the FM to consider both the interaction between features and the interaction between samples. For the interaction between features, the authors propose to use graph convolution to capture high-order feature interactions. Moreover, the authors construct a graph on the instances based on similarity. Then a GCN is applied to the sample graph, where the feature embedding is shared between the two components. Experiments are carried out on four datasets with tasks of link prediction and regression. Comparison to several baselines demonstrates the superior performance of the proposed method.
Strengths:
1. The idea of utilizing a GCN on the feature co-occurrence graph is interesting and innovative. The idea could possibly be combined with other variants of deep FM models.
2. It is an interesting idea to combine sample similarity together with feature co-occurrence for better prediction accuracy.
Weaknesses:
1. Many descriptions in the paper are not very clear. First, the authors only mention how prediction is carried out with trained parameters. However, there is no description of the training process, such as what target is used for the two components. What is the training procedure? Are the two components trained jointly? Second, the authors provide little description of how the sample similarity graph is constructed, except for the Ad campaign dataset. Third, it is not clear how the link prediction evaluation is carried out. From the size of the graph, the authors seem to include both users and items in the graph. However, users and items have disjoint feature sets. It is not clear how the GCN is computed for the heterogeneous nodes in the graph. Moreover, how is link prediction carried out? By taking the inner product (cosine similarity) of the final representations?
2. For equation (8) in section 4.1, why do we need to compute h_i^{RFI}? This should be the feature representation of sample i. However, the average is computed without including sample i itself. Also, are the neighbors defined in the sample similarity graph? Should we use the sample interaction in section 4.2 to capture that?
3. Though it is an interesting idea to use graph convolution on the feature co-occurrence graph, it would be much better if the authors could provide more intuition on the output of the GCN. It would be helpful to study a few simple cases, for example without non-linearity: is it a generalization of high-order FM without non-linearity? Also, it would be interesting to see experimental results using the graph-convolved feature representation directly as the final representation, and some visualization of the learned feature embeddings would also help.
4. The authors should carry out an ablation study for the different components of the model. Moreover, it would be much better if the authors could carry out experiments on some widely used recommendation datasets and use standard evaluation metrics for ranking.
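For reference when thinking about point 3, the standard second-order FM score that the proposed model builds on can be written compactly using the O(kd) identity. This is a generic sketch with illustrative names, not the authors' code.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """x: feature vector (d,), w0: bias, w: linear weights (d,), V: (d, k) factor embeddings."""
    linear = w0 + w @ x
    xv = x @ V                                          # (k,) sum_i v_i x_i
    # sum_{i<j} <v_i, v_j> x_i x_j via the standard identity
    pairwise = 0.5 * np.sum(xv ** 2 - (x ** 2) @ (V ** 2))
    return linear + pairwise
```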
iclr_2020_SJexHkSFPS
We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action. Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed. In order to develop an algorithmic framework for such concurrent control problems, we start with a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architectural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must "think while moving."
The paper tackles the problem of making decisions for the next action while still engaged in executing the previous action. Such a delay could either be part of the design (like a robot deciding the next action before its actuators and motors have come to full rest after the current action) or an artefact of the delays inherent in the system (i.e. latency induced by computation or latency of sensors). The paper shows how to model such delays within the Q-learning framework, shows that this modelling preserves the desirable contraction property of the Bellman update operator, and puts the model into practice through an extensive set of experiments: learning policies for several simulated settings and a real-world setting. The authors claim that the addition of this "delay" does not hinder the performance of the RL method much if it is given sufficient "context" about the delay, i.e., given extra features as input in order to learn to compensate for it. The writing of the paper is lucid and sufficient background is provided to make the paper self-sufficient in its explanations. However, there are some reasons which do not allow me to fully support the paper's acceptance. The changes made to the basic Q-learning setup, albeit novel and with desirable properties, are in my opinion (i) theoretically relatively straightforward, (ii) not expressive enough to capture the problem in its full generality (explained later), and (iii) in need of more empirical justification with problems where their modification is indeed indispensable. The authors touch on several different research areas cursorily (viz. continuous reinforcement learning, Bellman contractions, feature engineering) while providing grounds for their idea, but in the end return to the familiar domain of discrete Q-learning with semi-hand-crafted (though theoretically motivated) features, where the latency of actions can take a set of fixed values and the state is sampled at fixed intervals. If the actions are continuous, then could the method from Doya (2000) be used directly to solve these problems? Can the value-based models which he describes be augmented, and extensions developed which build on Lemma 3.1 instead of the well-trodden ground of Lemma 3.2? Especially if one of the objectives on which the authors claim their policies are better is "policy duration", then the absence of purely continuous policies is particularly egregious. Further, reducing the policy duration seems like an independent objective which perhaps can be used for reward shaping for the traditional policy methods, which would also lead to different baselines. The authors explicitly say that their method focuses on "optimizing for a specific latency regime as opposed to being robust to all of them," and that they explicitly avoid learning forward models by including additional features. However, the advantages of placing such restrictions on the design space are unclear at best. Would it be the case that the high-dimensional methods will fail in this setting? Are there theoretical advantages to limiting attention to known latency regimes? I suspect that the authors have concrete reasons for making these design decisions, but these do not come across in the paper, either in the writing or by means of additional baselines. An example of a different approach towards the problem, which the authors overlook in their related work section, is that of learning with spiking neurons and point processes.
These areas of research have also been interested in problems of the "thinking while moving" nature: that of reinforcement learning in the context of neurons where the neurons act by means of spikes in response to the environment and other "spikes" [1, 2]. More recently, with point processes, methods have been developed to attain truly asynchronous action and state updates [3, 4]. A differently motivated work which ends up dealing with similar problems is in the direction of adaptive skip intervals [5], where the network also chooses the "latency" in the discrete sense. Adding such related work would help better contextualize this paper. Some other ways the authors can improve the paper are (in no particular order): - The description of the Vector-to-go is insufficient; some concrete examples will help. - The results of the simulated experiments are given in the form of distributions and it is very difficult to discern the effect of individual features in Figure 1. Additionally, due to missing error bars, or other measures of uncertainty, the claim that the performance of models with and without the delayed-actions is comparable to the blocking setting seems tenuous at best, just looking at the rewards. - In particular, for the real experiments, we need more details about the experiment runs to determine why the performance of the policies in the real world is so vastly different. Could the authors describe why the gap can be completely covered through simulations but not in the real world? [1]: Vasilaki, Eleni, et al. "Spike-based reinforcement learning in continuous state and action space: when policy gradient methods fail." PLoS computational biology 5.12 (2009): e1000586. [2]: Frémaux, Nicolas, Henning Sprekeler, and Wulfram Gerstner. "Reinforcement learning using a continuous time actor-critic framework with spiking neurons." PLoS computational biology 9.4 (2013): e1003024. [3]: Upadhyay, Utkarsh, Abir De, and Manuel Gomez Rodriguez. "Deep reinforcement learning of marked temporal point processes." Advances in Neural Information Processing Systems. 2018. [4]: Li, Shuang, et al. "Learning temporal point processes via reinforcement learning." Advances in Neural Information Processing Systems. 2018. [5]: Neitz, Alexander, et al. "Adaptive skip intervals: Temporal abstraction for recurrent dynamical models." Advances in Neural Information Processing Systems. 2018.
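To illustrate what I understand by the "context" features discussed in the paper (previous action, vector-to-go, time since observation), here is a hypothetical sketch of the kind of observation augmentation involved; the exact feature set is my assumption, not the paper's specification.

```python
import numpy as np

def augment_observation(obs, prev_action, time_since_obs, action_latency):
    """obs: raw state features; prev_action: one-hot or continuous action vector;
    time_since_obs / action_latency: scalars describing the delay context the
    Q-network can use to compensate for the known latency regime."""
    return np.concatenate([obs,
                           prev_action,
                           [time_since_obs, action_latency]])
```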
iclr_2020_S1efxTVYDr
For typical sequence prediction problems like language generation, maximum likelihood estimation (MLE) has been commonly adopted as it encourages the predicted sequence most consistent with the ground-truth sequence to have the highest probability of occurring. However, MLE focuses on a once-for-all matching between the predicted sequence and the gold standard, consequently treating all incorrect predictions as being equally incorrect. We call such a drawback negative diversity ignorance in this paper. Treating all incorrect predictions as equal unfairly downplays the nuance of these sequences' detailed token-wise structure. To counteract this, we augment the MLE loss by introducing an extra KL divergence term which is derived from comparing a data-dependent Gaussian prior and the detailed training prediction. The proposed data-dependent Gaussian prior objective (D2GPo) is defined over a prior topological order of tokens, poles apart from the data-independent Gaussian prior (L2 regularization) commonly adopted for smoothing the training of MLE. Experimental results show that the proposed method can effectively make use of more detailed prior in the data and significantly improve the performance of typical language generation tasks, including supervised and unsupervised machine translation, text summarization, storytelling, and image captioning.
This paper proposes to add a prior/objective to the standard MLE objective for training text generation models. The prior penalizes incorrect generations/predictions when they are close to the reference; thus, in contrast with standard MLE alone, the training objective does not equally penalize all incorrect predictions. For the experiments, the authors use cosine similarity between fastText embeddings to determine the similarity of a predicted word and the target word. The method is tested on a comprehensive set of text generation tasks: machine translation, unsupervised machine translation, summarization, storytelling, and image captioning. In all cases, simply adding the proposed prior improves over a state-of-the-art model. The results are remarkable, as the proposed prior is useful despite the variety of architectures, tasks (including multi-modal ones), and models with/without pre-training. In general, it is promising to pursue work in altering the standard MLE objective; changes to learning objective seem orthogonal to the modeling gains made in many papers (as evidenced by the gains the authors show across diverse models). This paper opens up several new directions, i.e., how can we impose even more effective priors? The authors show that it's effective to use a relatively simple fastText-based prior, but it's possible to consider other priors based on large-scale pre-trained language models or learned models. In this vein, a concurrent paper "Neural Text Generation with Unlikelihood Training" has also shown it effective to alter the standard MLE objective. I think it would be nice to discuss this paper and related works. Overall, I think the approach is quite general and elegant. My main criticism is that the writing was unfocused or unclear at times. The intro discusses a variety of problems in generation, before explaining that the authors only intend to tackle one ("negative diversity ignorance"). It would have been more helpful to read more text in the intro that motivated the problem of negative diversity ignorance and the proposed solution. The second paragraph in the Discussion in Section 4 is rather ambiguous and hand-wavy. It would be nice to see the authors' intuition described more rigorously (i.e., explicitly describing in math how the cosine similarity score is used in the Gaussian prior, or describing in math how the central limit theorem is used). Some of the existing mathematical explanation in section 4 could be made simpler or more clear (the description of f(y) seems to be a distraction since it doesn't end up in the final loss). I would have also appreciated more analysis. After reading the paper, I have the following questions (which the authors may be able to address in the rebuttal): * Do off-the-shelf fastText embeddings work well? How important is it to train fastText embeddings on the data itself? If off-the-shelf embeddings worked well, that could make the method easier to use for others in practice. * How does the gain in performance with D2GPo vary based on the number of training examples? Priors are generally more helpful in low-data regimes. If that is the case here as well, you might get even more compelling results on low-data tasks (all tasks attempted here are somewhat large-scale, as I understand) * Qualitatively, do you notice any difference in the generations? How does the model make mistakes (are these more "semantic" somehow, i.e. swapping a different synonym in). 
Perhaps the Gaussian prior has some failure modes, i.e., where it increases the probability of very incorrect/opposite words because they have a similar fastText representation. These kinds of intuitions would be useful to know. I also have one technical question: * When you compare against MASS (Song et al. 2019), do you use the same code and/or pre-trained weights from MASS, or do you pre-train from scratch using the procedure from MASS? (The wording in the text is somewhat ambiguous.) I'm just wondering how comparable the results are vs. MASS, or if it would be useful to know how your version of the pre-trained model does. Despite my above questions/concerns, I think the proposed method or its predecessors could provide improvements across a variety of text generation tasks, so I overall highly recommend this paper for acceptance.
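For concreteness, here is a rough sketch of my reading of how the fastText-based Gaussian prior and the extra KL term could be assembled; the function and variable names are mine, and the exact ordering and normalization used by the authors may differ.

```python
import torch
import torch.nn.functional as F

def d2gpo_kl(log_probs, gold_id, embeddings, sigma=1.0):
    """log_probs: (V,) model log-probabilities for one position;
    embeddings: (V, d) fastText vectors; gold_id: index of the gold token."""
    gold = embeddings[gold_id]
    sim = F.cosine_similarity(embeddings, gold.unsqueeze(0), dim=-1)   # (V,)
    dist = 1.0 - sim                                                   # distance to gold token
    prior = torch.softmax(-dist ** 2 / (2 * sigma ** 2), dim=-1)       # Gaussian-shaped soft target
    return F.kl_div(log_probs, prior, reduction='sum')                 # KL(prior || model)
```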
iclr_2020_Ske6qJSKPH
We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization. This allows us to explicitly search for schedules that achieve good generalization. We describe the structure of the gradient of a validation error w.r.t. the learning rate, the hypergradient, and based on this we introduce a novel online algorithm. Our method adaptively interpolates between the recently proposed techniques of Franceschi et al. (2017) and Baydin et al. (2018), featuring increased stability and faster convergence. We show empirically that the proposed method compares favorably with baselines and related methods in terms of final test accuracy.
In this paper, the authors introduce a hypergradient optimization algorithm for finding learning rate schedules that maximize test set accuracy. The proposed algorithm adaptively interpolates between two recently proposed hyperparameter optimization algorithms and performs comparably to these baselines in terms of convergence and generalization. Overall the paper is interesting, although I found it a bit dense and hard to read. I frequently found myself having to scroll to different parts of the paper to remind myself of the notation used and the definition of the different matrices. This makes it harder to evaluate the paper properly. The proposed algorithm seems interesting, however, and the experimental results look quite impressive. I have a few concerns regarding the experiments, however, which explain my score:
1. In figure 2, does MARTHE diverge for values of beta greater than 1e-4? This seems to indicate that MARTHE is somehow more sensitive to beta than the other variations used. Do the authors have any intuition about what might be causing this behavior?
2. The initial learning rate for SGDM and Adam was fixed at certain values for all experiments. Why is this a reasonable thing to do? It feels like MARTHE should be compared to SGDM and Adam at least when the initial learning rate is tuned for these properly. Otherwise, it doesn't feel like a fair evaluation. To the best of my knowledge, the final accuracies achieved with MARTHE, however, seem quite competitive with the best results typically reached with tuned SGDM on the convolutional nets used in the paper.
3. The learning rate schedules found by MARTHE seem to be somewhat counterintuitive. While an initial increase matches the heuristic of warmup learning rates frequently used when training convnets, the algorithm seems to decrease the learning rate after that even more quickly than the greedy algorithm HD does. Do the authors have any intuition why this can lead to such a big improvement in performance over HD?
4. Is it possible to provide some sort of estimate of how much computation MARTHE requires compared to a single SGDM run? How feasible is it to test this algorithm on a bigger classification model on ImageNet?
I think this paper is borderline, although I am leaning towards accepting it given the impressive empirical results. It would really improve the paper if the readability was improved and if larger-scale experiments were included.
====================================
Edit after rebuttal: I thank the authors for their response. I am happy with their response and am sticking to my score.
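For context on the HD baseline mentioned in point 3, the greedy hypergradient-descent rule of Baydin et al. (2018) adapts the learning rate online from the dot product of successive gradients; a minimal sketch (generic code, not the paper's implementation):

```python
import numpy as np

def hd_sgd(grad_fn, w, alpha=0.01, beta=1e-4, steps=1000):
    """SGD with hypergradient descent on the learning rate alpha."""
    g_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        alpha = alpha + beta * np.dot(g, g_prev)  # greedy online update of the lr
        w = w - alpha * g
        g_prev = g
    return w, alpha
```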
iclr_2020_rygjmpVFvB
Out-of-distribution examples in the MNIST dataset, from another perspective, are found to belong to the difference between the set of examples in MNIST and the universal set. It should be noted that in a traditional GAN, the target distribution is identical to the training data distribution; however, in the DSGAN these two distributions are considered to be different. This paper makes the following contributions: (1) We propose the DSGAN to generate any unseen data, as long as the density of the target (unseen data) distribution is the difference between those of any two distributions, p_d and p_{\hat d}. (2) We show that the DSGAN possesses the flexibility to learn different target (unseen data) distributions in two key applications, semi-supervised learning and novelty detection. Specifically, for novelty detection, the DSGAN can produce boundary points around the seen data because this type of unseen data is easily misclassified. For semi-supervised learning, the unseen data are linear combinations of any labeled data and unlabeled data, excluding the labeled and unlabeled data themselves. (3) The DSGAN yields results comparable to semi-supervised learning methods but with a short training time and low memory consumption. In novelty detection, combining both the DSGAN and variational auto-encoder (VAE, Kingma & Welling (2014b)) methods achieves state-of-the-art results.
This paper proposes DSGAN, which learns to generate unseen data from the seen data distribution p_d and its somewhat "broadened" version p_{\hat d} (e.g., p_d convolved with a Gaussian). The "unseen data" are the data that appear in p_{\hat d} but not in p_d. DSGAN is trained to generate such data. In particular, it uses samples from p_d as fake data and samples from p_{\hat d} as the real ones. Although the idea seems interesting, the paper seems a bit incremental and is a simple application of existing GAN techniques. The paper shows two applications (semi-supervised learning and novelty detection), and it is not clear that the proposed method outperforms existing GAN methods in classification accuracy on MNIST/SVHN/CIFAR10 (Table 1) or existing sampling methods (Table 3). It seems that the sampled reconstruction results (Fig. 8) are not as good as those of the VAE on CIFAR10. I would also expect more ablation studies about how to pick p_{\hat d}, which seems to be the key to this approach, on MNIST and CIFAR10. In terms of writing, the paper is a bit confusing in terms of motivations and notations. Overall, the method looks incremental and the experimental results are mixed on small datasets, so I vote for rejection. Note that I am not an expert on GANs/VAEs, so I put low confidence here.
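To spell out my reading of the training signal (with my own names, not the authors' notation): the discriminator treats samples from the broadened distribution p_{\hat d} as real and both seen data from p_d and generated samples as fake, pushing the generator toward the set difference. A hedged sketch:

```python
import torch
import torch.nn.functional as F

def dsgan_d_loss(D, x_hat_d, x_d, x_gen):
    """D outputs a probability of being 'real' (i.e. drawn from p_hat_d)."""
    d_real, d_seen, d_gen = D(x_hat_d), D(x_d), D(x_gen.detach())
    real = F.binary_cross_entropy(d_real, torch.ones_like(d_real))     # p_hat_d as real
    fake_seen = F.binary_cross_entropy(d_seen, torch.zeros_like(d_seen))  # p_d as fake
    fake_gen = F.binary_cross_entropy(d_gen, torch.zeros_like(d_gen))     # generator as fake
    return real + fake_seen + fake_gen

def dsgan_g_loss(D, x_gen):
    d_gen = D(x_gen)
    return F.binary_cross_entropy(d_gen, torch.ones_like(d_gen))
```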
iclr_2020_BkgZSCEtvr
In this paper, we propose Continuous Graph Flow, a generative continuous flow based method that aims to model complex distributions of graph-structured data. Once learned, the model can be applied to an arbitrary graph, defining a probability density over the random variables represented by the graph. It is formulated as an ordinary differential equation system with shared and reusable functions that operate over the graphs. This leads to a new type of neural graph message passing scheme that performs continuous message passing over time. This class of models offers several advantages: a flexible representation that can generalize to variable data dimensions; ability to model dependencies in complex data distributions; reversible and memory-efficient; and exact and efficient computation of the likelihood of the data. We demonstrate the effectiveness of our model on a diverse set of generation tasks across different domains: graph generation, image puzzle generation, and layout generation from scene graphs. Our proposed model achieves significantly better performance compared to state-of-the-art models.
This paper proposes a new variant of invertible flow-based model for graph-structured data. Specifically, the authors propose a continuous normalizing flow model for graph generation, the first in the graph generation literature. The authors claim that the free-form model architecture of the neural ODE formulation of continuous flows is advantageous over standard discrete flow models. Experimental results show that the proposed model is superior to recent models on image puzzle and layout generation datasets. Overall, the manuscript is well organized. The combination of continuous flows and graph-structured data is new in the literature (as far as I know). The proposed formulation seems natural and reasonable. I find no fatal flaws in the formulation. In the experiments, the proposed model achieved good scores against recent GNN works. Concerning existing invertible flow-based models for graph-structured data, Madhawa's work is one of the first attempts in the literature. I think the paper below should be referenced appropriately: Madhawa+, "GraphNVP", arXiv:1905.11600, 2019. The way of incorporating relational structure into flows is very similar to GraphNVP and Graph Normalizing Flow: using neighboring nodes' hidden vectors as parameters (or as input to parameter inference networks). In addition, I found no special tricks or theoretical considerations needed to achieve the continuous flow for graphs. Based on these points, I think the technical contribution of this paper is somewhat limited. I cannot find information about the specific chosen forms of f-hat and g in Eq. (10) within the manuscript. Are the choices of f-hat and g crucial for performance? It is preferable if the authors can present any experimental validations concerning this issue. My main concerns are in the experimental section. I'm not fully convinced of the necessity of continuous normalizing flows for the experimental tasks. None of the experimental tasks have "intrinsic continuous time dynamics over graph-structured data" (Sec. 2). Then, what is the rationale for adopting Continuous Graph Flow for these tasks? One reason to adopt continuous flows is that the ODE formulation allows users to choose a free-form model architecture, yielding more complicated mappings to capture delicate variable distributions. I expect some assessments to be made concerning this issue. My suggestions are (i) to test a discretized model of the proposed Continuous Graph Flow and see how the discretization deteriorates the performance, and (ii) to test several choices of f-hat and g (Eq. 10) to show the necessity of the ODE formulation accommodating a free-form model architecture. In the current manuscript, Graph Normalizing Flow (GNF) is the closest competitor. However, GNF is not tested in the puzzle and scene graph experiments. Why is that? I'd like to hear the authors' opinions concerning these issues, and hope some discussions are included in the manuscript.
Summary
+ continuous normalizing flows are first applied to graph-structured data
+ the manuscript is well organized
+- natural and reasonable formulation, but at the same time the technological advancement is limited
- less convinced of the need to adopt continuous flows for graph-structured data with no intrinsic continuous dynamics
- necessity or advantages of "continuous" flows are not well assessed in the experiments; please consider the additional assessments suggested in the review comments
- GNF, the closest competitor, is omitted from the 2nd and 3rd experiments, with no explanation
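To make suggestion (i) concrete: a fixed-step Euler discretization of the continuous message passing could be compared against the adaptive ODE solver to see how much the "continuous" formulation actually matters. This is an illustrative sketch only; `message_fn` stands in for whatever forms of f-hat and g the authors chose.

```python
import torch

def euler_message_passing(h0, adjacency, message_fn, T=1.0, num_steps=10):
    """h0: (num_nodes, dim) initial node states; adjacency: (num_nodes, num_nodes)."""
    h, dt = h0, T / num_steps
    for step in range(num_steps):
        t = step * dt
        dh = message_fn(t, h, adjacency)   # dh/dt computed from neighbors' states
        h = h + dt * dh                    # explicit Euler update (discretized flow)
    return h
```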
iclr_2020_rylvYaNYDH
As deep reinforcement learning driven by visual perception becomes more widely used there is a growing need to better understand and probe the learned agents. Understanding the decision making process and its relationship to visual inputs can be very valuable to identify problems in learned behavior. However, this topic has been relatively under-explored in the research community. In this work we present a method for synthesizing visual inputs of interest for a trained agent. Such inputs or states could be situations in which specific actions are necessary. Further, critical states in which a very high or a very low reward can be achieved are often interesting to understand the situational awareness of the system as they can correspond to risky states. To this end, we learn a generative model over the state space of the environment and use its latent space to optimize a target function for the state of interest. In our experiments we show that this method can generate insights for a variety of environments and reinforcement learning methods. We explore results in the standard Atari benchmark games as well as in an autonomous driving simulator. Based on the efficiency with which we have been able to identify behavioural weaknesses with this technique, we believe this general approach could serve as an important tool for AI safety applications.
Summary This paper proposes a generative technique to sample "interesting" states useful for analyzing the behavior of deep reinforcement learning agents. In this context, the concept of "interesting" is defined via user-specific target functions, e.g. states that arise as a consequence of taking specific actions (such as actions associated with high or low Q-values for example). The approach is evaluated in the Atari domain and in an autonomous driving simulator. Results are mainly presented as visualizations of interesting states that are described verbally. Quality The quality of the submission is extremely low. The optimization objectives chosen by the authors seem very ad hoc to me and how the motivation relates to the objectives is hard to comprehend (see my Clarity section). The experimental results have very low quality as well---results are mainly depicted as images with a verbal explanation. Clarity The clarity of the paper is extremely poor. While I do conceptually understand Section 2.1, I have a hard time linking it precisely to Section 2.2. Just some examples regarding lack in clarity: - What is z in Section 2.1? - How do the objectives in Section 2.1 and Section 2.2 relate to each other, i.e. how does the algorithm operate? Some pseudocode would be really helpful here. - What are the target functions S^+, S^- and S^\pm in Section 2.3? - What is the difference in the KL-regularizer mentioned in the text below Equation (3) and in Equation (5)? - In the text above Equation (2), it is mentioned that a squared reconstruction loss is insensitive to small elements in the image that have a huge impact on the reward. While this is true, I don't see how the multiplicative policy gradient norm term in Equation (2), as proposed by the authors, is addressing this issue. The proposed modification puts emphasis on states where the norm of the policy gradient is high, which is different from putting emphasis on specific regions in the image. I guess the intention would be to do an element-wise multiplication of the squared loss vector and the absolute value policy gradient vector before collapsing to a scalar, or something similar? In general, I found the entire writing from Section 3 onward a bit wordy and I do not think that nine pages are required to deliver the message of the paper in its current form. Originality The idea of visualizing states that reveal interesting insights about an agent's behavior based on a user-defined target function sounds interesting. But I have not worked in interpretability of agent behavior, which is why I leave the assessment of the originality to the other reviewers and the area chair. Significance If the results of the paper were backed up with some proper scientific metrics other than verbally explaining images, there might be some significance in the paper. Update After the authors' response, I am currently not inclined to change my score. While I do agree that the paper proposes an interesting idea, the technical presentation of the work is simply too poor and not convincing at this stage. Here are a few points: - A variational autoencoder works as follows. There is a generative model over latent variables z and observed variables s, consisting of a prior for z and a likelihood for s conditioned on z. The prior over z can be e.g. a normal distribution denoted as N(z|\mu_prior, \Sigma_prior). Then the likelihood (decoder) can be e.g. 
a deep neural net that maps z to a normal distribution of s denoted as N(s|\mu_likelihood(z, \theta), \Sigma_likelihood(z, \theta)) where \theta refers to the decoder's neural network weights. Furthermore, there is a recognition model that approximates the posterior over z given s (encoder)---this can also be e.g. a deep neural net that maps s to a normal distribution in z denoted as N(z|\mu_posteriorapprox(s, \psi), \Sigma_posteriorapprox(s, \psi)) where \psi refers to the encoder's neural network weights. - The reparameterization trick is not required for the technical explanation of the involved random variables and how they relate to each other. It is merely an optimization trick to establish a functional dependency between a random variable and the parameters of its distribution (e.g. mean and covariance in the Gaussian case). - To be specific about your updated paper. The notation you chose for the encoder f(s) = (\mu, \sigma) is confusing because it hides the dependency on s on the right hand side. The notation for the decoder g(\mu, \sigma, z) is also confusing because the decoder is supposed to map z to something in s space (see my first bullet point). The notation g(f(s), z) is particularly confusing because it is not consistent with the other notation that you use (which I mentioned in the sentence before). Usually, f(s) represents an element in latent space and is fed through the decoder to yield something in s space---so I don't understand why the decoder receives both f(s) and z as an argument. - You talk about optimization objectives, then please specify what the optimization arguments are---this is not clear from the description given. For example, in Equation (1) the optimization arguments seem to be both the recognition (encoder) and the generative (prior over latents plus decoder) model parameters? In Equation (4), the optimization argument seems to be the latent variable z? Given all the comments above, it is pretty obvious that the paper in its current form simply does not adhere to scientific standards for technically reporting machine learning algorithms in a proper way. So I clearly still vote for rejection because of the lack in technical clarity. And yes, as I said, I would like to see pseudocode.
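For reference, a minimal sketch of the standard VAE structure described in the bullet points above (generic code, not the submission's model), making the roles of encoder, decoder, and reparameterization explicit:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, s_dim, z_dim, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(s_dim, hidden), nn.ReLU())
        self.enc_mu = nn.Linear(hidden, z_dim)        # mu_posteriorapprox(s, psi)
        self.enc_logvar = nn.Linear(hidden, z_dim)    # log Sigma_posteriorapprox(s, psi)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, s_dim))  # mu_likelihood(z, theta)

    def forward(self, s):
        h = self.enc(s)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        s_recon = self.dec(z)            # decoder maps z back into s space
        return s_recon, mu, logvar
```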
iclr_2020_BJlJVCEYDB
How can animals behave effectively in conditions involving different motivational contexts? Here, we propose how reinforcement learning neural networks can learn optimal behavior for dynamically changing motivational salience vectors. First, we show that Q-learning neural networks with motivation can navigate in environments with dynamic rewards. Second, we show that such networks can learn complex behaviors simultaneously directed towards several goals distributed in an environment. Finally, we show that in a Pavlovian conditioning task, the responses of the neurons in our model resemble the firing patterns of neurons in the ventral pallidum (VP), a basal ganglia structure involved in motivated behaviors. We show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely-tuned classes of neurons, responding to positive and negative rewards. Our model generates predictions for the VP connectivity. We conclude that networks with motivation can rapidly adapt their behavior to varying conditions without changes in synaptic strength when expected reward is modulated by motivation. Such networks may also provide a mechanism for how hierarchical reinforcement learning is implemented in the brain.
This paper presents a computational model of motivation for Q-learning and relates it to biological models of motivation. Motivation is presented to the agent as a component of its inputs, and is encoded in a vectorised reward function where each component of the reward is weighted. This approach is explored in three domains: a modified four-room domain where each room represents a different reward in the reward vector, a route planning problem, and a Pavlovian conditioning example where neuronal activations are compared to those of mice undergoing similar conditioning.
Review Summary: I am uncertain of the neuroscientific contributions of this paper. From a machine learning perspective, this paper has insufficient details to assess both the experimental contributions and the proposed formulation of motivation. It is unclear from the discussion of biological forms of motivation, and from the experimental elaboration of these ideas, that the proposed model of motivation is a novel contribution. For these reasons, I suggest a reject.
The Four Rooms Experiment: In the four-rooms problem, the agent is provided with a one-hot encoding representing which cell the agent is located in within the grid-world. The reward given to the agent is a combination of the reward signal from the environment (a one-hot vector where the activation depends on the room occupied by the agent) and the motivation vector, which is a weighting of the rooms. One agent is given access to the weighting vector mu in its state vector: the motivation is concatenated to the position, encoding the weighting of the rooms at any given time-step. The non-motivated agent does not have access to mu in its state, although its reward is weighted as the motivated agent's is. The issue with this example is that the non-motivated agent does not have access to the information required to learn a value function suitable to solve this problem. By not giving the motivation vector to the non-motivated agent, the problem has become a partially observable problem, and the comparison is now between a partially observable and a fully observable setting, rather than a commentary on the difference between learning with and without motivation. In places, the claims made go beyond the results presented. How do we know that the non-motivated network is engaging in a "non-motivated delay binge"? We certainly can see that the agent acquires an average reward of 1, but it is not evident from this detail alone that the agent is engaging in the behaviour that the paper claims. Moreover, the network was trained 41 times for different values of the motivation parameter theta. Counting out the points in figure 2 would suggest that the sweep was over 41 values of theta, which leaves me wondering if the results represent a single independent trial, or whether the results are averaged over multiple trials. Looking at the top-right hand corner, I see a single yellow dot (non-motivated agent) presented in line with blue (motivated agent), suggesting that the point is possibly an outlier. Given this outlier, I'm led to believe that the graph represents a single independent trial. A single trial is insufficient to draw conclusions about the behaviour of an agent.
The Path Routing Experiment: In the second experiment, where a population of agents is presented in fig 5, it is claimed that on 82% of the trials the agent was able to find the shortest path.
Looking at the figure itself, at the final depicted iteration, all of the points are presented in a different colour and labelled “shortest path”. The graph suggests that 100% of the agents found the shortest path. The claim is made that for the remaining 18% of the agents, the agents found close to the shortest path—a point not evident in the figures presented. Pavlovian Conditioning Experiment: In the third experiment, shouldn’t Q(s) be V(s)? In this setting, the agent is not learning the value of a state action pair, but rather the value of a state. Moreover, the value is described as Q(t), where t is the time-step in the trial; however, elsewhere in the text it is mentioned that the state is not simply t, but contains also the motivation value mu. The third experiment does not have enough detail to interpret the results. It is unclear how many trials there were for both of the prediction settings. It is unclear whether the problem described is a continuing problem or a terminating prediction problem—i.e., whether after the conditioned stimulus and unconditioned stimulus are presented to the agent, does the time-step (and thus the state) reset to 0, or does time continue incrementing? If it is a terminating prediction problem, it is unclear whether the conditioned stimulus and unconditioned stimulus were delivered on the same time-steps for each independent trial. If I am interpreting the state-construction correctly, the state is incrementing by one on each time-step; this problem is effectively a Markov Reward Process where the agent transitions from one state to the next until time stops with no ability to transition to previous states. In both the terminating and continuing cases, the choice of inputs is unusual. What was the motivation for using the time-step as part of the state construction? How is the conditioned stimulus formulated in this setting? It is mentioned that it is a function of time, but there are no additional details. From reading the text, it is unclear whether fig 7b/c presents activations over multiple independent trials or a single trial. General Thoughts on Framing: This paper introduces non-standard terms without defining them first. For example, TD error is introduced as Reward Prediction Error, or RPE: a term that is not typically used in the Reinforcement Learning literature. To my understanding, there is a hypothesis about RPE in the brain in the cognitive science community; however, the connection between this idea in the cognitive science literature and its relation to RL methods is not immediately clear. Temporal Difference learning is incorrectly referred to as "Time Difference" learning (pg 2). Notes on technical details: - The discounting function gamma should be 0<= gamma <=1, rather than just <=1. - discounting not only prevents the sum of future rewards from diverging, but also plays an important role in determining the behaviour of an agent---i.e., the preference for short-term versus long-term rewards. - pg 2 "the motivation is a slowly changing variable, that is not affected substantially by an average action" -- it is not clear from the context what an average action is. - Why is the reward r(s|a), as opposed to r(s,a)? Notes on paper structure: - There are some odd choices in the structure of this paper. For instance, the second section---before the mathematical framing of the paper has been presented---is the results section. 
- In some sentences, citations are added where no claim is being made; it is not clear what the relevance of the citation is, or what the citation is supporting. E.g., “We chose to use a recurrent neural network (RNN) as a basis for our model” following with a citation for Sutton & Barto, 1987. - In some sentences, citations are not added where substantial claims are being made. E.g, “The recurrent network structure in this Pavlovian conditioning is compatible with the conventional models of working memory”. This claim is made, but it is never made clear what the conventional computational models of working memory are, or how they fit into the computational approaches proposed. - Unfortunately, a number of readers in the machine learning community might be unfamiliar with pavlovian conditioning and classical conditioning. Taking the time to unpack these ideas and contextualise them for the audience might help readers understand the paper and its relevance. - Figure 7B may benefit from displaying not just the predicted values V(s), but a plot of the prediction over time in comparison to the true expected return.
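To make the four-rooms comparison discussed above concrete, here is my reading of the two state constructions (illustrative sketch only, not the authors' code): both agents receive the same motivation-weighted scalar reward, but only the "motivated" agent observes mu.

```python
import numpy as np

def scalar_reward(reward_vec, mu):
    # reward_vec: one-hot room reward from the environment; mu: motivation weights over rooms
    return float(np.dot(reward_vec, mu))

def motivated_state(position_one_hot, mu):
    return np.concatenate([position_one_hot, mu])   # mu observed -> fully observable

def non_motivated_state(position_one_hot, mu):
    return position_one_hot                         # mu hidden -> partially observable
```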
iclr_2020_H1lkYkrKDB
Extracting underlying dynamics of objects in image sequences is one of the challenging problems in computer vision. On the other hand, dynamic mode decomposition (DMD) has recently attracted attention as a way of obtaining modal representations of nonlinear dynamics from (general multivariate time-series) data without explicit prior knowledge about the dynamics. In this paper, we propose a convolutional autoencoder based DMD (CAE-DMD) that is an extended DMD (EDMD) approach, to extract underlying dynamics in videos. To this end, we develop a modified CAE model by incorporating DMD on the encoder, which gives a more meaningful compressed representation of input image sequences. On the reconstruction side, a decoder is used to minimize the reconstruction error after applying the DMD, which as a result gives an accurate reconstruction of the inputs. We empirically investigated the performance of CAE-DMD in two applications: background/foreground extraction and video classification, on publicly available datasets.
I found the topic of this paper interesting and I believe I understand what the authors are trying to achieve but I'm afraid this was after several readings and I do think the paper could be presented differently that would make it more accessible. My suggestion would be to explain how the model will be applied first (identify the required properties) to motivate the need for the learned basis and then present the DMD as a method for providing a basis that meets the properties required. I acknowledge that different communities have different styles of presentation so apologies if this is just me. First I would just like to check that I have understood correctly so please could the authors point out if I have missed something or misunderstood in the following? Our goal is to establish a basis invariant to the video dynamics that can then be used, for example, to partition the video into parts with differing dynamics - e.g. foreground/background. To do this we need to identify such a basis from a specific video - we will use the collection of pairs of neighboring frames. The Koopman operator acts on a differential system to identify a function space invariant to the dynamics. If we instantiate this with a finite number of dimensions we can essentially establish the invariance as an eigenvalue problem. From this and our pairs of successive videos we can establish a vector basis for the space and then project the video into this basis. The spectral properties of the coefficients of the projection will determine whether something is static (omega = 0) or transitory in the scene and these can be used to identify foreground and background. Next there is the issue that this method operates in a linear domain with something like Gaussian noise which is not a good fit for image space videos so the authors propose to identify the dynamics in a linear latent space determined by an autoencoder to handle the non-linear mapping to image space. I hope I have understood the main points? If this is the case, I think that much more needs to be said about the second part, which is the essential novelty of the paper, with a discussion of the merits of different approaches and full details - at the moment there is just one small paragraph at the end of 4.2 which contains the majority of the contribution. My main concern about the paper is that I find it very difficult to appreciate the efficacy of the method given the current presentation of the results. There are no error bars to ascertain significance for any of the results and the summarization of multiple experiments to a single percentage gives very little insight into where this method works and where it doesn't. There are a number of ways that a dynamic prior could be added to a latent space and it is unclear why we would expect this approach to be preferred given the evidence presented in the paper. Other Notes: I found that the notation is not always consistent and sometimes could be simplified - it is unclear whether some operators are convolutions or multiplications (vector or scalar). To me the asterisk does not represent straight forward multiplication but it might be being used for this? Could Table 1 be placed in the experiments section rather than in the middle of page 5? Do the authors mean half the number of pixels or half the edge size (e.g. a quarter of the area) in terms of the latent space? Please can all equations be numbered so that they can be referred to - there are no equation numbers in all of section 2.
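To pin down the DMD step in the summary above, here is a standard exact-DMD sketch on pairs of neighbouring (possibly encoded) frames; eigenvalues with omega close to zero correspond to static (background) modes. This is generic code, not the authors' CAE-DMD implementation.

```python
import numpy as np

def dmd(snapshots, dt=1.0, rank=None):
    """snapshots: (dim, m) matrix whose columns are consecutive frames or latents."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]          # pairs of neighbouring frames
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)  # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W              # exact DMD modes
    omega = np.log(eigvals.astype(complex)) / dt                # continuous-time frequencies
    return modes, eigvals, omega
```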
iclr_2020_BJewlyStDr
Research on exploration in reinforcement learning, as applied to Atari 2600 gameplaying, has emphasized tackling difficult exploration problems such as MONTEZUMA'S REVENGE (Bellemare et al., 2016). Recently, bonus-based exploration methods, which explore by augmenting the environment reward, have reached above-human average performance on such domains. In this paper we reassess popular bonus-based exploration methods within a common evaluation framework. We combine Rainbow (Hessel et al., 2018) with different exploration bonuses and evaluate its performance on MONTEZUMA'S REVENGE, Bellemare et al.'s set of hard exploration games with sparse rewards, and the whole Atari 2600 suite. We find that while exploration bonuses lead to higher scores on MONTEZUMA'S REVENGE they do not provide meaningful gains over the simpler ε-greedy scheme. In fact, we find that methods that perform best on that game often underperform ε-greedy on easy exploration Atari 2600 games. We find that our conclusions remain valid even when hyperparameters are tuned for these easy-exploration games. Finally, we find that none of the methods surveyed benefit from additional training samples (1 billion frames, versus Rainbow's 200 million) on Bellemare et al.'s hard exploration games. Our results suggest that recent gains in MONTEZUMA'S REVENGE may be better attributed to architecture change, rather than better exploration schemes; and that the real pace of progress in exploration research for Atari 2600 games may have been obfuscated by good results on a single domain.
Updated review: I am overall happy with the response of the authors. I can appreciate the contributions of the paper and I am happy to recommend accept. The empirical study offers some insights into deep RL methods for ATARI games and raises some key questions. I feel the current version of the paper does not build upon these insights to propose a new method. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Summary: This paper presents a detailed empirical study of the recent bonus based exploration method on the Atari game suite. The paper concludes that methods that perform well on Montezuma’s revenge do not necessarily perform well on the other games, sometimes, even worse than the eps-greedy approach. This also leads to the conclusion that recent results on the game Montezuma’s revenge can be attributed to architectural changes instead of the exploration method. I think this is a-ok paper in that it does what it says it does. The paper is clear and well-written. I think the main contribution of the paper is that it raises some questions over existing methods/trends in solving exploration problems in reinforcement learning by comparing the performance of multiple methods across various games in ATARI suite. I think this is relevant to the ICLR community and will be appreciated by it. However, I also feel that while the paper runs a satisfactory empirical analysis, it was all too much focussed on the existing methods. Throughout the paper, the experiments and results raise questions on the robustness and generalization of existing exploration methods across various ATARI games, but the paper puts absolutely zero effort into investigating if there is a quick fix to the questions it poses. For example, one could easily investigate in the CTS method if the factor by which exploration bonus dies N^{alpha} (alpha=-1/2 by default) changes, then does it do better or worse (more below on this). I can understand that might not be the aim of the paper but still. Here are a couple of points that I felt conflicted/confused about the paper: - The conclusion of the paper is that ‘progress of exploration in ATARI suite is obfuscated by good results in single domain’. I am confused if the paper is making a narrow point that (1) dont focus on Montezuma’s revenge OR (2) is it admitting a broader point that focussing on even ATARI is probably not a good choice. I am not saying that I know the answer to this question, but I am unclear as to what is the question the paper is trying to raise. If it is saying (1st) then I find it contradictory that it is not ok to focus on MR but it is ok to focus on ATARI as a single domain; if it is saying the second then also it is contradictory because the paper only experiments with the ATARI suite. - It is interesting to note that noisy networks are most robust to hyperparameter optimization on a separate set of games when tested on a different set of games. It is also interesting to note that noisy networks are the only exploration bonus method that does not decrease/reduce exploration as the experience of the agent increases. I would have liked to see if the paper had made an attempt to investigate this. I feel such a hypothesis would have been easy to investigate with simple modifications to the CTS methods. Currently, the exploration bonus goes down by the factor of 1/sqrt(N) in the CTS method. 
A comparison showing the performance of CTS for a couple more decay factors, such as (1/N) or (1/N)^{1/4}, would have been nice, to see if that mattered. - One of the comparisons I did not find particularly fair was when the hyperparameters of various methods were tuned to play MR and then the hyperparameters were fixed and the methods were tested on other ATARI games. - Another point I felt was missing was checking whether Rainbow DQN is really the reason behind the observed performance of the methods. It would have been interesting to know how the methods performed when combined with the original DQN algorithm.
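To illustrate the comparison suggested above, here is a tiny sketch of a count-based exploration bonus with a configurable decay exponent; the bonus scale beta and the exponent values are placeholders, not settings taken from any of the methods in the paper.

```python
import numpy as np

def count_bonus(pseudo_count, beta=0.05, alpha=0.5):
    """Generic count-based exploration bonus beta * N^(-alpha).

    alpha = 0.5 recovers the usual 1/sqrt(N) decay used by CTS-style bonuses;
    alpha = 1.0 and alpha = 0.25 are the alternative schedules suggested above.
    """
    return beta * np.power(np.maximum(pseudo_count, 1e-8), -alpha)

N = np.arange(1, 1001)
for alpha in (0.25, 0.5, 1.0):
    print(f"alpha={alpha}: bonus at N=1000 is {count_bonus(N, alpha=alpha)[-1]:.2e}")
```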
iclr_2020_BkeaxAEKvB
Quantization based methods are popular for solving large scale maximum inner product search problems. However, in most traditional quantization works, the objective is to minimize the reconstruction error for datapoints to be searched. In this work, we focus directly on minimizing error in inner product approximation and derive a new class of quantization loss functions. One key aspect of the new loss functions is that we weight the error term based on the value of the inner product, giving more importance to pairs of queries and datapoints whose inner products are high. We provide theoretical grounding to the new quantization loss function, which is simple, intuitive and able to work with a variety of quantization techniques, including binary quantization and product quantization. We conduct experiments on public benchmarking datasets http://ann-benchmarks.com to demonstrate that our method using the new objective outperforms other state-of-the-art methods. We are committed to release our source code.
The paper proposes new loss functions for quantization when the task of interest is maximum inner product search (MIPS). The paper is well written with clear descriptions, fairly comprehensive analysis and empirical exploration, and good results, and in general I agree that learning quantization so as to minimize quantization related errors on task at hand is a good strategy. Specific comments and suggestions for strengthening the paper are: a) The proposed loss function in (2) includes a weight function that serves as a proxy for the task objective of giving more emphasis to quantization errors on samples with larger inner product. Instead, why not use the true task objective which for the MIPS task is stated in the Introduction section? If this was considered please comment on reasons for not including / discussing this in the paper, otherwise perhaps this’ll be good to discuss. b) Did the authors consider using a task dependent training data set which will capture both ‘q’ and ‘x’ distributions and potentially lead to even further improved quantization? This has the disadvantage of making quantization dependent on query distribution, but in cases where such data is available it will be very valuable to know if incorporating data distributions in quantization process helps performance and to what extent. c) It will also be valuable to consider the closely related task of cosine distance based retrieval and comment on how that impacts the modifications of loss functions. d) The idea of learning quantization under objective of interest using observed data distribution has been studied earlier (e.g. see Marcheret et al., “Optimal quantization and bit allocation for compressing large discriminative feature space transforms,” ASRU 2009), perhaps worth citing as related work.
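As a rough illustration of point (a), the sketch below contrasts plain reconstruction error with a score-aware weighted inner-product loss of the kind the review refers to. The weight function w(t) = max(t, 0), the toy data, and the nearest-codeword assignment are placeholders, not the paper's actual derivation or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 16, 1000, 64
X = rng.normal(size=(n, d))                  # datapoints to be quantized
Q = rng.normal(size=(200, d))                # sample queries
codebook = X[rng.choice(n, m, replace=False)].copy()

def assign(X, codebook):
    # nearest-codeword assignment under plain L2 reconstruction error
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def weighted_ip_loss(X, Xq, Q, w=lambda t: np.maximum(t, 0.0)):
    """Average inner-product approximation error, weighted by a function of the
    true inner product so that high-scoring (query, datapoint) pairs count more,
    a proxy for MIPS-oriented quantization quality."""
    ip_true = Q @ X.T                        # (num_queries, n)
    ip_quant = Q @ Xq.T
    return np.mean(w(ip_true) * (ip_true - ip_quant) ** 2)

codes = assign(X, codebook)
Xq = codebook[codes]                         # quantized datapoints
print("plain reconstruction MSE   :", np.mean((X - Xq) ** 2))
print("weighted inner-product loss:", weighted_ip_loss(X, Xq, Q))
```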
iclr_2020_rJguRyBYvr
This paper is concerned with the defense of deep models against adversarial attacks. We develop an adversarial detection method, which is inspired by the certificate defense approach, and captures the idea of separating class clusters in the embedding space to increase the margin. The resulting defense is intuitive, effective, scalable, and can be integrated into any given neural classification model. Our method demonstrates state-of-the-art (detection) performance under all threat models.
Summary ======== This paper proposes a defense against adversarial examples that detects perturbed inputs using kernel density estimation. The paper uses a combination of known (and often known to be broken) techniques, and does not provide a fully convincing evaluation. I lean towards rejection of this paper. Detailed comments ================= The idea of increasing robustness by maximizing inter-class margins and minimizing intra-class variance is fairly natural, but the author's discussion of their approach (mainly in sections 1 and 2) is very hand-wavy and relies on a lot of general intuitions and unproven claims about neural networks. For example, in the introduction, the authors claim: "A trained deep classification model tends to organize instances into clusters in the embedding space, according to class labels. Classes with clusters in close proximity to one another, provide excellent opportunities for attackers to fool the model. This geometry explains the tendency of untargeted attacks to alter the label of a given image to a class adjacent in the embedding space as demonstrated in Figure 1a." First, a t-SNE representation is just a 2D projection of high-dimensional data that is useful for visualization purposes, and one should be careful when extrapolating insights about the actual data from it. For example, distances in the 2D projection do not necessarily correspond directly to distances in the embedding space. The claim that untargeted attacks lead to a "nearby" cluster are hard to verify given just Figure 1. First, the colors of the labels between 1a and 1b do not seem to match (e.g., Dog is bright green in 1b but this color does not appear in 1a). If the other colors match, then this would seem to suggest that trucks (purple) often get altered to ships (orange). Yet, the two clusters are quite far apart in 1a. It seems hard to say something qualitative here. An actual experiment comparing distances in the embedding space and the tendency of untargeted attacks to move from one class to another would be helpful. The color scheme in Figure 1b is also unclear. A color bar would help here at the very least. These observations are then used to justify increasing cluster distance while minimizing cluster variance, but it would be nice to see a more formal argument relating these concepts to the embedding distance. The technique proposed in Section 3.2. to reduce variance loss estimates each class' variance on each batch. Would this still work for a dataset with a large number of classes (e.g., ImageNet)? For such a dataset, each class will be present less than once in expectation in each batch, which seems problematic. The plots in Figure 2 don't give much of a sense of how the combination of the different proposed techniques is better than any individual technique. The evaluation compares PDM to RCE, but from Figure 2 one could guess that variance reduction alone (2c) performs very similarly to PDM (2e). An ablation study showing the contribution of each of the individual techniques would be helpful. The evaluation section could be improved significantly. FGSM, JSMA, and to some extent BIM, are not recommended attacks for evaluating robustness. The gray-box and black-box threat model evaluations are also not the most interesting here. Instead, and following the recommendations of Carlini et al. (2019), the evaluation should: - Propose an adaptive attack objective, tailored for the proposed defense in a white-box setting. 
The authors do this to some extent, by re-using the attack objective from Carlini & Wagner 2017, which targets KDE. It would still be good to provide additional explanations about how the hyperparameters for this attack were set. - Optimize this objective using both gradient-based and gradient-free attacks - As the proposed defense is attack-agnostic, I also suggest trying it out on rotation-translation attacks, as the worst-case attack can always be found by brute-force search Other ===== - The citations for adversarial training in the 2nd paragraph of the intro are unusual. Standard references here are for sure the first two below, and maybe some of the other three as is relevant to your work - Szegedy et al. 2013: "intriguing properties of neural networks" - Goodfellow et al. 2014: "Explaining and harnessing adversarial examples" - Kurakin et al. 2016: "Adversarial Machine Learning at Scale" - Madry et al. 2017: "Towards deep learning models resistant to adversarial attacks" - Tramer et al. 2017: "Ensemble Adversarial Training" - The Taylor approximation in (1) does not seem to be well defined. The Jacobian of F is a matrix, so it isn't clear what evaluating that matrix at a point x means. - The "greater yet similar" symbol (e.g., in equation (4)) should be defined formally.
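To make the detector/attack interplay discussed above more concrete, here is a minimal sketch of a kernel-density detection score computed in an embedding space, in the spirit of the KDE detector that the Carlini & Wagner 2017 objective targets. The bandwidth, threshold, and toy embeddings are illustrative and this is not the paper's exact PDM score.

```python
import numpy as np

def kde_score(z, class_train_embeddings, bandwidth=1.0):
    """Gaussian kernel density of an embedding z under the training embeddings
    of one class; a low score suggests an off-manifold (possibly adversarial) input."""
    d2 = ((class_train_embeddings - z) ** 2).sum(axis=1)
    return np.mean(np.exp(-d2 / (2.0 * bandwidth ** 2)))

def is_flagged(z, class_embeddings, predicted_class, threshold):
    """Flag z if its density under the predicted class falls below a threshold
    calibrated on clean validation data."""
    return kde_score(z, class_embeddings[predicted_class]) < threshold

# toy usage: two well-separated class clusters in an 8-d embedding space
rng = np.random.default_rng(0)
class_embeddings = {0: rng.normal(loc=0.0, size=(500, 8)),
                    1: rng.normal(loc=6.0, size=(500, 8))}
clean = rng.normal(loc=0.0, size=8)       # lies inside the class-0 cluster
drifted = rng.normal(loc=3.0, size=8)     # sits between the clusters
print("clean score  :", kde_score(clean, class_embeddings[0]))
print("drifted score:", kde_score(drifted, class_embeddings[0]))
print("drifted flagged:", is_flagged(drifted, class_embeddings, 0, threshold=1e-4))
```

An adaptive white-box attacker would add a term penalizing a low kde_score to its objective, which is exactly why a fixed threshold alone is not a sufficient evaluation.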
iclr_2020_rkl2s34twS
In unsupervised domain adaptation (UDA), classifiers for the target domain (TD) are trained with clean labeled data from the source domain (SD) and unlabeled data from TD. However, in the wild, it is hard to acquire a large amount of perfectly clean labeled data in SD given limited budget. Hence, we consider a new, more realistic and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD-we name it wildly UDA (WUDA). We show that WUDA ruins all UDA methods if taking no care of label noise in SD, and to this end, we propose a Butterfly framework, a powerful and efficient solution to WUDA. Butterfly maintains four models (e.g., deep networks) simultaneously, where two take care of all adaptations (i.e., noisy-to-clean, labeled-to-unlabeled, and SD-to-TD-distributional) and then the other two can focus on classification in TD. As a consequence, Butterfly possesses all the conceptually necessary components for solving WUDA. Experiments demonstrate that under WUDA, Butterfly significantly outperforms existing baseline methods.
This paper introduces the idea of wildly unsupervised domain adaptation, where the source labels are noisy and the target data is unsupervised. To tackle this, the authors propose an architecture based on two branches: one acting on the mixed source-target data and the other on the target data only. During training, each branch is updated using the idea of co-teaching, by finding the samples with the lowest loss values. Pseudo-labeling is then applied to the target data, and the process iterates. Originality: - In essence, the proposed method combines co-teaching (Han et al., 2018) and pseudo-labeling (Saito et al., 2017). While it goes beyond the naive two-stage approach, used here as a baseline, the technical novelty remains limited. - The main novelty consists of using two branches to model the two domains. However, the necessity for the second branch is not very clearly explained, and remains obscure to me, since the first branch already acts on the mixed source-target data. Clarity: In addition to the fact that, as mentioned above, the design of the overall framework is not entirely well motivated, I found the paper somewhat hard to follow. In particular: - One has to go to the appendix to find Alg. 2, which describes one of the key components (the Checking method in Alg. 1). The steps performed by Alg. 2 are not explained anywhere. - In Alg. 1, it seems surprising that \tilde{D}_T^l is initialized as \tilde{D}_s, since, according to Fig. 3, the second branch should act on the target data only. - In Alg. 1, the authors do not explain how they initialize the values R(T) and R_t(T). - I would expect that, to obtain meaningful results from the Checking method, the parameters of the networks F_1, F_2, F_{t1} and F_{t2} should already be initialized to reasonable values. Can the authors comment on the initialization procedure? - In Alg. 2, how are the inner minimization problems solved? Are u_1 and u_2 truly enforced to be binary variables? How fast can one obtain the solutions to these problems? - In Fig. 1, it is not clear to me what the term Interaction between the two branches represents. From the text, I could not find any reference to explicit interactions between the branches. - In Eq. 1, the loss function \ell() is not defined (although I imagine that it is the cross-entropy). Experiments: - The experiments show the good behavior of the method. However, while I understand the motivation behind defining the two-stage baseline using ATDA, which is used in the proposed method, there seems to be no strict constraint on using this specific method in the two-stage scenario. For example, based on the results in Table 1, one might rather want to use TCL as the second stage, i.e., have a baseline Co+TCL. - As I mentioned before, the motivation behind the second branch in the framework is not clear to me. I would appreciate it if the authors could explain the reasoning behind this branch and evaluate their method without it. Summary: The novelty of the proposed method, relying on a combination of co-teaching and pseudo-labeling, is limited. Furthermore, the clarity of the paper could be significantly improved.
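Since the review discusses the combination of co-teaching and pseudo-labeling, here is a minimal sketch of the co-teaching-style small-loss selection step for one of the two peer networks. The keep-rate schedule, the noise rate, and the synthetic loss values are placeholders, not the R(T) schedule used by the paper.

```python
import numpy as np

def small_loss_selection(losses, keep_rate):
    """Return indices of the `keep_rate` fraction of samples with the smallest
    loss; in co-teaching these are treated as (likely) clean and are passed to
    the peer network for its parameter update."""
    n_keep = max(1, int(round(keep_rate * len(losses))))
    return np.argsort(losses)[:n_keep]

def keep_rate_schedule(epoch, noise_rate=0.2, warmup=10):
    """Gradually drop the assumed-noisy fraction over the first `warmup` epochs,
    then keep 1 - noise_rate of each minibatch."""
    return 1.0 - noise_rate * min(1.0, epoch / warmup)

# toy usage: network-1 losses on a mixed (noisy-source + pseudo-labeled-target) batch
rng = np.random.default_rng(0)
losses_net1 = np.concatenate([rng.gamma(1.0, 0.3, size=80),   # mostly small (clean)
                              rng.gamma(5.0, 1.0, size=20)])  # large (likely mislabeled)
selected = small_loss_selection(losses_net1, keep_rate_schedule(epoch=15))
print(f"kept {len(selected)} of {len(losses_net1)} samples for updating the peer network")
```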
iclr_2020_rkxdexBYPB
Character-level language modeling based on Transformer has brought great success by alleviating the limitation of recursive operations. However, existing Transformer-based models require substantial computational resources, which hinders the usability of character-level language models in applications with limited resources. In this paper, we propose a lightweight model, called Group-Transformer, that factorizes the calculation paths by grouped embedding operators. Additionally, Group-Transformer employs inter-group linear operators to prevent performance degradation from the group strategy. In comparison experiments against the best-performing LSTM-based models (about five times larger) and Transformer-based models of comparable parameter size, we show that Group-Transformer achieves better performance on two benchmark tasks, enwik8 and text8. Further experiments, including ablation studies and qualitative analysis, reveal that the proposed work contributes an effective lightweight model for practical applications. The implementation code will be available.
Summary: This paper proposes a lightweight alternative to the design of self-attention based Transformers on character-level language modeling (LM). The approach was motivated by the similar technique that has been applied on group convolutions, but with a few notable differences too, such as inter-group mixing and low-rank approximation (which also appeared in ConvNets before, but this still strikes me as a difference in the Transformer context). Via experiments on two large-scale char-level LM datasets as well as a relatively extensive set of ablative experiments, the authors demonstrated the effectiveness of their approach. Pros: + A very well-written paper. Most of the math symbols in the paper come with clear dimensionalities, which make it very easy to follow. The description of the methodology is also pretty clear. + Well-designed experiments. Enwik-8 and text8, while widely used to benchmark Transformers these days, are still very challenging large-scale tasks. The authors also provide a series of ablative studies comparing the group-transformer with the original transformer in Table 3. + Table 2 and Figure 3 (in the Appendix) are pretty strong proof of the effectiveness of the approach (at least on character-level language modeling). ================================ A few questions/issues/comments: 1. For the key/value computation, why did you still keep the "more complex/expensive" $D_\text{model}^2$ design? You explained in the paper that they could "come from other source domain", but in the specific case of character-level language modeling (in which you are just using a decoder Transformer without encoder-decoder attention), I don't think this is a problem. Why not make $\mathbf{k}_{gh}$ and $\mathbf{v}_{gh}$ something similar to how you compute the query? Or alternatively, why don't you make them low-rank too, as in the feed-forward layer? This difference in design seems strange to me. 2. In Section 3.4, you mentioned that the Group-Transformer (I'll call it GT for simplicity below) has resource complexity $O(D_\text{model}^2/G)$ whereas the original Transformer has complexity $O(D_\text{model}^2)$. However, this is not true by your design of the key/value module, and by your own analysis in Appendix B.1, where you still have a $2 D_\text{model}^2$ term. Therefore, I suggest reworking Section 3.4, as the big-O complexity of the parameter space should be the same. (This again makes me curious about question (1) above...) 3. Section 4.1 says that you only explored group sizes from {2, 4}. How did you pick this number? Why not 8 groups or more? As the 2-group option only saves about 10%-15% of the parameters (according to your analysis in Appendix B), it's actually not a large difference. Meanwhile, it seems 2-group is always better than 4-group. While I guess the 8-group option would certainly make the model size very small, I'm very curious to see how good/bad it is when you match the # of parameters of an 8-group GT with a {2,4}-group GT. 4. As the "lightweight" property of GT is what you are focusing on, could you also show/approximate the number of FLOPs used by LSTMs in Table 1? While LSTMs use more parameters, they don't use as much computation as do the Transformers (which need to form an $O(L^2)$ matrix in the self-attention module, where $L$ is the sequence length). Also, I think it's important to show the actual (wall-clock) runtime comparison of GT with Transformer-XL and the best LSTM model(s). 5. 
I find it a bit strange (and slightly disappointing) that this method does not generalize that well to word-level language modeling, as none of the designs introduced in the paper are specific to "character"-level modeling alone. How's the performance of GT if you forget about the word embedding compression for a while (e.g., use a large embedding size, such as 500 like in prior works)? Some recent work [1] seems to suggest that a very small Transformer-XL (only 4M parameters + a normal embedding) can achieve a perplexity around 35, too. ------------------------------------ Some issues that did not really affect the score: 6. In Section 3.2 (currently at the bottom of page 3), maybe add the dimensionality of $\mathbf{x}$ (which should be $D_\text{model}$) just for clarity, as you are omitting the "time" dimension (of a sequence) and only considering a single time step. 7. Right after Eq. (2), the first $\mathbf{W}_{gh}^\text{m-intra}$ should be $\mathbf{W}_{gh}^\text{o-intra}$. 8. In Eq. (4) (and the sentence following it), $W_{hg}^\text{f2}$ shouldn't have a reference to $h$, as the reference to heads should only be in the self-attention. 9. Eq. (7) intra -> inter. 10. Some descriptions in Appendix A are confusing. For instance, you didn't really define the function $\text{Shuffle}(\cdot)$, and it took me a while to realize you mean transposing the 0th and 2nd dimension of a $G \times M \times G$ matrix. Similarly, the $\text{Concat}(\cdot)$ function in Eq. (7) is "undefined", in the sense that its input is already a $G \times M$ matrix (each row is a $1 \times M$ vector). I think what you want is to vectorize it to shape $1 \times (M * G)$, and $\mathbf{W}_g^\text{intra[2]}$ should have shape $(M * G) \times \bar{D}_\text{group}$. I suggest you revise and clarify this part. 11. I'm curious (and wonder if you've tried this): What if you increase the model size of the Group-Transformer to be as large as the original Transformer on enwik-8 and text8 (e.g., 40M)? How does the GT perform? While Table 3 is indeed convincing, the result obtained by GT is still far from the actual SOTA (e.g., obtained by Child et al. [2] with a much larger model). Would be interesting to compare how a model "as large" would do. ------------------------------------ Overall, I think this is a promising strategy that seems to work very well on character-level language modeling. My only major concerns are some of the specifics of the design of the methodology (e.g., the key/value part) and the failure of the approach to generalize to a very relevant domain such as word-level LM. [1] https://arxiv.org/abs/1909.01377 [2] https://arxiv.org/abs/1904.10509
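For readers puzzled by the Shuffle/Concat discussion in point 10, here is a small numpy sketch of a grouped (intra-group) linear map followed by a generic ShuffleNet-style channel shuffle for inter-group mixing. The paper's exact operator (described above as a transpose of a G x M x G tensor) may differ in its exact reshaping; the dimensions G, D_group, and M below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
G, D_group, M = 4, 16, 8                 # groups, per-group input dim, per-group output dim
x = rng.normal(size=(G * D_group,))      # one token's hidden vector, split into G groups
W = rng.normal(size=(G, D_group, M)) / np.sqrt(D_group)   # one weight matrix per group

# grouped (intra-group) linear map: parameters scale as G * D_group * M,
# i.e. a factor-of-G saving over a dense (G*D_group) x (G*M) projection
h = np.einsum('gd,gdm->gm', x.reshape(G, D_group), W)      # shape (G, M)

def channel_shuffle(h):
    """Generic inter-group mixing: transpose the (group, channel) axes and
    regroup, so each new group receives one slice from every original group."""
    G, M = h.shape
    return h.T.reshape(G, M)             # each row now mixes all original groups

mixed = channel_shuffle(h)
print(h.shape, mixed.shape)              # (4, 8) (4, 8)
```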
iclr_2020_HJe6uANtwH
We introduce a new routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent's state and the child's vote. The new mechanism 1) designs routing via inverted dot-product attention; 2) imposes Layer Normalization as normalization; and 3) replaces sequential iterative routing with concurrent iterative routing. When compared to previously proposed routing algorithms, our method improves performance on benchmark datasets such as CIFAR-10 and CIFAR-100, and it performs at-par with a powerful CNN (ResNet-18) with 4× fewer parameters. On a different task of recognizing digits from overlayed digit images, the proposed capsule model performs favorably against CNNs given the same number of layers and neurons per layer. We believe that our work raises the possibility of applying capsule networks to complex real-world tasks.
Authors improve upon dynamic routing between capsules by removing the squash function (norm normalization) and applying a LayerNorm normalization instead. Furthermore, they experiment with concurrent routing rather than sequential routing (route all caps layers once, then all layers concurrently again and again). This is an interesting development since it provides better gradients in conjunction with LayerNorm. They report results on Cifar10 and Cifar100 and achieve performance similar to CNNs (ResNet). First, I want to point out that inverted attention is exactly what happens in dynamic routing (Sabour et al., 2017), Proc. 1 lines 4, 5, and 7. In dynamic routing the dot product with the next layer capsule is calculated and then normalized over all next layer capsules. The only difference that I notice between Alg. 1 here and Proc. 1 there is the replacement of squash with LayerNorm. There is no "reconstructing the layer bellow" in dynamic routing as the authors suggest in the intro. Second, the Capsules are promised to have better viewpoint generalizability than CNNs while having comparable performance. Replacing the 1 convolution layer with a ResNet backbone and replacing the activation with a classifier on top seems to reduce the proposed CapsNet to the level of CNNs in terms of viewpoint generalization. Why should someone use this network rather than the ResNet itself? A smaller number of parameters by itself is not interesting; the reason it is usually reported is that it indicates lower memory consumption or fewer FLOPs. Is that the case when comparing the baseline ResNet with the proposed CapsNet? Otherwise, only a set of experiments showcasing the viewpoint generalizability of the proposed Capsule Networks might justify the switch from ResNets to the proposed CapsNets. Thirdly, the Fig. 4 top images seem to indicate all 3 routing procedures are following the same learning rate schedule. In the text it is said that optimization hyperparameters are tuned individually. Did the authors tune the learning rate schedule individually? Fourth, the proper baseline for the current study is the dynamic routing CapsNet. Why does the MultiMNIST experiment lack a comparison with the dynamic routing CapsNet? For the reasons above, the manuscript in its current format is not ready for publication. ------------------------------------------------------rebuttal Thank you for your response. I acknowledged the novel contributions of this work. My comment was that some claims in the paper are not right, i.e. "inverted dot-product attention" is not new and "reconstructing the layer bellow" does not happen in Sabour et al. Parallel execution + layer norm definitely is novel and significant. Regarding the LR-schedule, I am not sure how fair it is to use the same hyper-params tuned for the proposed method on the baselines. Regarding the viewpoint, the diverseMultiMNIST is two overlapping MNIST digits shifted by 6 pixels. There is no rotation or scale in this dataset. An example experiment verifying the viewpoint generalizability of the proposed model is training on MNIST and testing on affNIST.
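To make the comparison with Procedure 1 of Sabour et al. concrete, below is a minimal numpy sketch of one agreement-based routing iteration with LayerNorm in place of squash, run a few times in the concurrent style. Dimensions, initialization, and the direction of the softmax are illustrative assumptions rather than a faithful reimplementation of the paper.

```python
import numpy as np

def layer_norm(v, eps=1e-5):
    return (v - v.mean()) / np.sqrt(v.var() + eps)

def routing_iteration(votes, parent_poses):
    """One agreement-based routing step with LayerNorm normalization.

    votes:        (n_child, n_parent, d) -- vote of each child for each parent
    parent_poses: (n_parent, d)          -- current parent capsule poses
    Agreement between each vote and the corresponding parent pose is a dot
    product; routing weights are a softmax over parents for each child, and
    each parent pose is the LayerNorm (instead of squash) of its weighted votes.
    """
    agreement = np.einsum('ijd,jd->ij', votes, parent_poses)   # (n_child, n_parent)
    agreement -= agreement.max(axis=1, keepdims=True)          # numerical stability
    routing = np.exp(agreement)
    routing /= routing.sum(axis=1, keepdims=True)              # softmax over parents
    weighted = np.einsum('ij,ijd->jd', routing, votes)         # (n_parent, d)
    return np.stack([layer_norm(p) for p in weighted])

rng = np.random.default_rng(0)
n_child, n_parent, d = 6, 3, 8
votes = rng.normal(size=(n_child, n_parent, d))
parents = np.zeros((n_parent, d))           # zero init: first pass is uniform routing
for _ in range(3):                          # a few concurrent-style iterations
    parents = routing_iteration(votes, parents)
print(parents.shape)                        # (3, 8)
```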
iclr_2020_rJx7wlSYvB
While deep neural networks (NNs) do not provide the confidence of their predictions, a Bayesian neural network (BNN) can estimate the uncertainty of the prediction. However, BNNs have not been widely used in practice due to the computational cost of predictive inference. This prohibitive computational cost is a hindrance especially when processing stream data with low latency. To address this problem, we propose a novel model which approximates BNNs for data streams. Instead of generating a separate prediction for each data sample independently, this model estimates the increments of prediction for a new data sample from the previous predictions. The computational cost of this model is almost the same as that of non-Bayesian deep NNs. Experiments including semantic segmentation on real-world data show that this model performs significantly faster than BNNs, estimating uncertainty comparable to the results of BNNs.
The paper proposes a differentiable Bayesian neural network. Traditional BNN can model the model uncertainty and data uncertainty via adding a prior to the weight and assuming a Gaussian likelihood for the output y. However, it's slow in practice since evaluating the loss function of BNN involves multiple runs over the entire network. Also when the input data is non-stationary, the output function can not be differentiated with respect to the input data. The paper proposes to use an online code vector histogram (OCH) method attached to the input and output of a classical DNN. Input OCH captures the distribution of both input data and network weights, and the output OCH captures the distribution of the predictions. Since these OCHs are differentiable, the proposed DBNN model can be used for streaming input data with time-variant distributions. I think the idea is interesting and novel. It explores a different way of modeling distributions with DNN. Instead of adding priors, DBNN relies on histograms, which is usually used to describe distributions for discrete observed input data. So the paper is well-motivated. 1. The paper needs more literature review in the area of data streaming. Papers, such as [1], have proposed to use a vector quantization process that can be applied online to a stream of inputs. This paper introduces the vector quantization but doesn't mention the use of it in streaming data in related work, which kind of blurs the contribution a bit. Moreover, it would be helpful for readers to learn about useful techniques for streaming data from this paper. [1] Hervé Frezza-Buet. Online computing of non-stationary distributions velocity fields by an accuracy controlled growing neural gas 2. I think the paper might need a bit more explanation about codevector, since it's not a very well-acknowledged concept in this field. The main issue for me to understand it is how to get these codevectors. When DBNN deals with streaming data and starts from no input, is the set of codevector empty at the beginning? The input data points are accumulated as codevectors? I hope the authors could clarify this process a bit more. 3. Given the insufficient understanding of codevector, figure 2 is a bit hard to read. 1) (a)-(d) are figures for x0 at t=0, which is not time-variant. 2) what are these codevectors picked. 3) It seems that the codevectors are out of the regime of the distribution of y. But according to algorithm 2, y_*<-T(c_*), would that be a problem? I think (a)-(d) are informative but not straightforward to read. The authors need to put more text to explain these figures, since this simulated example can help readers to understand what is codevector and how it helps for uncertainty estimation. Overall I think the paper is well-written. The idea is novel and practical in the scenario of DNN. I would vote for accept.
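Since the accumulation of codevectors is the unclear step raised in comment 2, here is a generic online (growing) vector-quantization sketch of the kind often used for streaming data. This is only a guess at what such a mechanism could look like, not the paper's actual OCH procedure; the radius, learning rate, and toy stream are placeholders.

```python
import numpy as np

class OnlineCodebook:
    """Maintain codevectors and counts (a histogram) from a stream: each incoming
    point either updates its nearest codevector or, if it is too far from all of
    them, is added as a new codevector. Starting from an empty set, the first
    point becomes the first codevector."""

    def __init__(self, radius=1.0, lr=0.05):
        self.codevectors = []     # list of np arrays
        self.counts = []          # histogram over codevectors
        self.radius, self.lr = radius, lr

    def update(self, x):
        if not self.codevectors:
            self.codevectors.append(x.copy()); self.counts.append(1); return 0
        d = [np.linalg.norm(x - c) for c in self.codevectors]
        k = int(np.argmin(d))
        if d[k] > self.radius:                       # novel region of the input space
            self.codevectors.append(x.copy()); self.counts.append(1)
            return len(self.codevectors) - 1
        self.codevectors[k] += self.lr * (x - self.codevectors[k])  # drift with the stream
        self.counts[k] += 1
        return k

rng = np.random.default_rng(0)
cb = OnlineCodebook(radius=1.5)
stream = np.vstack([rng.normal(loc=0.0, size=(200, 2)),
                    rng.normal(loc=5.0, size=(200, 2))])  # time-variant distribution
for x in stream:
    cb.update(x)
print("number of codevectors:", len(cb.codevectors))
print("normalized histogram :", np.round(np.array(cb.counts) / sum(cb.counts), 2))
```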
iclr_2020_HylA41Btwr
GANs have been very popular in data generation and unsupervised learning, but our understanding of GAN training is still very limited. One major reason is that GANs are often formulated as non-convex-concave min-max optimization. As a result, most recent studies focused on the analysis in the local region around the equilibrium. In this work, we perform a global analysis of GANs from two perspectives: the global landscape of the outer-optimization problem and the global behavior of the gradient descent dynamics. We find that the original GAN has exponentially many bad strict local minima which are perceived as mode-collapse, and the training dynamics (with linear discriminators) cannot escape mode collapse. To address these issues, we propose a simple modification to the original GAN, by coupling the generated samples and the true samples. We prove that the new formulation has no bad basins, and its training dynamics (with linear discriminators) has a Lyapunov function that leads to global convergence. Our experiments on standard datasets show that this simple loss outperforms the original GAN and WGAN-GP.
In this paper, the authors introduce a new training loss for GANs. This loss allows the outer optimization problem to have no spurious local minima, under an appropriate finite sample analysis. In contrast, the authors establish that there are exponentially many spurious local minima under the conventional GAN training loss. Under a linear discriminator model, the authors show that a standard GAN cannot escape from collapsed modes in a finite sample analysis, whereas the new training loss allows for such an escape (due to the presence of a Lyapunov function with favorable properties). The authors use this new training loss to train GANs on MNIST, CIFAR10, CelebA, and LSUN datasets, and observe mild improvements in Inception Scores and Frechet Inception Distances of the resulting generated images. I recommend the paper be accepted because it provides a new formulation for training GANs that both demonstrates improved empirical performance and allows theoretically favorable properties (on spurious local minima and avoidance of mode collapse) that specifically do not hold for a standard GAN. The primary question I am left with after reading the paper is: is there a probabilistic interpretation of the new loss function (equation 4a)? The authors justify this formulation because it allows analysis via Lyapunov functions, but it would be very useful to know if it is itself the maximum likelihood estimate under an alternate data model. Such an explanation would improve the understandability of this method. Minor comment: The fourth bullet point under the contributions section should specify the sense in which the new GAN "performs better".
iclr_2020_BygMreSYPB
This paper addresses the data-driven identification of latent Ordinary Differential Equation (ODE) representations of partially-observed dynamical systems, i.e. dynamical systems in which some components are never observed, with an emphasis on forecasting applications and long-term asymptotic patterns. Whereas state-of-the-art data-driven approaches rely on an explicit mapping of the observation series onto a latent space, the proposed approach only relies on the definition of an augmented space, higher-dimensional than the manifold spanned by the observed variables, where the dynamics of the observations can be fully described by an ODE. From a numerical point of view, the proposed approach exploits a neural-network ODE representation and the associated variational minimization scheme. Numerical experiments support the relevance of the proposed framework w.r.t. state-of-the-art approaches, including neural ODE schemes, both in terms of short-term forecasting errors and long-term behaviour. We further discuss how the proposed framework relates to Koopman operator theory and Takens' embedding theorem.
Update: I raised the score from 1 to 3 to acknowledge the authors' consideration for the 2000-2010 literature on learning dynamical systems from partial observations. Unfortunately, the writing is still confusing, some of the claims in the introduction and rebuttal are inexact ([5] does not embed the observations and does work with partially observed environments), and the method lacks originality compared to existing work. Other work that relies on ODE integration and in finding high-dimensional state variables has recently been published and tested on more ambitious datasets, e.g. Ayed et al, "Learning Dynamical Systems from Partial Observations", arXiv 2019. *** TL;DR: relatively well written (if sometimes confusing) paper that reinvents the inference of latent variables in nonlinear dynamical systems that has been published in the 2000s, and that misses an important chunk of literature (and experiments on dynamical systems such as Lorenz-63) from that time. This paper proposes an approach for learning dynamical systems from partial observations x, by using an augmented state variable z that follows dynamics that can be described by an ordinary differential equation (ODE) with dynamics f. The authors motivate their work by the problem of dynamical system identification when only partial observations are available. The authors claim that was to date primarily addressed using time-delay embedding, following Takens' theorem. The authors introduce s-dimensional unknown state variables z, dynamical function f (for the ODE on z), flow phi on z, limit cycles on z, observation function H: z -> x that goes from z to n-dimensional observations x, low k-dimensional manifold r (with a map M: x -> r), and state augmentation variable y. The reconstructed state variable u is the concatenation of r and y. One key ingredient of the method is to infer the optimal value of state augmentation variable y during learning (see equations 5 and 6) and inference for forecasting (7); this is not well explained in the abstract and introduction. I would note that the problem of state space modeling (SSM) and dynamical system identification has been well studied, and the notation and reformulation in this paper is somewhat confusing for those who are used to the notation in SSMs (specifically, expressing the observation approximation as M^{-1}(G(phi(u_{t-1}))). Learning a state-space model involves both learning parameters and inferring the latent states representation (or, in graphical models, the distribution of these latent states) given the parametric models. One approach has been to formulate the state-space model learning by maximum likelihood learning of the model parameters so that the generative model fits observed data x, and this would involve factoring out the distribution of latent states z; the algorithm would rely on Expectation Maximisation, and could involve variational approximations or sampling. While the state space models were hampered by their linearity, several papers in 2000s showed how it is possible to learn nonlinear dynamical models, e.g. [4], [5], [6] and [7] to cite earlier ones. Equations (5) and (6) are similar to the standard equations for a dynamical system expressed in continuous time, with the only difference that the optimisation is with respect to y only, rather than w.r.t. z or u (why not \tilde z or \hat z?). The paper mentions various initialisation strategies for y (last paragraph of section 3). 
Why not predict from the past of the observations, like is done in many other similar work? The literature review mixes older and newer references. For example, on page 1, I would note that the Takens' theorem has been applied in conjuction with Support Vector Regression as early as 1999 [1][2], and with neural networks in 1993 [3]. Most importantly, the ideas of this paper have already been published in [4] (with architecture constraints on the neural network state-space model), in [5] (with any nonlinear neural network state-space model), in [6] (using Restricted Boltzmann Machines) and in [7] (using Gaussian Process latent variable models). The model is illustrated with experiments on a 2D linear attractor, on the Lorenz-63 attractor. Given the results published in [1] and [2] using SVR on 1D observations of that attractor, and in [5] using a recurrent neural network, I am unconvinced by these results. It seems in particular that the number of training points (around 4000) limits the performance of RNN / LSTM models. The application to Sea Level Anomaly is interesting. Minor comments: "Unfortunately, When" (page 1) There is a missing -1 after M in equation (5) and (10) In equation (7), should not the sum go from t=0 to T, as x_t is unknown for t>T? What is prediction and what is extrapolation on Figure 1? The caption of Fig 1 contains (left) The figures seem squeezed with the captions / titles un-aesthetically wide. Labels on Figure 5 in the appendix seem mixed, and red should be the ground truth [1] Mattera & Haykin (1999) "Support vector machines for dynamic reconstruction of a chaotic system" [2] Muller, Smola, Ratsch, Scholkopf, Kohlmorgen & Vapnik (1999) "Using support vector machines for time-series prediction" [3] Wan (1994) "Time series prediction by using a connectionist network with internal delay lines" [4] Ghahramani, and Roweis (1999) "Learning nonlinear dynamical systems using an EM algorithm" [5] Mirowski & LeCun (2009) "Dynamic Factor Graphs for Time Series Modeling" [6] Taylor, Hinton & Roweis (2006) "Modeling human motion using binary latent variables" [7] Wang, Fleet & Hertzmann (2006) "Gaussian process dynamical models"
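For reference, the conventional discrete-time nonlinear state-space model that the cited 2000s literature learns (e.g. [4]), written in standard SSM notation rather than the paper's; the noise covariances Q and R are part of the usual formulation and are assumptions of this sketch, not quantities from the paper.

```latex
\[
\begin{aligned}
  z_{t+1} &= f_\theta(z_t) + \eta_t,      & \eta_t     &\sim \mathcal{N}(0, Q),\\
  x_t     &= H_\phi(z_t)  + \epsilon_t,   & \epsilon_t &\sim \mathcal{N}(0, R).
\end{aligned}
\]
```

Learning in this setting fits $f_\theta$, $H_\phi$ (and possibly $Q$, $R$) by maximizing the likelihood of $x_{1:T}$ while inferring the latent states $z_{1:T}$, e.g. with EM or variational approximations, which is the natural point of comparison with the paper's choice to optimize only the unobserved augmented coordinates $y$.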
iclr_2020_rklhqkHFDB
In this paper, we discuss the fundamental problem of representation learning from a new perspective. It has been observed in many supervised/unsupervised DNNs that the final layer of the network often provides an informative representation for many tasks, even though the network has been trained to perform a particular task. The common ingredient in all previous studies is a low-level feature representation for items, for example, RGB values of images in the image context. In the present work, we assume that no meaningful representation of the items is given. Instead, we are provided with the answers to some triplet comparisons of the following form: Is item A more similar to item B or item C? We provide a fast algorithm based on DNNs that constructs a Euclidean representation for the items, using solely the answers to the above-mentioned triplet comparisons. This problem has been studied in a sub-community of machine learning by the name "Ordinal Embedding". Previous approaches to the problem are painfully slow and cannot scale to larger datasets. We demonstrate that our proposed approach is significantly faster than available methods, and can scale to real-world large datasets. Thereby, we also draw attention to the less explored idea of using neural networks to directly, approximately solve non-convex, NP-hard optimization problems that arise naturally in unsupervised learning problems.
Summary: Many prior works have found that the features output by the final layer of neural networks can often be used as informative representations for many tasks despite being trained for one in particular. These feature representations, however, are learned transformations of low-level input representations, e.g. RGB values of an image. In this paper, they aim to learn useful feature representations without meaningful low-level input representations, e.g. just an instance ID. Instead, meaningful representations are learned through gathered triplet comparisons of these IDs, e.g. is instance A more similar to instance B or instance C? Similar existing techniques fall in the realm of learning ordinal embeddings, but this technique demonstrates speed-ups that allow it to scale to large real world datasets. The two primary contributions of the paper are given as: - a showcase of the power of neural networks as a tool to approximately solve NP-hard optimization problems with discrete inputs - a scalable approach for the ordinal embedding problem After experimentation on synthetic data, they compare the effectiveness of their proposed method Ordinal Embedding Neural Network (OENN) against the baseline techniques of Local Ordinal Embedding (LOE) and t-distributed Stochastic Triplet Embedding (TSTE). The test error given by the systems is comparable, but there are clear speed benefits to the proposed method OENN as the other techniques could not be run for a dataset size of 20k, 50k, or 100k. Then, they gathered real-world data using MTurk applied to a subset of ImageNet and applied OENN to learning embeddings of different image instances using only the MTurk triplet information rather than the input RGB input features. Decision: Weak Reject 1. Interesting technique to take advantage of neural networks to efficiently learn ordinal embeddings from a set of relationships without a low-level feature representation, but I believe the experiments could be improved. One of the main advantages of this approach is efficiency, which allows it to be used on large real-world datasets. The MTurk experiment gives a qualitative picture, but it could be improved with comparisons to pairwise distances learned through alternative means using the RGB image itself (given that images would permit such a comparison). By this I mean, that you may be able to use relationships learned using conventional triplet methods which use input RGB features as ground truth, and test your learned relationships against those. However, since quantitative exploration of large real-world datasets may be challenging and expensive to collect, the synthetic experiments could have been more detailed. The message of synthetic experiments would be stronger if more of them were available and if the comparison between LOE, TSTE, and OENN was made on more of them. 2. I think that the claim that the use of neural networks with discrete inputs can approximately solve NP-hard optimization problems is an exciting one, which likely necessitates more experiments (or theoretical results), but as it stands I don't think it is a fundamentally different conclusion from the fact that this method provides a great scalable solution for the ordinal embedding problem. This claim can be made secondarily or as motivation for continued exploration along this direction, but I think listing them as two distinct contributions is necessary. 
Additional feedback: Since quantitative real-world results are challenging to obtain, improved presentation of the qualitative results would be helpful as well. You may be able to show more plots which help display the quality of the embedding space varying with the number of triplets used. For example, an additional plot after Figure 5 (b) which shows a few scatter plots of points (color coded by class) for training with different numbers of collected triplets. Also, since it should be fairly easy to distinguish between cars and animals or cars and food, it may be more interesting to focus on the heat-maps from along the block diagonal of Figure 5 (a) and talk about what relationships may have been uncovered within the animal or food subsets. Very minor details: In Figure 5, a legend indicating the relationship between color intensity and distance would be helpful. In Figure 6 there seem to be unnecessary discrepancies between the y-axis and colorbar of subplots (a) and (b), and keeping those more consistent would improve readability.
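To make the quantitative discussion above easier to reproduce, here is a minimal sketch of the ordinal-embedding objective: an embedding is learned for bare item indices so that answered triplets ("i is closer to j than to k") are satisfied. The authors' OENN uses a neural network on index encodings rather than the free embedding table used here, and the hinge margin, learning rate, and simulated oracle are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim, margin, lr = 100, 2, 0.1, 0.05

# ground-truth positions used only to answer triplets (stand-in for crowd answers)
truth = rng.normal(size=(n_items, 2))
def answer(i, j, k):
    """Is item i more similar to j than to k? (simulated oracle)"""
    return np.linalg.norm(truth[i] - truth[j]) < np.linalg.norm(truth[i] - truth[k])

# collect random triplet answers; the learner only ever sees (i, j, k) indices
triplets = []
for _ in range(5000):
    i, j, k = rng.choice(n_items, size=3, replace=False)
    triplets.append((i, j, k) if answer(i, j, k) else (i, k, j))

# learn an embedding of the item IDs from the triplets alone (hinge loss, SGD)
E = rng.normal(scale=0.1, size=(n_items, dim))
for epoch in range(50):
    for i, j, k in triplets:
        d_pos = np.sum((E[i] - E[j]) ** 2)
        d_neg = np.sum((E[i] - E[k]) ** 2)
        if margin + d_pos - d_neg > 0:        # triplet violated (or inside the margin)
            gi = 2 * (E[k] - E[j])            # gradient of the active hinge term
            gj = -2 * (E[i] - E[j])
            gk = 2 * (E[i] - E[k])
            E[i] -= lr * gi; E[j] -= lr * gj; E[k] -= lr * gk

violated = sum(np.sum((E[i] - E[j]) ** 2) >= np.sum((E[i] - E[k]) ** 2)
               for i, j, k in triplets)
print("fraction of training triplets violated:", violated / len(triplets))
```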
iclr_2020_HyeAPeBFwS
Bayesian inference is used extensively to quantify the uncertainty in an inferred field given the measurement of a related field when the two are linked by a mathematical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to characterize mathematically. In this work we demonstrate how the approximate distribution learned by a generative adversarial network (GAN) may be used as a prior in a Bayesian update to address both these challenges. We demonstrate the efficacy of this approach by inferring and quantifying uncertainty in inference problems arising in computer vision and physics-based applications. In both instances we highlight the role of computing uncertainty in providing a measure of confidence in the solution, and in designing successive measurements to improve this confidence.
Summary of the paper: The paper proposes a Bayesian approach to make inference about latent variables such as un-corrupted images. The prior distribution plays a key role in this task. The authors use a GAN to estimate this prior distribution. Then, standard Bayesian techniques such as Hamiltonian Monte Carlo are used to make inference about the latent variables. Detailed comments: Eq. (8) is expected to give very bad results. The reason is that it is very unlikely that sampling from the prior will produce configurations of z that are compatible with y. The paper does not address learning any model parameters, e.g. the amount of noise. A more principled approach would be to estimate the prior parameters using maximum likelihood estimation. That has already been done in the case of the variational autoencoder. The variational autoencoder is an already known method that can be used to solve the problem formulated by the authors. It also automatically proposes an inference network that can be used for recognition. If the likelihood is Gaussian and p(x|z) is also Gaussian, one can directly marginalize x and work with p(y|z) and p(z). The authors should at least discuss the potential use of this method alongside the BiGAN model, which also provides a recognition model. It is not clear how the HMC parameters are fixed. The experiments do not have error bars (Figure 4). This calls into question the significance of the results. My overall impression is that there is little novelty in the proposed approach, namely using a GAN to learn the prior distribution, and then very well known techniques to infer the original input image. I have missed some references to related work on inverse problems. An example is: https://arxiv.org/pdf/1712.03353.pdf Is the original figure contained in the training set used to train the GAN? If so, that can lead to biased results. I have missed a simple baseline in which one simply finds the training image that is closest to the corrupted observed or partially observed image. My overall impression is that there is not much novelty in the paper as it is simply a combination of well known techniques, e.g. GANs and Bayesian inference with Monte Carlo methods.
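To make the Bayesian update under discussion concrete, below is a sketch of the unnormalized log-posterior over the GAN latent z that HMC would target, with a standard-normal latent prior and a Gaussian likelihood whose noise level sigma is treated as fixed (the very parameter the review notes is not learned). The generator, forward operator, and mask are stand-ins so the sketch runs end to end.

```python
import numpy as np

def generator(z):
    """Stand-in for a trained GAN generator G: z -> x (a fixed random linear map
    followed by tanh, only so the example is self-contained)."""
    rng = np.random.default_rng(42)
    A = rng.normal(size=(64, z.shape[0]))
    return np.tanh(A @ z)

def log_posterior(z, y, forward_op, sigma=0.1):
    """log p(z | y) up to a constant, for a measurement y = forward_op(G(z)) + noise.

    log-prior:      standard normal over z (the GAN's latent distribution)
    log-likelihood: isotropic Gaussian with fixed std sigma
    HMC needs this quantity and its gradient w.r.t. z; with a neural generator the
    gradient would come from automatic differentiation rather than this numpy code.
    """
    residual = y - forward_op(generator(z))
    log_lik = -0.5 * np.sum(residual ** 2) / sigma ** 2
    log_prior = -0.5 * np.sum(z ** 2)
    return log_lik + log_prior

# toy usage: infer z from a partial (masked) observation of the generated field
rng = np.random.default_rng(0)
z_true = rng.normal(size=8)
mask = np.zeros(64); mask[::4] = 1.0                 # observe every 4th component
forward_op = lambda x: mask * x
y = forward_op(generator(z_true)) + 0.1 * rng.normal(size=64)
print(log_posterior(z_true, y, forward_op))
print(log_posterior(np.zeros(8), y, forward_op))     # typically much lower than at z_true
```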
iclr_2020_S1xqRTNtDr
Imitation Learning (IL) is a machine learning approach to learn a policy from a set of demonstrations. IL can be useful to kick-start learning before applying reinforcement learning (RL) but it can also be useful on its own, e.g. to learn to imitate human players in video games. However, a major limitation of current IL approaches is that they learn only a single "average" policy based on a dataset that possibly contains demonstrations of numerous different types of behaviors. In this paper, we present a new approach called Behavioral Repertoire Imitation Learning (BRIL) that instead learns a repertoire of behaviors from a set of demonstrations by augmenting the state-action pairs with behavioral descriptions. The outcome of this approach is a single neural network policy conditioned on a behavior description that can be precisely modulated. We apply this approach to train a policy on 7,777 human demonstrations for the build-order planning task in StarCraft II. Dimensionality reduction techniques are applied to construct a low-dimensional behavioral space from the high-dimensional army unit composition of each demonstration. The results demonstrate that the learned policy can be effectively manipulated to express distinct behaviors. Additionally, by applying the UCB1 algorithm, the policy can adapt its behavior, in-between games, to reach a performance beyond that of the traditional IL baseline approach.
Summary: The paper proposes behavioral repertoire imitation learning (BRIL), which aims to learn a collection of policies from diverse demonstrations. BRIL learns such a collection by learning a context-dependent policy, where the context variable represents the behavior of each demonstration. To obtain a context variable, BRIL relies on the user's knowledge: the user manually defines a feature space that describes behavior. This feature space is then reduced by using a dimensionality reduction method such as t-SNE. Lastly, the policy is learned by supervised learning (behavior cloning) with a state-context input variable and an action output variable. The method is experimentally evaluated on the StarCraft environment. The results show that BRIL performs better than two baselines: behavior cloning on diverse demonstrations and behavior cloning on clustered demonstrations. Score: The weaknesses of the paper are novelty, clarity, and evaluation. Please see the detailed comments below. I vote for rejection. Comments: - Novelty of the proposed idea. The major issue of the paper is the lack of novelty. The idea of learning a context-dependent policy in BRIL closely resembles that of existing multi-modal IL methods (Wang et al., 2017, Li et al., 2017). The main difference is that BRIL relies on a manually defined context variable (behavioral feature space). In contrast, the existing methods aim to learn the context variable from demonstrations. BRIL is too simple when compared to the existing methods. Moreover, using a manually specified feature space does not go well with the main principle of deep learning, which is to learn informative feature spaces from data end-to-end. I think that ICLR is not a suitable venue for this paper. - Clarity of the proposed method. The second issue of the paper is clarity. Specifically, two important steps of BRIL are policy learning by supervised learning (behavior cloning) and dimensionality reduction by t-SNE. However, explanations of these two steps are vague and incomplete. For example, in Section 2.1, the paper describes IL as supervised learning, but does not mention the issue of covariate shift, which is well-known when treating IL as supervised learning (Ross et al., 2011). Also, it is incorrect to state that an IL agent cannot interact with the environment during training, since many IL methods such as GAIL require interactions with the environment during training. Meanwhile, in Section 2.4, it is unclear how probability distributions in t-SNE reflect similarity between data points. [1] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. AISTATS, 2011. - Evaluation of the proposed method is too narrow. The paper lacks important baseline methods in the experiment. Specifically, the paper does not compare BRIL against multi-modal IL methods (Wang et al., 2017, Li et al., 2017), which also learn a context-dependent policy. Moreover, BRIL is evaluated only on the StarCraft environment with only one kind of manually specified feature. This raises a question about the generality of BRIL and its sensitivity to the choice of feature. To improve the paper, I suggest that the authors compare the proposed method against multi-modal IL methods on different environments, and evaluate BRIL with different choices of behavioral features.
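To make the criticized pipeline explicit, here is a minimal sketch of the two manual steps the review describes: reducing a hand-designed behavioral feature (e.g. the unit composition of each demonstration) with t-SNE, then fitting an ordinary supervised policy on (state, behavior-descriptor) inputs. The classifier, dimensions, and random "demonstrations" are stand-ins, not the paper's network or data.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_demos, n_steps, state_dim, n_unit_types, n_actions = 60, 30, 10, 5, 4

# hand-designed behavioral feature: unit-composition vector of each demonstration
unit_composition = rng.dirichlet(np.ones(n_unit_types), size=n_demos)
behavior = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(unit_composition)

# build a supervised dataset of (state, behavior-descriptor) -> action pairs
states = rng.normal(size=(n_demos, n_steps, state_dim))
actions = rng.integers(0, n_actions, size=(n_demos, n_steps))        # stand-in labels
X = np.concatenate([states.reshape(-1, state_dim),
                    np.repeat(behavior, n_steps, axis=0)], axis=1)
y = actions.reshape(-1)

policy = LogisticRegression(max_iter=1000).fit(X, y)                 # stand-in for a NN policy

# at test time the same policy is modulated by choosing a point b in behavior space
b = behavior.mean(axis=0)
test_state = rng.normal(size=state_dim)
print(policy.predict(np.concatenate([test_state, b])[None, :]))
```

The sketch also makes the reviewer's point visible: nothing here is learned end-to-end; the descriptor space is entirely determined by the hand-chosen feature and the t-SNE projection.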
iclr_2020_BkeYSlrYwH
Reinforcement Learning (RL) has demonstrated promising results across several sequential decision-making tasks. However, reinforcement learning struggles to learn efficiently, thus limiting its pervasive application to several challenging problems. A typical RL agent learns solely from its own trial-and-error experiences, requiring many experiences to learn a successful policy. To alleviate this problem, we propose collaborative inter-agent knowledge distillation (CIKD). CIKD is a learning framework that uses an ensemble of RL agents to execute different policies in the environment while sharing knowledge amongst agents in the ensemble. Our experiments demonstrate that CIKD improves upon state-of-the-art RL methods in sample efficiency and performance on several challenging MuJoCo benchmark tasks. Additionally, we present an in-depth investigation on how CIKD leads to performance improvements.
This paper introduces a method for using an ensemble of deep reinforcement learning policies, where members of the ensemble are periodically updated to imitate the most promising member of the ensemble. Thus learning proceeds by performing off-policy reinforcement learning updates for each individual policy, as well as some supervised learning for inter-policy imitation learning. I start with what I view as the positive aspects of the paper: 1- The algorithm is quite simple (to understand and to implement). 2- Experiments are performed on a variety of domains, and more importantly, each experiment is motivated by a question. That said, I have some concerns about this paper which I list below: 1- Perhaps my biggest concern is that the approach is not motivated from a theoretical standpoint. There have been interesting results in Osband's work [Osband, 2016] (and references therein) for randomized value functions which can serve as a foundation for this work. That said, a) Osband's results, at least immediately, are related to value-function based methods, as opposed to policy gradient methods; b) the KL update, which one could argue is the main and only significant contribution of the paper, is not justified by Osband or any other prior work; c) this paper does not add anything to the literature to better justify diversity through randomization and/or imitation learning based on the best member of the ensemble. 2- I have found various claims in the paper which are unclear, scientifically untrue, or sometimes even contradictory. In the Introduction, for example, the authors mention that the agent sometimes gets into a sub-optimal policy and may require a large number of interactions before escaping the sub-optimal policy. How does gathering more data help to improve the policy? Either we are at a local maximum, in which case, if we are doing gradient ascent, there is really not much we can do, or we are at a saddle point, which we can escape by adding some noise to the gradient [Jin, 2017]. 3- In section 4.3 the authors talk about on-policy methods requiring importance sampling (IS) ratios. To the best of my knowledge, IS is only used for off-policy learning. Can the authors provide a link to an on-policy method that does IS? 4- Again in section 4.3 the authors claim, and I quote, "Using off-policy methods, all the policies in the ensemble can easily be updated, since off-policy update methods can perform updates from any \tau". But later on in Section 5.3 the authors claim that "off-policy actor-critic methods (e.g. SAC) cannot fully utilize the other agent's or past experience." So which statement is true? 5- Again, the KL update is interesting, but is it even surprising that the KL update is necessary for an ensemble of policies updated using policy gradients? In the absence of this KL update, which the authors characterize as the method that Osband proposed, the policies could generally be arbitrarily far from one another. This means that each policy needs to perform policy evaluation using trajectories coming from other policies that in principle can be radically different from the policy we want to update. This means that updates will be quite "off-policy", which we know can really degrade the quality of the estimated gradient. This is perhaps why even choosing a random policy to update towards provides "some" improvement. I think this is the real insight, but it is not really discussed at all in the paper.
6- On the same note, I do not think that one can say Osband's method is the same as CIKD but only without the KL update. Most notably, Osband's work was presented for value-function-based methods like DQN. These methods work fundamentally differently from policy gradient methods, which rely on (near) on-policy updates to perform good policy improvements. In that sense, the presented results make sense, but I disagree with the framing of the results and how they are presented here (a minimal sketch of the distillation step in question is given at the end of this review). 7- In section 5.3, when the authors utilize more policy updates to have a fair comparison, are they retuning hyper-parameters? Surely they need to do that, at least for hyper-parameters that are known to be very important, such as the step size. 8- Overall I liked section 5.5, which tries to dissect the causes of improvement. However, it seems that the "dominant agent" hypothesis has been rejected hastily, unless I misunderstood the experiment. The authors show that the notion of best is spread across different agents. But of course this will be the case in light of the KL update, since the policies are getting closer to one another. Can you redo the experiment in the absence of the KL update? 9- Have the authors thought about any connection between this and genetic algorithms? In genetic algorithms, the idea is that the next set of candidates is chosen based on the most promising candidates in the current iteration. CIKD seems like a soft implementation of this idea. In light of the comments above, I am voting for weak rejection, though as I said before, I do see some interesting things in this paper. I encourage the authors to think about CIKD from a theoretical lens in the future.
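To make the discussion in points 5-6 concrete, here is a minimal sketch of the kind of inter-agent distillation step in question. The Gaussian policy heads, the direction of the KL, the batch of states, and the coefficient beta are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of an inter-agent distillation (KL) step for an ensemble of
# Gaussian policies: every member is pulled towards the currently best-performing one.
# 'beta', the KL direction, and the policy interface are illustrative assumptions.
import torch
from torch.distributions import Normal, kl_divergence

def distill_towards_best(policies, best_idx, states, beta=0.1):
    """Return a distillation loss pulling every policy towards the best member."""
    with torch.no_grad():
        mu_b, std_b = policies[best_idx](states)        # teacher statistics (no gradient)
        teacher = Normal(mu_b, std_b)
    losses = []
    for i, pi in enumerate(policies):
        if i == best_idx:
            continue
        mu, std = pi(states)
        student = Normal(mu, std)
        # KL(student || teacher), averaged over the batch and action dimensions
        losses.append(beta * kl_divergence(student, teacher).mean())
    return torch.stack(losses).sum()
```

In practice this term would be added to each member's usual off-policy actor loss, which is why the question of how "off-policy" the updates become without it is the interesting one.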
iclr_2020_S1ekaT4tDB
It has been repeatedly observed that convolutional architectures when applied to image understanding tasks learn oriented bandpass filters. A standard explanation of this result is that these filters reflect the structure of the images that they have been exposed to during training: Natural images typically are locally composed of oriented contours at various scales and oriented bandpass filters are matched to such structure. The present paper offers an alternative explanation based not on the structure of images, but rather on the structure of convolutional architectures. In particular, complex exponentials are the eigenfunctions of convolution. These eigenfunctions are defined globally; however, convolutional architectures operate locally. To enforce locality, one can apply a windowing function to the eigenfunctions, which leads to oriented bandpass filters as the natural operators to be learned with convolutional architectures. From a representational point of view, these filters allow for a local systematic way to characterize and operate on an image or other signal.
This paper claims that convolutional filters in CNNs are not the result of fitting to the input data distribution but rather the optimal solution to a spectral decomposition of the convolutional operator. Positive things about this work: 1) it is clearly written; 2) the fact that Gabor wavelets are the eigenfunctions of convolution is sound; 3) it provides good food for thought about what needs to be learned and what comes from the pre-specified choice of architecture. Negative things about this work: 1) it misses references to relevant works. In particular, there is an article making essentially the same points: Joan Bruna, Soumith Chintala, Yann LeCun, Serkan Piantino, Arthur Szlam, and Mark Tygert, "A mathematical motivation for complex-valued convolutional networks," Neural Computation, 28 (5): 815-825, 2016 http://tygert.com/ccnet.pdf where these authors reach the same conclusions and observe that learning reduces to figuring out the windowing, the number of scales, etc., but not the type of filters. This prior work greatly reduces the impact of this contribution, unfortunately. There are other references that are missing, but these are minor points compared to the above. For instance, I'd recommend citing Hubel & Wiesel's seminal work on mapping the mammalian receptive fields, and older work by M. Lewicki about analyzing learned receptive fields by sparse coding algorithms, similar to the cited B. Olshausen et al. 2) The authors show that Gabor wavelets are eigenfunctions of convolution and stop there, concluding that filters in CNNs are the way they are because of the architecture, but what about the effect of the non-linearities, the depth, and the type of cost used for training? The work is unfinished without a thorough analysis and discussion of these crucial aspects.
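As a one-line reminder of the eigenfunction point (standard Fourier analysis, not a derivation taken from the paper): for any convolution kernel k, a complex exponential passes through convolution unchanged up to a complex scalar,

```latex
\[
(k \ast e_{\omega})(t) = \int k(s)\, e^{i\omega (t-s)}\, ds
 = \left(\int k(s)\, e^{-i\omega s}\, ds\right) e^{i\omega t}
 = \hat{k}(\omega)\, e^{i\omega t},
\]
```

so e_{\omega}(t) = e^{i\omega t} is an eigenfunction with eigenvalue \hat{k}(\omega). Windowing these globally supported eigenfunctions to respect locality is what yields the Gabor-like oriented bandpass filters the paper argues for, and it is exactly at this point that the question about non-linearities, depth, and the training cost enters.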
iclr_2020_SylOlp4FvH
Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported. V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.
The paper proposes an online variant of MPO, V-MPO. Compared to MPO, the main difference seems to be in the E-step. Instead of optimizing the non-parametric distribution towards a parameterized Q-function, V-MPO learns the V-function and updates the non-parametric distribution towards the advantages, which can be estimated on the samples of the last roll-outs based on the empirical returns and the learned V-function (a small sketch of this weighting is given at the end of this review). Of course, updating towards the (exponentiated and normalized) Q- or A-function does not make a difference. There are also some minor changes (which might still be crucial) such as an entropy constraint in the M-step and using only the top-k advantages during the E-step (an option that was discussed in Abdolmaleki et al., 2018a). V-MPO is evaluated on DMLab, ALE and two humanoid tasks from the DeepMind Control Suite. In most of these tasks V-MPO achieves returns that are, to the best of my knowledge, higher than any previously reported ones. However, the experiments also use a very large number of system interactions (in the order of billions). Contribution / Significance: I think that there will be relatively high interest in the paper due to the reported performances. The technical contribution seems a bit incremental compared to MPO. Also, by learning a value function V-MPO gets closer to REPS. The submission lists the use of top-k samples and the M-step KL bound as the main differences to REPS. However, the former is not evaluated in the submission and the latter, albeit crucial, seems to be a relatively small modification. I do think that there are more differences to REPS, most importantly probably in the way the value function is learned and the corresponding differences in the derivations. However, I think that the differences and similarities to MPO and REPS need to be discussed more thoroughly. Soundness: The derivation of V-MPO is relatively sound. The optimization of the KL constraints seems very approximate, although it seems to work well in practice. Clarity: I do not like the way the algorithm is presented. The submission specifies the complete loss function already at the beginning of the "Method" section and derives/motivates the individual terms in hindsight. The spaghetti-code-like structure unnecessarily forces the reader to jump between pages or keeps the reader in the dark. I also do not like the "stop-gradient" notation, which in my opinion puts the focus on low-level implementation details at the cost of not properly explaining what the optimization actually does. I think that the paper is well-written in general, but the structure needs to be improved. Experiments: The evaluation clearly focuses on achieving the best performance, and does a good job in that regard. However, a good evaluation should also help in understanding the mechanics of V-MPO. How does k (in top-k) affect the performance? How well are the constraints met during optimization? How does V-MPO compare to related on-policy methods (e.g. TRPO/PPO) in slightly more computationally constrained settings (e.g., rllab/mujoco with < 1e7 steps)? Questions: - Did you experiment with controlled entropy reduction akin to MORE, instead of using a fixed epsilon_eta? - Can you give a rough estimate of the computational time required to perform these experiments on a standard desktop PC? It really is difficult for me to even roughly estimate it. Assessment: Currently, I am leaning towards accept because I think that V-MPO is overall a nice piece of work. However, I do think that the submission needs to be revised.
I mainly think that the structure needs to be improved and that V-MPO needs to be related more thoroughly to closely related work. I have listed some additional experiments that would, in my opinion, significantly improve the submission.
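To make the E-step described at the start of this review concrete, a minimal sketch of the non-parametric weighting over the last roll-outs. The fixed temperature eta and the top-k fraction are simplifications (in V-MPO eta is itself optimized under a constraint), so this is only the general shape of the computation, not the paper's implementation.

```python
# Sketch of a V-MPO-style E-step: turn advantages estimated from the latest roll-outs
# into normalized sample weights, keeping only the top-k advantages. 'eta' is fixed
# here for simplicity; in V-MPO it is optimized under its own epsilon_eta constraint.
import numpy as np

def e_step_weights(returns, values, eta=1.0, top_frac=0.5):
    adv = returns - values                       # A(s, a) ~ empirical return - V(s)
    k = max(1, int(top_frac * len(adv)))
    keep = np.argsort(adv)[-k:]                  # indices of the top-k advantages
    w = np.zeros_like(adv, dtype=float)
    w[keep] = np.exp(adv[keep] / eta)
    w /= w.sum()                                 # weights for the weighted-ML M-step
    return w
```

The M-step then fits the parametric policy by weighted maximum likelihood on the corresponding (state, action) pairs, subject to the KL bound discussed in the review.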
iclr_2020_ryloogSKDS
Reasoning about uncertain orientations is one of the core problems in many perception tasks such as object pose estimation or motion estimation. In these scenarios, poor illumination conditions, sensor limitations, or appearance invariance may result in highly uncertain estimates. In this work, we propose a novel learning based representation for orientation uncertainty. Characterizing uncertainty over unit quaternions with the Bingham distribution allows us to formulate a loss that naturally captures the antipodal symmetry of the representation. We discuss the interpretability of the learned distribution parameters and demonstrate the feasibility of our approach on several challenging real-world pose estimation tasks involving uncertain orientations.
The paper proposes a Bingham loss (based on the Bingham distribution) to model the uncertainty of orientations (an important factor for pose estimation and other tasks). This distribution has the necessary characteristics required to represent orientation uncertainty using quaternions (one way to represent object orientation in 3D), such as antipodal symmetry. The authors propose various additions such as using precomputed lookup tables to represent a simplified version of the normalization constant (to make it computationally tractable), and the use of the Expected Absolute Angular Deviation (EAAD) to make the uncertainty of the Bingham distribution more interpretable. +Uncertainty quantification of neural networks is an important problem that I believe should gain more attention, so I am happy to see papers such as this one. +Various experiments on multiple datasets show the efficacy of the method, outperforming or showing comparable results to the state of the art. -In the caption for Table 1 the authors write: "the high likelihood and lower difference between EAAD and MAAD indicate that the Bingham loss better captures the underlying noise." How much difference between EAAD and MAAD is considered significant and why? -In section 4.5 they write "While Von Mises performs better on the MAAD, we observe that there is a larger difference between the MAAD and EAAD values for the Von Mises distribution than the Bingham distribution. This indicates that the uncertainty estimates of the Von Mises distribution may be overconfident." Same question as above. What amount of difference between MAAD and EAAD is considered significant and why?
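For readers unfamiliar with the metrics discussed above, a small sketch of the antipodally symmetric angular deviation that MAAD averages; EAAD would replace the point prediction by an expectation under the predicted Bingham distribution (typically estimated by sampling). This is the standard quaternion geodesic angle, not code taken from the paper.

```python
# Angular deviation between unit quaternions (antipodal-symmetric, q and -q identified),
# and MAAD as the mean deviation between predicted and ground-truth orientations.
import numpy as np

def angular_deviation(q_pred, q_true):
    # q_pred, q_true: (N, 4) arrays of unit quaternions; |dot| handles the q ~ -q symmetry
    dots = np.abs(np.sum(q_pred * q_true, axis=1)).clip(0.0, 1.0)
    return 2.0 * np.arccos(dots)                 # rotation angle between the two poses

def maad(q_pred, q_true):
    return np.degrees(angular_deviation(q_pred, q_true)).mean()
```

Comparing MAAD against the EAAD implied by the predicted distribution is, as far as I can tell, the calibration check whose significance the review asks to have quantified.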
iclr_2020_rJg3zxBYwH
Normalizing Flows (NFs) are able to model complicated distributions p_Y(y) with strong inter-dimensional correlations and high multimodality by transforming a simple base density p_Z(z) through an invertible neural network under the change of variables formula. Such behavior is desirable in multivariate structured prediction tasks, where handcrafted per-pixel loss-based methods inadequately capture strong correlations between output dimensions. We present a study of conditional normalizing flows (CNFs), a class of NFs where the base density to output space mapping is conditioned on an input x, to model conditional densities p_{Y|X}(y|x). CNFs are efficient in sampling and inference, they can be trained with a likelihood-based objective, and CNFs, being generative flows, do not suffer from mode collapse or training instabilities. We provide an effective method to train continuous CNFs for binary problems and in particular, we apply these CNFs to super-resolution and vessel segmentation tasks demonstrating competitive performance on standard benchmark datasets in terms of likelihood and conventional metrics.
Summary of the paper: The paper proposes an extension of normalizing flows to conditional distributions. The paper is well written overall and easy to follow. Basically, the latent code given x is z = f_{\phi}(y, x), where x is the conditioning random variable, and the change of variables formula is applied to get the density of y|x. For example, in super-resolution y is the high-resolution image and x is the low-resolution image. To sample from the model, the authors propose to use f^{-1}_{\phi}(z; x). The conditional modules are natural extensions of invertible blocks used in the literature (coupling layers, split priors, conditional coupling, 1x1 conv), where the conditioning is done on some hidden representations of the conditioning variable x (i.e., one or multiple layers of a NN). The authors propose a dequantization for binary random variables (useful for segmentation applications), where they give an implicit model for the dequantizer (obtaining a continuous variable from a discrete binary variable). The authors apply the method to two applications, super-resolution and vessel segmentation. The method is compared to supervised learning of the correspondence between x and y and to other competitive methods in the literature, and shows some advantage. Minor comments: - Formatting: the bibliography is messed up and needs some cleaning; Figure 5 also causes formatting issues in the paper. - In Figure 1, for sampling it should be f^{-1}_{\phi} and not f_{\phi}. Review: - From Figure 2 it is hard to get any idea of the sample quality; it would be good to also show the low-resolution input to the algorithm. Also, did you use temperature sampling for the baseline? Otherwise the comparison is not fair. - The DRIVE database is too small (only 20 training and 20 test samples); could the model simply be overfitting? - In the vessel implementation, why do you drop the scaling modules? - The conditioning on x for the vessel implementation is on two layers; it would be great to describe all model architectures in detail, and to show both the sampling and training paths. - It would be great to add the details of the skip connections used from the network processing x, and how to ensure that the flow remains invertible. Overall this is a well written paper and a good addition to normalizing flow methods; some discussion of related work on conditional normalizing flows and more baselines with other competitive methods, based on GANs for example, would be helpful but not necessary. It would be great to add details of the architectures and of the skip connections, and how to ensure invertibility for this part of the model.
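For reference, a minimal sketch of a conditional affine coupling layer of the kind such conditional flows are built from. The layer widths, the tanh-bounded scales, and how the conditioning features h(x) are produced are illustrative placeholders, not the authors' architecture.

```python
# Minimal conditional affine coupling layer: half of y is transformed with a scale and
# shift predicted from the other half concatenated with conditioning features h(x).
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    def __init__(self, y_dim, cond_dim, hidden=128):
        super().__init__()
        self.d = y_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (y_dim - self.d)))

    def forward(self, y, h_x):                      # training direction: y -> z
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.net(torch.cat([y1, h_x], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                           # keep log-scales bounded for stability
        z2 = y2 * torch.exp(s) + t
        return torch.cat([y1, z2], dim=1), s.sum(dim=1)   # transformed y and log|det J|

    def inverse(self, z, h_x):                      # sampling direction: z -> y
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.net(torch.cat([z1, h_x], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        return torch.cat([z1, (z2 - t) * torch.exp(-s)], dim=1)
```

As long as the network producing h(x) only feeds the scale/shift predictor (and never needs to be inverted), invertibility in y is preserved, which is presumably the constraint the skip connections from the x-network have to respect as well.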
iclr_2020_Hke-WTVtwr
Sequential word order is important when processing text. Currently, neural networks (NNs) address this by modeling word position using position embeddings. The problem is that position embeddings capture the position of individual words, but not the ordered relationship (e.g., adjacency or precedence) between individual word positions. We present a novel and principled solution for modeling both the global absolute positions of words and their order relationships. Our solution generalizes word embeddings, previously defined as independent vectors, to continuous word functions over a variable (position). The benefit of continuous functions over variable positions is that word representations shift smoothly with increasing positions. Hence, word representations in different positions can correlate with each other in a continuous function. The general solution of these functions is extended to complex-valued domain due to richer representations. We extend CNN, RNN and Transformer NNs to complex-valued versions to incorporate our complex embedding (we make all code available). Experiments on text classification, machine translation and language modeling show gains over both classical word embeddings and position-enriched word embeddings. To our knowledge, this is the first work in NLP to link imaginary numbers in complex-valued representations to concrete meanings (i.e., word order).
### Summary The authors present a "natural" way of encoding position information into word embeddings and present extensive empirical evidence to support their method. I believe the paper meets the bar for acceptance. ### Details The paper "Encoding word order in complex embeddings" presents a method for making word embeddings position dependent. The idea in a nutshell is to map each discrete position, n, to a value `A exp(freq_{word, dim} × n)`. So a word embedding is a collection of complex-valued signals sampled at discrete points. The frequency is, in general, dependent on each word and each dimension. The authors motivate/justify this particular formulation via their Claim 1, which argues that their particular formulation uniquely satisfies two intuitive constraints, although one of those constraints (i.e., the linearly witnessed position-free offset) almost completely specifies the solution. The experiments in the paper are fairly thorough and cover text classification, machine translation and language modeling. Through the comparative experiments we can see that the complex embeddings proposed in this paper outperform existing SOTA methods, sometimes by a significant margin, such as a difference of 1.3 BLEU points on the MT task. I would have liked to say that the ablations are similarly conclusive, but there seems to be a problem in the table, which eroded my confidence: 1. The number of parameters in rows 5 and 8 (w/t encoding positions, share / not-share respectively) are reported to be 9.38M and 8.33M, which has to be wrong. A similar problem occurs with other pairs. Now I am not sure whether the results were also swapped or not. Still, the results generally trend in the right direction. ### Possible improvements to the paper 1. The main weakness of the paper is that the authors repeatedly mention that encoding the position as a multiplicative factor applied to the frequency leads to a more decoupled/interpretable embedding, but their experiments are focused solely on accuracy measurements. I would have liked to see the authors carry out more experiments to see whether the frequency parameters really are interpretable. For example (a small sketch of how these quantities could be computed is given at the end of this review): - What is the histogram of the frequencies? Are some of them negative? - Which word has the highest frequencies (pooled over all dimensions) in absolute terms? Does it make sense that that word's meaning is so position dependent? For example, positions can capture subjects versus objects in English, but they will more reliably reflect the subject versus verb distinction in Hindi. - Are the word frequencies by themselves predictive of anything? For example, what happens if the word embedding amplitudes are tied across words or dimensions? We expect the performance to be bad, but how bad? These kinds of ablations / qualitative analyses would really make the paper more informative and interesting. Right now it just seems like yet another paper where the capacity of the model is increased and the accuracy increases. Especially because the delta improvement over the fixed positional embeddings of Transformer-TPE (Vaswani et al., 2017) is so limited. ### Edit after the author response The authors have made the required corrections and added the necessary analysis. One interesting outcome was that the "word-sharing amplitude schema" drops so little in performance that it is almost as if all the information can be coded in the phase vectors alone.
It would be nice if the authors released their trained phase embeddings for the words as well.
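To make the formulation and the suggested frequency analyses concrete, a small sketch; the parameter names and shapes (per-word, per-dimension amplitude, frequency and phase tables) are assumptions for illustration, not the paper's actual code or parameterization.

```python
# Sketch of a position-dependent complex embedding amp * exp(i * (freq * n + phase)),
# plus the per-word |frequency| pooling suggested as an interpretability analysis.
# Parameter shapes (amp/freq/phase of shape [vocab, dim]) are assumptions.
import numpy as np

def complex_embedding(token_ids, positions, amp, freq, phase):
    # token_ids, positions: integer arrays of shape (seq_len,)
    a = amp[token_ids]                                        # (seq_len, dim)
    theta = freq[token_ids] * positions[:, None] + phase[token_ids]
    return a * np.exp(1j * theta)                             # complex (seq_len, dim)

def pooled_abs_frequency(freq):
    # one number per word: mean |frequency| over dimensions, e.g. to ask which words
    # are the most position-sensitive, or to plot a histogram over the vocabulary
    return np.abs(freq).mean(axis=1)
```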
iclr_2020_HygsuaNFwr
We propose order learning to determine the order graph of classes, representing ranks or priorities, and classify an object instance into one of the classes. To this end, we design a pairwise comparator to categorize the relationship between two instances into one of three cases: one instance is 'greater than,' 'similar to,' or 'smaller than' the other. Then, by comparing an input instance with reference instances and maximizing the consistency among the comparison results, the class of the input can be estimated reliably. We apply order learning to develop a facial age estimator, which provides the state-of-the-art performance. Moreover, the performance is further improved when the order graph is divided into disjoint chains using gender and ethnic group information or even in an unsupervised manner.
This paper presents an order learning method and applies it to age estimation from facial images. It designs a pairwise comparator that categorizes the ordering relationship between two instances into the ternary classes 'greater than', 'similar to', and 'smaller than'. Instead of directly estimating the class of each instance, it learns the pairwise ordering relationship between two instances. For age estimation from facial images, it uses a Siamese network as the feature extractor of the pairwise comparator, concatenates the feature maps of the dual inputs (the test instance and one of the multiple reference instances that are selected from the training data) and applies them to a ternary classifier. Given the softmax probability scores of the classifier results, it estimates the final class as the one that maximizes a consistency rule computed over indicator functions. The comparator is trained to minimize a comparator loss computed over the softmax probability scores. The paper also provides an extended version for multiple disjoint chains, where each chain may correspond to a higher-level attribute class, for example, gender or ethnic group. When there is no supervision available, it randomly partitions the training set and iteratively updates reliability scores and chain memberships. Novelty-wise, I consider the proposed solution to be satisfactorily innovative and in a different vein from the existing methods. One concern about the method is that it imposes the geometric ratio (log distance) between the class distances in age estimation, considering that, as stated in the paper, the difference between 5- and 10-year-old instances is easier to detect than between 65- and 70-year-old instances. However, as far as can be understood from the provided discussion, the compared SOTA methods are not retrained or fine-tuned in the same manner. This raises the question of whether the slight performance improvement is a result of the geometric ratio, in particular when computing the cumulative score CS. Another concern is that the performance is not stellar, as the presented method underperforms in comparison to DRF (Shen et al. 2018) on MORPH II. The presentation of the results for FG-Net is neither complete nor included in the main paper, and results for CLAP2016 are missing. The discussion of the results in Table 8 does not seem fair, as it is not clear whether MV is retrained keeping the geometric ratio in mind. It might be the case that MV attains lower MAEs even for 15-69. The paper also misses the latest SOTA; for instance, BridgeNet [1] is mentioned in a sentence, yet its results are omitted from the tables. BridgeNet, according to its results, is on par with the proposed method. Also, the numbers reported in this paper and the numbers in [1] have a discrepancy, which causes confusion. [1] Wanhua Li et al. BridgeNet: A continuity-aware probabilistic network for age estimation, CVPR 2019.
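As a rough illustration of the consistency-maximizing decision described above, a hypothetical sketch. The paper's exact consistency rule, similarity threshold, and reference selection are not reproduced here, so treat this only as the general shape of the computation.

```python
# Hypothetical sketch: estimate the class (e.g. age) most consistent with the ternary
# comparator outputs against reference instances of known class. The threshold 'tau'
# and the hard-vote consistency rule are illustrative assumptions.
import numpy as np

def estimate_class(comp_labels, ref_classes, candidates, tau=3):
    # comp_labels[i] in {+1: 'greater', 0: 'similar', -1: 'smaller'} w.r.t. reference i
    scores = []
    for theta in candidates:
        diff = theta - ref_classes
        consistent = (((comp_labels == 1) & (diff > tau))
                      | ((comp_labels == 0) & (np.abs(diff) <= tau))
                      | ((comp_labels == -1) & (diff < -tau)))
        scores.append(consistent.sum())
    return candidates[int(np.argmax(scores))]

# e.g. estimate_class(labels, ref_ages, candidates=np.arange(0, 100))
```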
iclr_2020_HkgtJRVFPS
We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we can construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors.
Summary ------------- The paper proposes an approach, based on persistent homology (PH), to preserve certain topological structures of the input in the latent representations learned by an autoencoder. This is realized via an additional (i.e., in addition to reconstruction) loss term (optimized over mini-batches) which requires differentiating through the PH computation (a sketch of my reading of the 0-dim. construction is given at the end of this review). While this has been done before (e.g., Chen et al., Hofer et al.), the authors have certainly put an interesting spin on this. The theoretical part of the work deals with the issue of using mini-batches for PH computation and whether this computation is close to the computation on the full point cloud. Experiments and comparisons on multiple datasets are presented to demonstrate that the approach, e.g., preserves nesting relationships in the input. The paper is nicely written and the content is very well presented. There are questions here and there (see below), but I do think they can be answered. Major comments/remarks: ------------------------------------- My first question relates to the issue that the loss only incorporates 0-dim. information. The authors do remark that higher-dim. features can be included, but the results were similar. However, after thinking about this issue for quite some time, I am curious whether it is possible for the topological loss to be "zero" (so this term is perfectly optimized) while the encoder introduces, e.g., cavities in the data which were not present in the input (e.g., 1-dim. holes). Also, can you show formally (maybe this is trivial and I am not seeing it) that L_t = 0 would lead to 0 distance between the corresponding diagrams w.r.t. some common metric? A more formal treatment of the implications of the loss in Eq. (2) would certainly help. Another question that immediately comes to mind is whether the computation of VR PH in the input space (e.g., CIFAR 10) makes sense, as the authors rely on ||.||_2 if I understood this correctly. I would argue that the topology of the input is basically unknown, especially for images, and computing Euclidean distances among images, or vectorized images, does not make sense. For the nice results on the SPHERES data set it does, as the spheres are defined exactly using ||.||_2. If the VR PH in 0-dim. of the input is enforced upon the representations in the AE bottleneck, but the input topology is not captured well, then you might be enforcing something that you possibly do not want. Apart from that, it is known that the Euclidean distance degenerates quickly in high-dimensional spaces, see, e.g., Aggarwal et al., "On the Surprising Behavior of Distance Metrics in High Dimensional Space". Maybe this is also contributing to the fuzzy visualization of CIFAR-10 in Fig. 3 (apart from the low dimensionality of the bottleneck)? Also, maybe the authors could work out (in greater detail) the differences between their results from Thm. 1/2 and the results of Chazal et al. in "Subsampling Methods for Persistent Homology". From my point of view, the results in the paper only hold if you consider just a single batch, right? I mean, if the loss is computed from the batch and a gradient update is performed, Z^{m} will change (as the encoder changes as a result of the update), while the input does not. Finally, how were the KL divergence measures in Table 1 computed, as you need a density estimate of the input as well, not just for the representation space, right? Is this not a very crucial issue in the input space?
If so, how reliable are the numbers presented for KL_{0.01}, etc., given that the differences are sometimes extremely small? Minor comments ----------------------- Sec. 6: We presented a topological autoencoders -> We presented a topological autoencoder Overall, I think this is a nicely done paper, but with quite a few question marks in many places. I do think this is always the case for something new, though, and actually a good thing.
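To make the 0-dim. discussion concrete, here is my reading of the construction in code form, written with numpy/scipy for clarity; in training the selected distances would have to be recomputed inside the autodiff framework so that gradients flow into Z. The pairing-via-minimum-spanning-tree shortcut and the exact symmetric form of the loss are my assumptions, not the authors' implementation.

```python
# 0-dim Vietoris-Rips persistence of a point cloud is determined by the edges of a
# minimum spanning tree of its distance matrix; the topological term then asks the
# corresponding edge lengths in input space X and latent space Z to agree, symmetrically.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edges(D):
    # D: dense (n, n) distance matrix; zero off-diagonal entries would be treated as
    # missing edges, which only matters for exactly duplicated points
    T = minimum_spanning_tree(D).tocoo()
    return T.row, T.col

def topo_loss(X, Z):
    DX, DZ = squareform(pdist(X)), squareform(pdist(Z))
    ix, jx = mst_edges(DX)            # "persistence pairing" of the input mini-batch
    iz, jz = mst_edges(DZ)            # "persistence pairing" of the latent mini-batch
    return (((DX[ix, jx] - DZ[ix, jx]) ** 2).mean()
            + ((DZ[iz, jz] - DX[iz, jz]) ** 2).mean())
```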
iclr_2020_BJlS634tPr
Differentiable architecture search (DARTS) provided a fast solution in finding effective network architectures, but suffered from large memory and computing overheads in jointly training a super-net and searching for an optimal architecture. In this paper, we present a novel approach, namely Partially-Connected DARTS, by sampling a small part of super-net to reduce the redundancy in exploring the network space, thereby performing a more efficient search without compromising the performance. In particular, we perform operation search in a subset of channels while bypassing the held out part in a shortcut. This strategy may suffer from an undesired inconsistency on selecting the edges of super-net caused by sampling different channels. We solve it by introducing edge normalization, which adds a new set of edge-level hyper-parameters to reduce uncertainty in search. Thanks to the reduced memory cost, PC-DARTS can be trained with a larger batch size and, consequently, enjoy both faster speed and higher training stability. Experimental results demonstrate the effectiveness of the proposed method. Specifically, we achieve an error rate of 2.57% on CIFAR10 within merely 0.1 GPU-days for architecture search, and a state-of-the-art top-1 error rate of 24.2% on ImageNet (under the mobile setting) within 3.8 GPU-days for search. Our code has been made available at https://www.dropbox.com/sh/on9lg3rpx1r6dkf/AABG5mt0sMHjnEJyoRnLEYW4a?dl=0.
--- revised score. rebuttal clears my concerns. --- Summary: The paper proposes a partially connected differentiable architecture search (PC-DARTS) technique that uses a variant of channel dropout for each node's output feature maps, and a weighted summation when concatenating all previous nodes. The searched architectures on CIFAR-10 and ImageNet seem to outperform those discovered by the original DARTS; however, the results are not directly comparable due to the slight change of search space. Introducing this edge normalization is a novel contribution, but it is more like a trick to obtain a better search space than part of PC-DARTS itself. My main concerns are the incremental novelty and that the experiments rely heavily on one search run, especially since the search space is not the same as the baseline DARTS. I do not think the current version is ready for ICLR, but I am looking forward to seeing the authors' rebuttal and I am willing to revise my review accordingly. Main concerns - Incremental novelty about channel sampling. Doing edge normalization in PC-DARTS is indeed novel; however, the channel sampling (abbr. PC for partial channel connection) is not. Dropout has been widely adopted in deep learning training since AlexNet. In NAS with parameter sharing, Pham et al. already exploit channel dropout, as shown in the ENAS function "def drop_path" (https://github.com/melodyguan/enas/blob/master/src/cifar10/image_ops.py). It is true that previous works treated it as just another hyper-parameter and did not provide deeper insight into this term, but it is not correct to say in Section 3.4 "Channel sampling ... has never been studied in prior work". From my perspective, the key difference of channel sampling is that the retained channel number is always fixed to K and the non-selected channels are not zeroed, whereas dropout usually retains channels only with probability K / total_channel and multiplies non-selected features by a zero constant (see the sketch at the end of this review). Thus, I suggest the authors provide additional experiments as in Table 3 to compare the original drop-path with the proposed channel sampling. Considering the test error drops from 3% to 2.67% when using PC, it would be more convincing to show that the original drop-path with probability K / total_channel yields a smaller drop, to evidence the effectiveness of the proposed sampling. - Proposed edge normalization is not a new sampling policy but a new search space. To my understanding, this edge normalization is effectively a change to the search space rather than to the sampling policy; it generalizes to many other policies as well and can be a substantial contribution to the NAS community. However, under the current experimental setting, it is hard to isolate whether the improvement is from this new space or from the channel sampling, as detailed later. - About the motivation. Throughout the paper, in the abstract, introduction, section 3.2 and section 4.4, the authors claim that the larger batch size is particularly important for the stability of architecture search, which is not well studied and lacks references. From Table 4, it is hard to tell whether the stability comes from the larger batch size or from the proposed partial channel sampling. - Questions about experiments 1. The comparison to the baseline is not fair. As in Section 4.2, the CIFAR-10 search is different from the original DARTS and P-DARTS in the following manner. The batch size is changed from 64 (in DARTS) / 96 (in P-DARTS) to 256, the super-net is frozen for the first 15 epochs, and introducing the edge normalization parameter \beta_{i,j} increases the search space.
With all these changes, it is quite hard to isolate the effectiveness of the proposed PC-DARTS. Two possible simple experiments, using the original DARTS space and training setting, are: 1) do not update \beta but use a fixed initialization in which all \beta are the same (to mimic the original DARTS concatenation); 2) add \beta to the original DARTS as well and re-run 1). It is completely reasonable to me that the contribution of this paper is introducing a novel edge normalization that is simple and effective in improving DARTS-based approaches. If so, the authors could revise the conclusion easily. However, at the very least, the experimental comparison should be fair. 2. In the original DARTS, the error drops from 3% with the first-order gradient to 2.76% with the second-order one; will this trend occur with PC-DARTS? 3. Robustness Recent work on evaluating neural architecture search reveals that NAS algorithms are sensitive to random initialization [1,2] and to the search space [3]; this in general leads to a notorious reproducibility problem for current NAS and shows it is not reasonable to only compare final performances on proxy tasks over **one** searched architecture. However, in the stability study in Section 4.4.3, multiple runs are still over the same architecture discovered in earlier experiments. In Section 4.4.2, the paper mentions that the search was run multiple times, yet the reported results in Table 3 are for a single run, as indicated by the CIFAR-10 no-PC, no-EN error of 3.00 +- 0.14, which is identical to the result in Table 1 for DARTS (1st-order). Could the authors report the results with at least 3 different initializations, and possibly release the seeds? This would significantly strengthen the evidence for the effectiveness of the proposed approach. Minor comments - According to Section 4.4.1 and Figure 3, changing K from 1 to 8, the search cost drops significantly. Does this mean the batch size in the ablation study is changing all the time? How can we know whether the test error is reduced due to the sampling ratio or to the batch size? Typos 1. Table 2, caption below the table: \dag is not aligned with the one used in the table. --- reference --- [1] Li and Talwalkar, Random search and reproducibility of neural architecture search, UAI'19 [2] Sciuto et al., Evaluating the search phase of neural architecture search, arxiv'19 [3] Radosavovic et al., On Network Design Spaces for Visual Recognition, ICCV'19.
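To make the drop-path vs. fixed-K channel-sampling distinction raised in the 'Incremental novelty' point concrete, a small sketch; the mixed operation is left abstract, and restoring the original channel ordering below stands in for whatever reordering or channel shuffle the paper actually uses.

```python
# Sketch of partial channel connection: exactly C/K channels are processed by the mixed
# operation while the rest bypass it unchanged; dropout/drop-path would instead zero
# channels independently with some probability. 'mixed_op' is an abstract placeholder
# assumed to preserve the number of channels.
import torch

def partial_channel_forward(x, mixed_op, K=4):
    B, C, H, W = x.shape
    keep = C // K
    perm = torch.randperm(C, device=x.device)
    sel, rest = perm[:keep], perm[keep:]
    out = torch.cat([mixed_op(x[:, sel]),        # operation search on the sampled slice
                     x[:, rest]], dim=1)         # shortcut for the held-out channels
    # put channels back in their original order (a simplification of channel shuffling)
    return out[:, torch.argsort(torch.cat([sel, rest]))]
```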
iclr_2020_BklEF3VFPB
Domain adaptation tackles the problem of transferring knowledge from a label-rich source domain to an unlabeled or label-scarce target domain. Recently domain-adversarial training (DAT) has shown promising capacity to learn a domain-invariant feature space by reversing the gradient propagation of a domain classifier. However, DAT is still vulnerable in several aspects including (1) training instability due to the overwhelming discriminative ability of the domain classifier in adversarial training, (2) restrictive feature-level alignment, and (3) lack of interpretability or systematic explanation of the learned feature space. In this paper, we propose a novel Max-margin Domain-Adversarial Training (MDAT) by designing an Adversarial Reconstruction Network (ARN). The proposed MDAT stabilizes the gradient reversing in ARN by replacing the domain classifier with a reconstruction network, and in this manner ARN conducts both feature-level and pixel-level domain alignment without involving extra network structures. Furthermore, ARN demonstrates strong robustness to a wide range of hyper-parameter settings, greatly alleviating the task of model selection. Extensive empirical results validate that our approach outperforms other state-of-the-art domain alignment methods. Additionally, the reconstructed target samples are visualized to interpret the domain-invariant feature space which conforms with our intuition.
###Summary### This paper proposes Max-margin domain adversarial training (MDAT) to tackle the problem of transferring knowledge from a label-rich source domain to an unlabeled target domain. This is achieved by designing an adversarial reconstruction network. The proposed MDAT stabilizes the gradient by replacing the domain classifier with a reconstruction network. The motivation of the proposed network is based on the observation that traditional domain-adversarial training is vulnerable in the following aspects: 1) the training procedure of the domain discriminator is unstable, 2) it only considers feature-level alignment, 3) it lacks an interpretable explanation for the learned feature space. In the proposed method, the Adversarial Reconstruction Network (ARN) consists of a shared feature extractor, a label predictor, and a reconstruction network. The reconstruction network only focuses on reconstructing samples on the source domain and pushing the target domain away from a margin. The feature extractor tries to confuse the decoder by learning to reconstruct samples on the target domain (a hypothetical sketch of this objective is given at the end of this review). The paper performs experiments on several domain adaptation tasks on digit datasets. The experimental results demonstrate the effectiveness of the proposed method over several baselines such as DANN, ADDA, CyCADA, CADA, etc. The paper also provides empirical analyses, such as t-SNE embeddings and loss plots, to illustrate the effectiveness of the proposed approach. ### Novelty ### The model proposed in this paper is extended from the domain adversarial training approach. To stabilize the gradient, the model replaces the domain classifier with a reconstruction network. In this way, the discriminator only discriminates the reconstructed data from the source domain. This idea is interesting and provides some novelty. ###Clarity### Overall, the paper is well organized and logically clear. The claims are well-supported by the experiments. The images are well-presented and well-explained by the captions and the text. ###Pros### 1) The paper proposes a Max-margin based approach to tackle domain adaptation. Instead of leveraging a domain discriminator to discriminate the source from the target, this paper utilizes a reconstructor to push the target domain far away from the margin. The idea is interesting and instructive for the domain adaptation research community. 2) The experimental results on the digit benchmarks demonstrate the effectiveness of the proposed method over other baselines, including the most state-of-the-art ones. 3) The paper provides many analyses to demonstrate the effectiveness of the proposed method. ###Cons### 1) The experimental part of this paper is weak. The paper only provides results on digit recognition experiments, which is not enough to demonstrate the effectiveness and robustness of the proposed approach. Further experimental results on image recognition or NLP tasks are desired. It would also be interesting to see how the proposed method performs on large-scale datasets such as the DomainNet and Office-Home datasets: DomainNet: Moment Matching for Multi-Source Domain Adaptation, ICCV 2019. http://ai.bu.edu/DomainNet/ Office-Home: Deep Hashing Network for Unsupervised Domain Adaptation, CVPR 2017. http://hemanthdv.org/OfficeHome-Dataset/ 2) The organization and presentation of this paper should be polished. Based on the summary, pros, and cons, the rating I am giving now is weak reject.
I would like to discuss the final rating with the other reviewers and the ACs.
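For concreteness, a purely hypothetical sketch of the adversarial reconstruction objective as summarized above; the loss weights, the margin value, and the exact form of the margin/hinge term are my assumptions, not the authors' equations.

```python
# Hypothetical sketch: the decoder reconstructs source samples well but is pushed to
# reconstruct target samples poorly (beyond a margin m), while the feature extractor
# tries to make target features reconstructable and to solve the source task.
import torch
import torch.nn.functional as F

def decoder_loss(dec, feat_s, x_s, feat_t, x_t, m=1.0):
    rec_s = F.mse_loss(dec(feat_s), x_s)                 # fit the source domain
    rec_t = F.mse_loss(dec(feat_t), x_t)
    return rec_s + torch.clamp(m - rec_t, min=0.0)       # push the target past the margin

def feature_extractor_loss(dec, feat_t, x_t, clf_logits_s, y_s, lam=0.1):
    task = F.cross_entropy(clf_logits_s, y_s)            # supervised source classification
    align = F.mse_loss(dec(feat_t), x_t)                 # confuse the decoder on the target
    return task + lam * align
```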
iclr_2020_rJlcLaVFvB
Hierarchical Sparse Coding (HSC) is a powerful model to efficiently represent multi-dimensional, structured data such as images. The simplest solution to solve this computationally hard problem is to decompose it into independent layerwise subproblems. However, neuroscientific evidence would suggest inter-connecting these subproblems as in the Predictive Coding (PC) theory, which adds top-down connections between consecutive layers. In this study, a new model called 2-Layers Sparse Predictive Coding (2L-SPC) is introduced to assess the impact of this inter-layer feedback connection. In particular, the 2L-SPC is compared with a Hierarchical Lasso (Hi-La) network made out of a sequence of Lasso layers. The 2L-SPC and a 2-layer Hi-La network are trained on 4 different databases and with different sparsity parameters on each layer. First, we show that the overall prediction error generated by 2L-SPC is lower thanks to the feedback mechanism as it transfers prediction error between layers. Second, we demonstrate that the inference stage of the 2L-SPC is faster to converge than for the Hi-La model. Third, we show that the 2L-SPC also accelerates the learning process. Finally, the qualitative analysis of both models' dictionaries, supported by their activation probabilities, shows that the 2L-SPC features are more generic and informative.
This work proposed a new model called Sparse Deep Predictive Coding, which introduced top-down connections between consecutive layers, to improve the solutions to hierarchical sparse coding (HSC) problems. Instead of decomposing the HSC problem into independent subproblems, the proposed model added a new term to the loss function, which represents the influence of the later layer on the current layer. #Pros: -- The proposed model adopted the idea from predictive coding and came up with a relatively novel idea for HSC problems. -- The experiments are solid. The experiments evaluated the proposed method with different hyper-parameter settings and three real-world datasets. -- The figures in the result sections are well designed and concise. #Cons: -- The mathematical description of the main problem and the proposed model is not clear. For example, the dimensionality of the variables in Eq.(1) is not clarified. -- The test procedure is not clear. How the internal state variables are obtained for the test set is not clarified. -- The proposed model was only compared with a basic Hierarchical Lasso network. No state-of-the-art methods are included as baselines. #Detailed comments: (1) The proposed model is named sparse DEEP predictive coding; however, the experiments only considered SDPC and Hi-La networks with 2 layers. I wonder whether a deeper structure would improve the performance. (2) For the structure shown in Fig.1, the decoding dictionaries are $D^T_i$, but I am confused as to why the encoding dictionaries are reciprocal to the decoding dictionaries. Does it come from the optimization updates shown in Eq.(3)? (3) According to Eq.(1), $x$ is a vector and $D$ is a 2d matrix. However, the real inputs in the experiments are images and $D$ is a convolutional filter with 4 dimensions. How are the matrices reshaped? (4) For sections 2.2 and 2.3, the number/index of samples is not shown in the loss function for training. The loss should be over the whole training set. Besides, the test procedure is not clarified. (5) The number of FISTA iterations for the SDPC and Hi-La networks is shown to compare the rate of convergence. However, considering that both models are solving a lasso-type regression problem, I would suggest using coordinate descent for optimization. (6) For the main result of prediction error, why is the "global prediction error" more important than the reconstruction error? Is the first-layer prediction error the reconstruction error? If yes, Fig.2 shows that Hi-La has a lower prediction error compared to SDPC for the first layer. (7) Two minor comments on writing: (a) It would be better to have a separate section for 2.5 since it describes the dataset and is not related to the proposed model. (b) A typo of "neuronal implementation" exists in the introduction section.
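As a minimal illustration of the layer coupling at issue, a sketch of the two-layer objective with and without the top-down term; dense dictionaries, the 0.5 factors, and the variable names are simplifications of the convolutional model used in the paper.

```python
# Two-layer sparse coding losses: l1 ties the input to the first-layer code, l2 ties the
# first-layer code to the second-layer code. Dense matrices are used here for clarity.
import numpy as np

def layer_losses(x, g1, g2, D1, D2, lam1, lam2):
    l1 = 0.5 * np.sum((x - D1.T @ g1) ** 2) + lam1 * np.sum(np.abs(g1))
    l2 = 0.5 * np.sum((g1 - D2.T @ g2) ** 2) + lam2 * np.sum(np.abs(g2))
    return l1, l2

# Hi-La (independent Lasso layers): infer g1 by minimizing l1 alone, then g2 from l2.
# Predictive-coding variant (SDPC / 2L-SPC): infer g1 by minimizing l1 + l2, so the
# top-down prediction D2.T @ g2 feeds back into the first layer's inference.
```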
iclr_2020_HylpqA4FwS
Recurrent neural networks (RNNs) are particularly well-suited for modeling longterm dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate. While a number of works attempt to mitigate this effect through gated recurrent units, skip-connections, parametric constraints and design choices, we propose a novel incremental RNN (iRNN), where hidden state vectors keep track of incremental changes, and as such approximate state-vector increments of Rosenblatt's (1962) continuous-time RNNs. iRNN exhibits identity gradients and is able to account for long-term dependencies (LTD). We show that our method is computationally efficient overcoming overheads of many existing methods that attempt to improve RNN training, while suffering no performance degradation. We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks.
Update: the authors have addressed my issues below and I have raised my score to 8. *** This paper on recurrent neural networks goes back to Rosenblatt's continuous-time dynamics model and uses a discretised version of that equation (equation (1) in the paper) to build an incremental version of the RNN, called incremental RNN, where the transition from hidden state h_{k-1} at step k-1 to the next step's h_k is done using small incremental updates until the system achieves equilibrium. It claims that it manages to solve the vanishing gradient problem by keeping all gradients \frac{\partial h_k}{\partial h_{k-1}} equal to minus the identity matrix. The algorithm is then extensively evaluated on a large number of tasks and compared with plain RNNs, LSTMs, and two recently published papers on antisymmetric RNN and FastRNN. I need to admit that after reading the paper twice, I am not sure I understand how the method works exactly (how does inserting intermediary steps and variables g_0, g_1, ... g_T enable the system to reach equilibrium: is there an iterative evaluation until convergence?) and, more worryingly, how the single-step SiRNN differs from a normal RNN with an extra residual connection. According to equation (5), for T=1 and g_0=0, we have: g_T = g_1 = \eta_k^1 (\phi(U h_{k-1} + W x_k + b) - \alpha h_{k-1}). If the gradients are vanishing in the normal RNN, why would they not vanish here for T=1? Propositions 1 and 2 are for the case K=\infty, and I could not understand the proof of theorem 1 that shows why \frac{\partial h_k}{\partial h_{k-1}} = - I. This seems to be the major contribution of the paper and should be given prominence. What is missing is a clear explanation, as in (Bengio et al., 1994), of the identity gradient and of how the algorithm works. These questions could be solved by including code or pseudo-code explaining how to actually implement incremental RNNs. There are also several important papers recently published that have approached the problems of continuous-time dynamics and relaxation of the hidden state to equilibria. * The paper does not mention Neural ODEs [2][3] at all, where the state flows in a continuously differentiable way thanks to the continuous-time residual network ODE formulation. Moreover, isn't the idea of inserting a relaxation to equilibrium using ODEs already implemented in the ODE-RNNs [3]? * How do incremental updates relate to Adaptive Computation Time [4]? For this reason, I am currently tending to reject the paper, but am open to changing my score upon clarifications and links to other similar work. Additional remarks: The first paragraph of the paper explains the Elman RNN, not RNNs in general. Please cite [1] alongside Bengio et al. (1994) for the problem of the vanishing gradient. Define alpha in equation (1). The notation k and K is very confusing. Blue vs. green in Figure 2 is hard to read, and where is the new initialisation? Why do you add h_k^K to h_{k-1} in equation (8)? I thought it was g_k^K. Keep the same colours for all experiments in Figure 4. [1] Hochreiter (1991) "Untersuchungen zu dynamischen neuronalen Netzen" [2] Chen, Rubanova, Bettencourt & Duvenaud (2018) "Neural Ordinary Differential Equations" [3] Rubanova, Chen & Duvenaud (2019) "Latent odes for irregularly-sampled time series" [4] Graves (2016) "Adaptive computation time for recurrent neural networks"
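For what it is worth, a tiny sketch of the comparison raised above, using the quoted T=1 update and treating eta and alpha as scalars (a simplification).

```python
# The single-step (T=1) update as quoted in this review, next to a plain RNN step with a
# residual connection; eta and alpha are scalars here for simplicity.
import numpy as np

def sirnn_step_T1(h_prev, x, U, W, b, eta=1.0, alpha=1.0):
    g1 = eta * (np.tanh(U @ h_prev + W @ x + b) - alpha * h_prev)
    return h_prev + g1                  # reading equation (8) as h_k = h_{k-1} + g_k

def residual_rnn_step(h_prev, x, U, W, b):
    return h_prev + np.tanh(U @ h_prev + W @ x + b)
```

Under this reading, eta = alpha = 1 collapses the update to a plain RNN step, tanh(U h_{k-1} + W x_k + b), while alpha = 0 gives exactly the residual step, which is one way of making the question about the T=1 case concrete.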
iclr_2020_SJgCEpVtvr
Graph neural networks (GNN) such as GCN, GAT, MoNet have achieved state-of-the-art results on semi-supervised learning on graphs. However, when the number of labeled nodes is very small, the performance of GNNs degrades dramatically. Self-training has proved to be effective for resolving this issue; however, the performance of self-trained GCN is still inferior to that of G2G and DGI for many settings. Moreover, the additional model complexity makes it more difficult to tune the hyper-parameters and do model selection. We argue that the power of self-training is still not fully explored for the node classification task. In this paper, we propose a unified end-to-end self-training framework called Dynamic Self-training, which generalizes and simplifies prior work. A simple instantiation of the framework based on GCN is provided and empirical results show that our framework outperforms all previous methods including GNNs, embedding based methods and self-trained GCNs by a noticeable margin. Moreover, compared with standard self-training, hyper-parameter tuning for our framework is easier.
The paper proposes an approach for learning graph convolutional networks to infer labels on the nodes of a partially labeled graph when only a limited number of labeled nodes is available. The proposal is inspired by Graph Convolutional Networks, with the idea of overcoming the major drawback of these models, which lies in their behavior when the coverage of labeled nodes is limited; this implies using deeper versions of the model, at the price of what the authors call the over-smoothing problem. The main idea here consists in relying on self-training to get a better coverage of labeled nodes, enabling learning with less deep models; this translates into a simple and intuitive algorithm. Using self-training is not new for GCNs, but the way it is used here, adaptively computing a threshold for incorporating pseudo-labels and using weights according to the confidence of predictions, is new. Experimental results are reported on citation datasets and compared with many baselines; they show similar results to the baselines when the coverage increases up to 50 labeled nodes per class, but the method brings significant improvements when the coverage is low (e.g., only a few, <20, labels per class). Although the difference with previous approaches does not look like a huge step, the method seems to be quite justified empirically and achieves really good results w.r.t. the state of the art.
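To make the adaptive-threshold idea concrete, a purely hypothetical sketch of one self-training round; the actual threshold schedule, weighting scheme, and stopping rule of the paper are not reproduced here.

```python
# Hypothetical sketch of pseudo-label selection with an adaptively chosen confidence
# threshold and confidence-based weights; the quantile rule is an illustrative choice.
import numpy as np

def select_pseudo_labels(probs, labeled_mask, quantile=0.9):
    # probs: (N, C) class probabilities from the current GCN; labeled_mask: (N,) booleans
    conf = probs.max(axis=1)                               # prediction confidence per node
    thresh = np.quantile(conf[~labeled_mask], quantile)    # adaptive threshold this round
    pick = (~labeled_mask) & (conf >= thresh)
    pseudo_y = probs.argmax(axis=1)
    weights = conf * pick                                  # weight pseudo-labels by confidence
    return pick, pseudo_y, weights
```

The selected nodes would then be added to the training loss with their weights and the model retrained (or, as the framework proposes, the whole procedure run end-to-end).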
iclr_2020_B1esx6EYvr
We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can learn the first few layers of a convolutional network from a single image as well as using millions of images and manual labels, provided that strong data augmentation is used. However, for deeper layers the gap with manual supervision cannot be closed even if millions of unlabelled images are used for training. We conclude that: (1) the weights of the early layers of deep networks contain limited information about the statistics of natural images, that (2) such low-level statistics can be learned through self-supervision just as well as through strong supervision, and that (3) the low-level statistics can be captured via synthetic transformations instead of using a large image dataset.
Update 11/21: With the additional experiments (testing a new image, testing fine-tuning of hand-crafted features), additions to related work, and clarifications, I am happy to raise my score to accept. Overall, I think this paper is a nice sanity check on recent self-supervision methods. In the future, I am quite curious about how these mono-image learned features would fare on more complex downstream tasks (e.g., segmentation, keypoint detection) which necessarily rely less on texture.

Summary
This paper seeks to understand the role of the *number of training examples* in self-supervised learning with images. The usefulness of the learned features is evaluated with linear probes at each layer for either ImageNet or CIFAR image classification. Empirically, they find that a single image along with heavy data augmentation suffices for learning the first 2-3 layers of convolutional weights, while later layers improve with more self-supervised training images. The result holds for three state-of-the-art self-supervised methods, tested with two single-image training examples. In my view, learning without labels is an important problem, and it is interesting what can be learned from a single image and simple data augmentation strategies.

Comments / Questions
It seems to me that for completeness, Table 4 should include the result of training a supervised network on top of random conv1/2 and Scattering network features, because this experiment is actually testing what we want: the performance of the features when fine-tuned for a downstream task. So, for example, even if a linear classifier on top of Scattering features does poorly, if downstream fine-tuning results in the same performance as another pre-training method, then Scattering is a perfectly fine approach for initial features. Could the authors please either correct this logic or provide the experiments? Further, it seems that the results in Table 4 might be a bit obscured by the size of the downstream task dataset. I wonder if the learned features require fewer fully supervised images to obtain the same performance on the downstream task?

Can the authors clarify how the neural style transfer experiment is performed? The method from Gatys et al. requires features from different layers of the feature hierarchy, including deeper layers. Are all these features taken directly from the self-supervised network, or is it fine-tuned in some way?

While I appreciate the computational burden of testing more images, it does feel that Images A and B are quite cherry-picked in being very visually diverse. Because of this, it seems like a precise answer to what makes a good single training image remains unknown. I wonder how feasible it is to find a proxy metric that corresponds to the performance on downstream tasks, which is expensive to compute. It might be interesting to try to generate synthetic images (or modify real ones) that are good for this purpose and observe their properties.

I disagree with the claim of practicality in the introduction (page 2, top). While training on one image does reduce the burden of the number of images, the computational burden remains the same. And as mentioned above, it doesn’t seem likely that *any* image would work for this method. Finally, more images are needed to learn the deeper layers for the downstream task anyway.

The paper is well-written and clear.
iclr_2020_rkeIq2VYPr
Determinantal point processes (DPPs) are an effective tool to deliver diversity in multiple machine learning and computer vision tasks. Under the deep learning framework, DPP is typically optimized via approximation, which is not straightforward and has some conflicts with the diversity requirement. We note, however, there have been no deep learning paradigms to optimize DPP directly, since it involves matrix inversion that may result in computational instability. This fact greatly hinders the use of DPP on some specific objective functions where DPP would otherwise serve as a term to measure the feature diversity. In this paper, we devise a simple but effective algorithm to optimize the DPP term directly, expressing it with the L-ensemble in the spectral domain over the Gram matrix, which is more flexible than learning on parametric kernels. By further taking into account additional geometric constraints, our algorithm seeks to generate valid sub-gradients of the DPP term in cases where the DPP Gram matrix is not invertible (no gradients exist in this case). In this sense, our algorithm can be easily incorporated into multiple deep learning tasks. Experiments show the effectiveness of our algorithm, indicating promising performance for practical learning problems.
Summary: the authors introduce a method to learn a deep-learning model whose loss function is augmented with a DPP-like regularization term to enforce diversity within the feature embeddings.

Decision: I recommend that this paper be rejected. At a high level, this paper is experimentally focused, but I am not convinced that the experiments are sufficient for acceptance.

****************************

My main concerns are as follows:
- Many mathematical claims should be more carefully stated. For example, the authors extend the discrete DPP formulation to continuous space. It is not clear to me, based on the choice of the kernel function embedding, that the resulting P_k(X) is a probability (Eq. 1). If it is not (using a DPP-based formulation as a regularizer does not require a distribution), the authors should clarify that fact; more generally, the authors should be more careful throughout the paper (for example, det=0 if features are proportional, not necessarily equal; the authors inconsistently switch between the DPP kernel L and the marginal kernel K throughout the computations).
- The authors do not describe their baselines for several experiments. In Tables 1, 2, 3, the baseline is never described (I assume it's the same setup without regularization); I did not find a description of DCH (Tab. 4) in the paper (Deep Cauchy Hashing?). The mAP-k metric should also be defined. Furthermore, the authors do not report standard deviations for their experiments.
- A key consideration when using DPPs is their computational cost: most operations involving them require SVD (which seems to be used in this work), matrix inversion, and often both. This, unsurprisingly, limits the applications of DPPs, and has driven a lot of research focused on improving DPP overhead. I would like to see more discussion in this paper focused on the extent to which the DPP's computational overhead remains tractable, and which methods were used (if any) to alleviate the computational burden (see also the small numerical note at the end of this review).
- Finally, the paper itself appears to be somewhat incomplete: sentences are missing or incomplete (Section 4), and numbers are missing in some tables (Table 5).

***********************

Questions and comments for the authors:
- When computing the proper sub-gradient, are you computing the subgradient as inv(L + \hat L)?
- You state that by avoiding matrix inversion, your method is more feasible. However, it seems like your method requires SVD, which is also O(N^3); could you please provide more detail on this?
- Could you report the number of trials and standard deviations for your experiments?
- Do you have any insight into why DPPs do more poorly than the DCH baseline in Table 4 for mAP-k metrics?
- You might be able to save space by decreasing the space allocated to the exponentiated quadratic kernel.
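A small numerical note to make the cost concern above concrete (this is my own toy check, not the authors' code): whether the quantity needed for the (sub)gradient of the log-determinant term is obtained through an eigendecomposition/SVD of L or through an explicit inverse, the result is the same matrix and both routes cost O(N^3) per step.

```python
import numpy as np

N = 4
A = np.random.randn(N, N)
L = A @ A.T + 1e-3 * np.eye(N)              # a toy PSD L-ensemble kernel

w, V = np.linalg.eigh(L)                    # "spectral" route: O(N^3)
inv_via_spectrum = V @ np.diag(1.0 / w) @ V.T

inv_direct = np.linalg.inv(L)               # direct inversion: also O(N^3)

assert np.allclose(inv_via_spectrum, inv_direct)
```

So working "in the spectral domain" changes numerical stability rather than the asymptotic overhead, which is why I would like the scalability discussion requested above to be expanded.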
iclr_2020_rkxs0yHFPH
Event-based neuromorphic systems promise to reduce the energy consumption of deep neural networks by replacing expensive floating point operations on dense matrices with low-energy, sparse operations on spike events. While these systems can be trained increasingly well using approximations of the backpropagation algorithm, this usually requires high-precision errors and is therefore incompatible with the typical communication infrastructure of neuromorphic circuits. In this work, we analyze how the gradient can be discretized into spike events when training a spiking neural network. To accelerate our simulation, we show that using a special implementation of the integrate-and-fire neuron allows us to describe the accumulated activations and errors of the spiking neural network in terms of an equivalent artificial neural network, allowing us to largely speed up training compared to an explicit simulation of all spike events. This way we are able to demonstrate that even for deep networks, the gradients can be discretized sufficiently well with spikes if the gradient is properly rescaled. This form of spike-based backpropagation enables us to achieve equivalent or better accuracies on the MNIST and CIFAR10 datasets than comparable state-of-the-art spiking neural networks trained with full-precision gradients. The algorithm, which we call SpikeGrad, is based on accumulation and comparison operations and can naturally exploit sparsity in the gradient computation, which makes it an interesting choice for spiking neuromorphic systems with on-chip learning capacities.
EDIT After Rebuttal: My understanding of the contributions of this paper has improved. I now increase my score to a weak accept.

This paper proposes a new backpropagation-based learning algorithm, "SpikeGrad", for the spike-based neural network paradigm. Simulating this algorithm on classical hardware would require a lot of time-steps. To circumvent this, the authors show how to construct a corresponding artificial neural net (that can be trained using traditional gradient-based algorithms) which is equivalent to the spiking neural net. Using this equivalence they simulate a large-scale SNN on many real-world datasets (the first paper to do so). In particular, they use MNIST and CIFAR-10 for this purpose. They show that training a fixed architecture using their method is comparable to other prior work which uses high-precision gradients to train it. They also show how to exploit the sparsity of the gradient in the backpropagation for SNNs.

This paper is hard to follow for someone not familiar with the background material. In particular, without looking at the prior literature it was hard to understand that the "integrate and fire neuron model" is essentially the feedforward mechanism for the SNN. I would suggest the authors make this a bit more explicit. Moreover, it would serve the structuring of the paper to have a formal "Preliminaries" section, where all known material goes. It was hard to discern what is new in this paper and what is from prior work, as these are mixed in Section 2. For instance, Section 2 states the "SpikeGrad" algorithm, but the main contribution (i.e., the backpropagation algorithm) only appears in the middle of this section. Likewise, I think Section 3 can be arranged better. In particular, the equivalence is a "formal" statement and thus could be stated as a theorem followed by a proof. This would also make it explicit what is meant by an "equivalent" network. In fact, it is still not clear to me at this point what that statement means. Could you please elaborate on this in the rebuttal?

Regarding the conceptual contribution of this paper, if I understood things correctly, the main claim is that they give a new way to train SNNs whose performance on MNIST and CIFAR-10 is comparable to other works. The second contribution is that they give the equivalence between ANNs and SNNs (the point above). It is also unclear to me what the point regarding the sparse gradient in the backpropagation in the experimental section is trying to make. Could you please clarify this in the rebuttal as well?

At this point, the writing of this paper leaves me with many unanswered questions that need to be addressed before I can make a more informed decision. Please provide those in the rebuttal, and based on those I will update my final score. But with my current understanding of this paper, I think it does not meet the bar. The contributions in this paper do not seem too significant.
iclr_2020_rJgzzJHtDB
Deep networks were recently suggested to face the odds between accuracy (on clean natural images) and robustness (on adversarially perturbed images) (Tsipras et al., 2019). Such a dilemma is shown to be rooted in the inherently higher sample complexity and/or model capacity (Nakkiran, 2019) required for learning a high-accuracy and robust classifier. In view of that, given a classification task, growing the model capacity appears to help draw a win-win between accuracy and robustness, yet at the expense of model size and latency, therefore posing challenges for resource-constrained applications. Is it possible to co-design model accuracy, robustness and efficiency to achieve their triple wins? This paper studies multi-exit networks associated with input-adaptive efficient inference, showing their strong promise in achieving a "sweet point" in co-optimizing model accuracy, robustness and efficiency. Our proposed solution, dubbed Robust Dynamic Inference Networks (RDI-Nets), allows each input (either clean or adversarial) to adaptively choose one of the multiple output layers (early branches or the final one) to output its prediction. That multi-loss adaptivity adds new variations and flexibility to adversarial attacks and defenses, on which we present a systematical investigation. We show experimentally that by equipping existing backbones with such robust adaptive inference, the resulting RDI-Nets can achieve better accuracy and robustness, yet with over 30% computational savings, compared to the defended original models.
The paper exploits input-adaptive multiple early exits, an idea drawn from efficient CNN inference, in the field of adversarial attack and defense. It is well motivated by the dilemma between the large model capacity required by accurate and robust classification, and the resulting model complexity as well as inference latency. Overall, this paper presents an interesting perspective, with strong results.

The usage of input-adaptive inference reduces the average inference complexity, without conflicting with the "larger capacity" assumption for co-winning robustness and accuracy. Since no literature has discussed attacks for a multi-exit network, the authors constructed three attack forms, and then utilized adversarial training to defend correspondingly. The design of the Max-Average Attack is particularly smart: it balances between "benefiting all" and "maximally boosting one" (its result is also convincingly good).

The authors presented three groups of experiments, from relatively heavy networks (ResNet38) to very compact ones (MobileNet-V2). It is especially meaningful to see their strategy work on MobileNet too (though the computational saving is a bit less, no surprise). The authors also did due diligence in the ablation study and in comparing with recent alternatives.

Several points that could be addressed to potentially improve the paper:
- The authors should make it clearer that their "triple win" is not about constructing a light-weight model that is both accurate and robust. It is instead about, given an accurate and robust yet heavy-weight model, how to reduce its AVERAGE computational load per sample inference, by routing "easier" examples to earlier exits.
- Can the authors think of and construct more diverse and stronger attacks for RDI-Nets? For example, it would be interesting to attack RDI-Nets (e.g., defended by Max-Average) with randomized weighted combinations of single attacks (a rough sketch of what I have in mind is given after this list). Note that, at inference time, the same "randomized combination" cannot also be adopted as a defense, because an input always wants to exit as early as possible for efficiency gains.
- The advantage over ATMC is not obvious: slightly lower TA, slightly higher ATA, and slightly more parameters. Could the authors try to align their parameters more closely (to the extent possible)?
- A missing related work: "Shallow-Deep Networks: Understanding and Mitigating Network Overthinking", ICML 2019. It also discussed how to append early exits to pre-trained backbones.
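To make the "randomized weighted combinations" suggestion above concrete, here is a rough PyTorch-style sketch of the attack I have in mind. It is my own illustration, and it assumes a hypothetical interface in which the model returns one set of logits per exit; it is not the authors' code:

```python
import torch
import torch.nn.functional as F

def randomized_combo_pgd(model, x, y, eps, alpha, steps):
    # L_inf PGD where every step maximizes a freshly re-weighted sum of per-exit losses.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits_per_exit = model(x + delta)           # assumed: list of logits, one per exit
        w = torch.rand(len(logits_per_exit))
        w = (w / w.sum()).tolist()                   # random convex weights, resampled each step
        loss = sum(w_i * F.cross_entropy(logits, y)
                   for w_i, logits in zip(w, logits_per_exit))
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)                  # stay inside the L_inf ball
            delta.copy_(torch.clamp(x + delta, 0.0, 1.0) - x)   # keep pixels in [0, 1]
    return (x + delta).detach()
```

The weights could equally be resampled per example rather than per step; the point is simply that a multi-exit model exposes a whole distribution of loss weightings to an attacker, not only a few fixed ones.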
iclr_2020_Hke_f0EYPH
Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees. We propose that many common certified defenses can be viewed under a unified framework of regularization. This unified framework provides a technique for comparing different certified defenses with respect to robust generalization. In addition, we develop a new regularizer that is both more efficient than existing certified defenses and can be used to train networks with higher certified accuracy. Our regularizer also extends to an L0 threat model and ensemble models. Through experiments on MNIST, CIFAR-10 and GTSRB, we demonstrate improvements in training speed and certified accuracy compared to state-of-the-art certified defenses.
At first glance, this paper proposes an interesting refinement of interval bound propagation (IBP). However, it has a major flaw in its empirical evaluation, and the proposed "theory" and "bounds" are also questionable and have many issues. In short, the main results of the paper in Figure 1 and Table 2 are problematic and not the right comparison, so they cannot justify the claim that the proposed method outperforms other state-of-the-art baselines like IBP. Specifically, when comparing to IBP, the certified error should be computed by IBP; however, the verification algorithm used in this paper performs extremely poorly on IBP-based models (giving vacuous bounds like 0%, and the authors are able to outperform this 0%). Under fair comparison metrics (Table 12), the proposed method is worse than the IBP baseline in almost all settings. I will explain in detail below why the proposed method does not work. Besides the empirical results, the "theory" developed in this paper also has several fundamental weaknesses (discussed in detail below) and lacks solid connections to robustness and to the proposed "bounds"; the proposed "bounds" are questionable, are not sound bounds, and can hardly be justified theoretically; so it is not surprising that they cannot outperform IBP, which is based on rigorous minimax robust optimization and sound over-approximations of neural networks.

On the positive side, the authors considered the ensemble of multiple IBP-trained models, as well as extending IBP to the L0 norm setting. Both of them are valid (but small) contributions, but they are not sufficient. Also, overall the writing of the paper is great and easy to follow and understand.

I really do not want to make the authors of this paper upset; in particular, the main author might be submitting a paper to ICLR for the first time or be an undergraduate student new to this field. However, I have to say this paper has significant flaws and should not be published. In particular, the wrong evaluation methodology used in this paper can be very misleading for newcomers to this field, and can misguide future research. I encourage the authors to read my detailed comments below and learn from the failure of the proposed method. If the authors can rephrase this paper significantly (especially, removing the entire Section 3), and emphasize the contributions of the ensemble or L0 perturbation, it might become a good paper for another venue.

My suggestions for improvements are below:
1. Be honest with your findings, do not try to hide the weaknesses of your method, and do not overclaim. In particular, the authors are aware of the problem that IBP-based models are better if evaluated under IBP bounds (in Table 12), but still make strong and wrong claims in the Introduction that the proposed method outperforms other state-of-the-art methods in certified accuracy.
2. For the ensemble part, consider more "smart" ensembles rather than directly adding them together. For example, we can consider balancing the accuracy and certified error of each model and choosing a blend of them. IBP is a strong method, and an ensemble of IBP can yield the best defense.
3. For the L0 robustness with IBP, it is not a significant contribution alone since it only converts the L0 norm to interval bounds at the very first layer. However, the authors can consider more interesting settings, like adversarial patches or masks (https://arxiv.org/pdf/1712.09665.pdf), which can be dealt with using similar techniques.
4. When evaluating a certified defense method, it is also good to conduct PGD attacks on the networks, to see how tight the certified bounds are. If the authors attack the models in Table 1, we can actually see that IBP-based models perform much better than the proposed method. From this, the authors should have realized that the verification method they used is not appropriate for evaluating IBP.
5. Evaluation on only 200 test data points is not sufficient. Certified accuracy is computed on the entire test set (10,000 examples) in almost all previous certified defense papers (Wong et al., 2018; Mirman et al., 2018; Gowal et al., 2018; Wang et al., 2018). The authors should use a proper implementation of a verification algorithm, like DiffAI (https://github.com/eth-sri/diffai), convex adversarial polytope (https://github.com/locuslab/convex_adversarial) or symbolic interval (https://github.com/tcwangshiqi-columbia/symbolic_interval). In my experience, on a single GPU they can verify small models over the entire dataset (10,000 examples) within a few minutes; large models may take a few hours, which is still quite reasonable. The verification method used in this paper is lesser-known and was probably implemented poorly and inefficiently. It is better to use a mature and well-accepted library.
6. A minor issue: the first paper that proposed IBP training is Mirman et al., ICML 2018 (where the "box" domain was used for training), not Gowal et al., 2018. So some sentences in the Introduction and Related Work are not accurate.

Now let's discuss the issues in this paper in detail, and let's focus on the empirical comparisons to IBP first. The authors made the main claim based on Table 2 and Figure 1, where the "certified accuracy" (more commonly referred to as "verified accuracy") for models trained using the proposed method seems to be higher than for other methods, especially IBP. "Certified accuracy" is a lower bound on accuracy under any norm-bounded perturbation (given a certain epsilon). Conversely, attack-based methods like PGD give an upper bound, as there can be stronger attacks that further decrease accuracy. There are many neural network verification methods to obtain certified accuracy; some of them can be particularly weak on certain models (giving vacuous lower bounds like 0%). Generally, you choose the best possible (and computationally feasible) verification method to verify the robustness of a model. For example, if verification algorithm A gives a certified accuracy of 10%, but algorithm B gives 90% for the same model, we should use 90%. As an analogy in the adversarial attack setting, you pick the strongest possible attack to evaluate robustness: a model that has high accuracy under weak FGSM attacks is not necessarily robust; conversely, a model that has low certified accuracy (even 0%) is not necessarily vulnerable, as the verification method can be particularly weak on this model.

In the original IBP training paper (Gowal et al., 2018), the certified error is computed efficiently using IBP, and the error is about 8% for MNIST (epsilon=0.3), and 68% for CIFAR (epsilon=8/255). My first-hand experience with IBP can confirm that it is very easy (without too much tuning effort) to get 10% certified error for MNIST and 73-75% for CIFAR, even using small models. These numbers translate to 90% certified accuracy on MNIST (eps=0.3) and 25-27% certified accuracy on CIFAR (eps=8/255). However, in Table 2 and Figure 1 of the paper the authors show 0% (!)
certified accuracy for IBP-trained models for both MNIST eps=0.3 and CIFAR eps=8/255, and their method outperforms this 0%. Unfortunately, the verification method ("cnncert") used in this paper performs extremely poorly on IBP-trained models (giving vacuous bounds like 0%); IBP-trained models should be certified using IBP bounds to give non-vacuous results. What Table 2 and Figure 1 really show is the weakness of the verification method used, rather than the true robustness of the model. What we really want to show here is how robust the models are, not how good a verification method is, so we need to use the best possible verification method; for IBP-trained models, using IBP for verification is almost mandatory since it not only gives tight bounds but is also much more efficient. The authors are aware of this problem: in the appendix, Table 12 (a table never discussed anywhere), they listed the IBP certified error for IBP-trained models. The MNIST numbers for IBP-trained models are close to those in the IBP paper (90% at eps=0.3), significantly better than their method in Table 2 (68%) or Table 12 (79%). The CIFAR numbers for IBP (22.5% certified accuracy at 8/255 in Table 12) are apparently de-tuned (in my experience IBP can easily do at least 25%, and Gowal et al. reported 32%), yet they are still better than the proposed method (less than 20% in Tables 2 and 12). So under the right metrics (IBP-trained models certified using IBP), even if the IBP models are de-tuned, they can outperform the proposed method by a large margin. The proposed method only makes IBP worse under the right metrics.

Now let's understand why the proposed method cannot improve IBP. The bounds themselves have a few issues:
1. The "s" bound is not a sound bound for interval analysis anymore, because it uses the wrong center z_nom (the correct center is (l+u)/2 if you propagate the "center" and "difference" along the network, as an alternative implementation of IBP; see the short sketch I include after this list). The authors claim that this is fine since we don't need sound bounds thanks to their "theory"; however, the "theory" itself is implausible, as will be explained below. Although this tampered "s" bound may empirically help to improve robustness, it is not theoretically sound; training a sound bound helps to obtain better certified accuracy.
2. The "v" bound is claimed to capture the second derivative of the activation function. However, first of all, for ReLU the second-order derivative does not exist at all. The authors also argue that "v" is a finite-difference-based bound; however, this is also not accurate, since when the bounds propagate to later layers both "s" and "v" can become large, and this can be a very bad "finite difference".
3. I do agree "v" somewhat regularizes linearity (assuming the "finite difference" is partially working). However, linearity is not guaranteed to produce good robustness, nor is it necessary. In fact, we should not impose unnecessary regularization on neural networks, since any regularization restricts their learning power. In some papers on empirical defense, linearity can sometimes help to reduce PGD error; however, in the certified setting, a direct surrogate for certifiable robustness like IBP usually produces the best results. The addition of unnecessary regularizations mostly makes results worse, unless you have a very good reason and demonstrate strong empirical evidence that it can significantly outperform the baseline. See https://arxiv.org/pdf/1807.09705.pdf for a case study on the failure of over-regularization.
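To spell out what I mean in point 1 above by propagating a "center" and a "difference", here is a minimal sketch of standard IBP for an affine layer followed by a ReLU (my own illustration in numpy, not the authors' code):

```python
import numpy as np

def ibp_affine(mu, r, W, b):
    # interval [mu - r, mu + r] pushed through x -> W x + b
    return W @ mu + b, np.abs(W) @ r

def ibp_relu(mu, r):
    l = np.maximum(mu - r, 0.0)
    u = np.maximum(mu + r, 0.0)
    return (l + u) / 2.0, (u - l) / 2.0   # new center is (l + u) / 2, NOT relu of the old center
```

After any non-linearity the sound center is (l+u)/2; re-using the nominal activation z_nom as the center, as the "s" bound does, is exactly what breaks soundness.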
I think the reason the authors still get a somewhat verifiable model is that the "s" bound sort of propagates a bound that is not sound but carries some similarity to IBP. IBP is a strong method, so even after tampering with it a little, you can still get something. The "v" bound implicitly regularizes the norm of the weight matrices, which helps to gain better certified accuracy only under convex-relaxation-based verification methods. I believe simply IBP + L1 regularization can achieve results similar to the proposed method, under the *wrong* evaluation metric in Table 2 and Figure 1. Under the correct evaluation metric, we shouldn't add this regularization term at all, as it harms performance.

The "theory" developed in Section 3 is unconvincing and cannot support the "bounds". There are a few problems:
1. The "theory" does not help us to find a good regularizer. When the authors argue that the gradient needs to be close to that of an "optimal" regularizer, we don't have the optimal regularizer at hand and have no idea how to approach it. Also, the inverse Hessian used for the distance metric in (6) is never known, so it is impossible to say which gradient is good and which is bad.
2. The assumption that lambda is close to zero is almost never true, yet Propositions 1 and 2 strongly depend on it. In the paper the authors use lambda=0.5 (and other similar numbers) and never decay it to zero. So the proposed training method cannot be supported by the "theory".
3. The "theory" makes weak or no connection to robustness guarantees; (4) is a classical result on the connection between test error and the global Lipschitz constant, and the connection between this bound and our goal (a robust classifier under adversaries) is too general and too weak. A more direct formulation, like minimax robust optimization, would be a much better surrogate.
4. The connection between the theory and the proposed bounds is vague; the authors claim "the gradient of a regularizer rather than its bound validity determines certified test loss", and in this sense, I can use any arbitrary loss function and call it a "regularizer". For example, I can use a "regularizer" that encourages BAD robustness, and it still fits the authors' explanation. This is like someone publishing a proof showing that P=NP with an argument that can equally be used to show P != NP. This is embarrassing.

The bottom line: I am not saying the propositions in this paper are technically wrong (under the strong assumptions the authors proposed); at least their derivations are straightforward and simple enough to check within a few minutes. However, they are too weak to guide us to find a good training method, too far removed from our goal of obtaining good robustness guarantees, and too general, to the point that you can use them to prove both sides. So I don't think the "theory" is useful, and the proposed "bounds" guided by the theory have also failed to improve the baseline. Sorry for the long comments, and I hope they can be helpful for the authors.

******

Reply to general author response:

The comparison in the "certifier" table is misleading. "CNN-Cert-Zero" seems to be a special case of CROWN, with a special setting of the lower bounds for ReLU. CROWN allows any slope between 0 and 1 as the lower bound, and "CNN-Cert-Zero" is just a special case of that. Most importantly, the main issue with the paper is not the verifier used; the main issue is that the proposed method performs worse than the baseline under correct evaluation, and the proposed "theory" is distracting or wrong.
The new empirical results still do not address any of my concerns: IBP still significantly outperforms the proposed method. For MNIST, Gowal et al. reported over 90% verified accuracy (the proposed method is at 75%); the IBP results provided in the author response have 0% verified accuracy at , which seems to be a problem or a bug. For the CIFAR eps=8/255 case, the authors keep de-tuning IBP models and obtain an IBP baseline with less than 20% verified accuracy, yet the IBP model reported in the literature (Gowal et al., 2018) can perform over 30%. The proposed method only performs around 20%, and the performance gap is huge.

******

Conclusions after author response:

After reading the author response, I still keep my score of reject, since the paper contains major technical errors. In a word, the theory is distracting or wrong, and the empirical results provided are intentionally misleading (the proposed method cannot outperform the baseline under the right evaluation metrics). The author response does not address any of the concerns I raised, yet the authors insisted that their "theory" is useful (which is apparently not true according to all reviewers) and provided more confusing and misleading results. I have written a long review with detailed reasons and hope the authors can understand why the proposed method fails, but it seems they completely ignored it and did not learn anything from it. This is quite disappointing.
iclr_2020_HJlnC1rKPB
Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with a sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that the phenomenon also occurs in practice, corroborating our analysis. Our code is publicly available.
The paper claims that 1. multi-head self-attention (MHSA) is at least as powerful as convolutions, by showing that a convolution can be cast as a special case of MHSA, and 2. that in practice, MHSA often mimics convolutional layers. These claims are interesting and timely, given that there has been a fair amount of recent work that has explored the use of self-attention (SA) on image tasks, either by composing SA with convolutions or replacing convolutions altogether with self-attention (examples of each are referenced in the paper). So should these claims be true, they would give theoretical evidence that SA can completely replace convolutions. However, I think that the claims are exaggerated and misleading.

1. The theory shows an arguably contrived link between self-attention and convolution. Theorem 1 says that a convolution can be seen as a special case of MHSA, and the constructive proof (that chooses SA parameters to derive a convolution) shows a correspondence between the output of each head of MHSA and a D_out by D_in linear transform applied to the D_in features of a single pixel, with attention weight given entirely to this pixel (i.e. hard attention). The derivation relies heavily on the use of a relative encoding scheme that sets W_qry = W_key = 0 (usually referred to as W^Q, W^K in the self-attention literature, the linear maps applied to the queries and keys), i.e. the attention weights do not depend on the key/query values, but only on their relative positions. Moreover, the softmax temperature (an interpretation of 1/alpha) is set arbitrarily close to 0 to make the softmax saturate and attain hard attention. With these two constraints, I am sceptical as to whether you can really say that you are implementing self-attention. In standard practice when MHSA is used, W^Q and W^K are never set to zero, and the scale of the logits for the self-attention weights is controlled by normalising them with sqrt(D_k) (or sqrt(D_k/N_h), depending on how you choose to deal with multiple heads). Furthermore, the derivation only holds when stride=1 and padding=“SAME”, such that the spatial dimensions of the input (H & W) remain unchanged. In fact the padding is not really dealt with in the derivation, and it is unclear whether the result can generalise to convolutions with stride > 1, making the claim “MHSA layer … is at least as powerful as any convolutional layer” problematic. Hence although I think the derivation is mathematically correct, I think that the link that the derivation makes between convolutions and MHSA is somewhat contrived and not a useful observation in practice. I expect MHSA with learned W_qry and W_key will behave differently to when they are set to 0, and it would be much more interesting/relevant to see how their behaviour compares with convolutions in this more realistic setting.

2. The heavy dependence of the experiments on the quadratic encoding, the aforementioned contrived form of MHSA that was used to derive the link between convolutions and MHSA, makes the results not very relevant and the claim that "MHSA often mimics convolutional layers" rather misleading. It could be more relevant if quadratic encoding could replace standard MHSA parameterisations with learned W_qry and W_key, but I’m not convinced that this is the case. (A short numerical aside on point 1 follows before I continue.)
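Aside on point 1: to convince myself of what the constructive proof amounts to, I wrote the toy check below. It is my own sketch in plain numpy, ignoring padding and any normalisation; it is not the authors' code. One head with hard attention on a fixed relative shift simply applies a D_out x D_in map to a single shifted pixel, i.e. one "tap" of a convolution, and summing K^2 such heads (one per shift in a K x K neighbourhood) after their output projections gives a stride-1 K x K convolution.

```python
import numpy as np

H, W, D_in, D_out = 5, 5, 3, 2
X = np.random.randn(H, W, D_in)          # input feature map
W_head = np.random.randn(D_out, D_in)    # the head's value/output map
dy, dx = 1, -1                           # the relative shift this head hard-attends to

out = np.zeros((H, W, D_out))
for i in range(max(0, -dy), H - max(0, dy)):        # interior pixels only (no padding)
    for j in range(max(0, -dx), W - max(0, dx)):
        out[i, j] = W_head @ X[i + dy, j + dx]      # attention weight 1 on a single pixel
```

Nothing here uses queries, keys or a softmax, which is exactly why the construction feels to me more like a re-parameterised convolution than like self-attention. (Point 2 continues below.)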
Although Figure 4 suggests that this SA with quadratic encoding gives similar test performance to ResNet18, I think that CIFAR-10 classification is too simple a task to claim that quadratic encoding can replace standard SA with learned W_qry and W_key, and I think results can look very different for harder problems, e.g. ImageNet and MSCOCO (explored in Ramachandran et al., and made possible because they use local SA as opposed to full SA). Experiments on these problems would be much more interesting and relevant. Note that in the experiments using the learned relative positional encoding, the “attention scores are computed using only the relative positions of the pixels and not the data” (I’m guessing this means W_qry=W_key=0 again). Hence the qualitative similarities between MHSA and convolutions only hold for a rather restricted case, where I get the impression that self-attention has been unrealistically constrained only to increase its chance of behaving similarly to convolutions.

Also, the comparison in Figure 4 and Table 1 is being used to support the claim that self-attention can be as “powerful” as convolutions, but I think this is misleading because both quadratic and learned SA use full SA, where each pixel attends to all pixels; this means the time and memory complexity of the algorithm is O((HW)^2), whereas for convolutions it is O(HW). So the expressiveness of SA that matches convolutions for this particular problem comes at a significant cost, to the extent that for bigger problems (ImageNet, MSCOCO) full SA is not feasible due to its quadratic memory requirement, whereas convolutions don’t face this problem. I think this should be pointed out more explicitly in the text, and I think the claim that “self-attention is at least as powerful as convolutions” should be replaced with a more moderate statement such as “self-attention defines a family of functions that contains convolutions (of stride 1)”.

Summary: Although the writing of the paper is clear and the derivation is mathematically correct as far as I can see, the link between self-attention and convolutions in the paper is fairly contrived; hence the contribution of the paper to the field is not so significant, in my honest opinion.

********************

I appreciate the authors' response, and understand that the maths suggests a single head of MHA (in the original form) cannot exactly emulate a general convolution. But empirically, the localised attention patterns do seem to suggest that each head can behave similarly to a restricted form of convolution, where similar weights are given to the receptive field (the local patch) in the neighbourhood of each input pixel. Perhaps an analysis of what special case of convolution each head can emulate would be interesting, given the empirically observed similarities in the qualitative behaviour. With the more justified nuance of the findings of the paper, and together with the authors' significant efforts to make the evaluation more relevant and thorough, I will increase my score to "weak accept".
iclr_2020_HJeOekHKwr
Generative adversarial networks, or GANs, commonly display unstable behavior during training. In this work, we develop a principled theoretical framework for understanding the stability of various types of GANs. In particular, we derive conditions that guarantee eventual stationarity of the generator when it is trained with gradient descent, conditions that must be satisfied by the divergence that is minimized by the GAN and the generator's architecture. We find that existing GAN variants satisfy some, but not all, of these conditions. Using tools from convex analysis, optimal transport, and reproducing kernels, we construct a GAN that fulfills these conditions simultaneously. In the process, we explain and clarify the need for various existing GAN stabilization techniques, including Lipschitz constraints, gradient penalties, and smooth activation functions.
This paper provides a unified theoretical framework for regularizing GAN losses. It accounts for most regularization techniques, especially spectral normalization and gradient penalty, and explains how those two methods are in fact complementary. So far this had only been observed experimentally, without any theoretical insight. The result goes beyond that, as the criterion can be applied to general convex cost functionals. The main general theorem is Theorem 1, which states 3 conditions on the optimal critic and 2 others on the generator. The paper is mainly concerned with the conditions on the optimal critic and shows that the first 2 conditions can be achieved by spectral normalization, while the last one can be achieved by some gradient penalty.

The paper is clearly written, well structured and pleasant to read. I have the following two remarks:
- Proposition 8 provides a way to ensure condition 2 holds (beta-smoothness). It requires spectral normalization and smooth activation functions. In practice, while the spectral normalization is important, the choice of activation is not in general 1-smooth (Leaky-ReLU for instance). Does it really matter in practice? Some illustrative experiments could be beneficial to better understand what's happening.
- Is it that hard to obtain generators that satisfy conditions G1 and G2? It seems to be a natural consequence of the regularity of the mapping f. If that is the case, it might be worth better explaining how this is challenging.

Limitations: The paper considers only the setting where the optimal critic is reached, and therefore it is still unclear if the analysis carries over to the training procedures used in practice (non-optimal critic). The authors recognize this limitation and leave it for future work.

Overall, I feel that the paper provides good insights on which regularization is important for training GANs and why. For that reason, I think this paper should be accepted.

------------------------------------------------------------------------------------------------------------

Revision: I think the paper provides a good theoretical contribution in terms of interpreting many of the tricks used for improving GAN training. In fact the paper also suggests some new regularization methods (Prop. 13 for condition D3) which would constrain the RKHS norm of the critic. The authors show how this is related to the gradient penalty in a particular case, but the result also suggests something more general. For instance, [1] consider an abstract RKHS containing deep networks and provide an upper bound on the RKHS norm of such networks in terms of the spectral norms of their weights and a lower bound in terms of their Lipschitz constant. I do agree with reviewer 1 that a better discussion of the connection to [2] should be included, since that paper was interested in ensuring weak continuity of the loss, which can be thought of as a first requirement to get more regularity of the cost functional. I still think the paper is worth being accepted, and I raised my score to 8 as I think the authors addressed the major concerns that were raised.

[1] A. Bietti, G. Mialon, D. Chen, and J. Mairal. A Kernel Perspective for Regularizing Deep Neural Networks.
[2] Michael Arbel, Dougal Sutherland, Mikołaj Binkowski, and Arthur Gretton. On gradient regularizers for MMD GANs. In Advances in Neural Information Processing Systems, pp. 6700–6710, 2018.
iclr_2020_H1eJAANtvr
Deep learning based approaches have been widely used in various urban spatiotemporal forecasting problems, but most of them fail to account for the unsmoothness issue of urban data in their architecture design, which significantly deteriorates their prediction performance. The aim of this paper is to develop a novel clustered graph transformer framework that integrates both a graph attention network and a transformer under an encoder-decoder architecture to address such an unsmoothness issue. Specifically, we propose two novel structural components to refine the architectures of existing deep learning models. In the spatial domain, we propose a gradient-based clustering method to distribute different feature extractors to regions in different contexts. In the temporal domain, we propose to use multi-view position encoding to address the periodicity and closeness of urban time series data. Experiments on real datasets obtained from a ride-hailing business show that our method can achieve 10%-25% improvement over many state-of-the-art baselines.
Summary: This paper proposes a clustering attention-based approach to handle the problem of unsmoothness while modeling spatio-temporal data, which may be divided into several regions with unsmooth boundaries. With the help of a graph attention mechanism between vertices (which correspond to different regions), the CGT model is able to model the (originally unsmooth) cross-region interactions, just like how Transformers are applied in NLP tasks (where words are discrete). Experiments seem to suggest a big improvement when compared to baselines.

Pros:
+ This should be one of the first works that apply a graph-transformer-like method in this domain, and specifically to the unsmoothness problem.
+ Since the dataset is not publicly available, there aren't many prior works to compare the CGT to. However, at least compared to the one prior work [1] that the authors point to in Section 4, the RMSE results achieved by CGT do seem to be significantly better.

========================================

However, I still have some questions/concerns on the paper, detailed below.

1) The current organization of the paper, as well as its clarity, can (and should) be significantly improved. I didn't completely understand the approach on my first two passes, and I **had** to read the code published by the authors. Here are some issues that I found:
- For one, Figure 2 is not quite helpful as it's too messy, with a font size that is too small. A similar problem holds for Figure 4 which, without further clarification (e.g., of what "atrous aggregation" exactly means), is very hard to interpret.
- The notations are very inconsistent and messy:
i) In Eq. (1), you should use a symbol different from $\mathbf{X}$ to refer to the "predictions". Since you are applying $f(\cdot)$ on $\mathbf{X}_{t-T_x+1:t}$, you should not get the "exact same" target sequence. That's your target. Maybe use $\hat{\mathbf{y}}$, which you used in Eq. (8).
ii) In Figure 3, what is the orange line? In addition, I only saw two blue lines in the figure, but the legend seems to suggest there are four of them...
iii) The notations used in Figure 4 are somewhat confusing. For example, what does "f->1" mean? (I later found through Eq. (2) that it means a transform to 1 dimension; but the small plots in Figure 4 suggest f is a "magnitude" of the feature.) In addition, there are two $H_1$ in Figure 4 with clearly different definitions.
iv) The authors used $\mathcal{G}_{\theta_k}(x_i)$ in Eq. (3) without defining it. The definition actually comes much later in the text, in Eq. (6). I suggest moving the usage of the clustering assignment (i.e., Eq. (3)) to after Eq. (6).
v) What does $[\cdot || \cdot]$ mean (cf. Eq. (4))? (The code seems to suggest it's concatenation?)
vi) The authors first used $h_{x_i}$ in Eq. (3) to denote the output of the CAB module. The letter $h$ is then re-used in Eq. (4) and (5) with completely different meanings. For instance, the $W_k h_i$ in Eq. (4) corresponds to line 48 of the code "model.py". (By the way, nowhere around Eq. (4) did the authors explain how $h_i$ is produced, such as by taking the mean over the batch dimension, etc.)
vii) In Section 2.6, you denote the "optimal vertex cluster scheme" with the letter $C$, which is already used in Eq. (2). Similarly for the parameter $a_k$ and the atrous offset $a$.
- This is not a very big problem (as it seems somewhat inevitable), but I think there are too many acronyms in the paper.
I think it'd be great if the authors can take care of these issues, as clarity in the math and descriptions is critical to the presentation of such an involved method. It would also be useful to clearly define the dimensionality of all the variables (e.g., you defined $V$ in Section 2.1, but never used it again in later subsections).

2) Regarding the usage of the multi-view position encoding, the authors claimed that it "provides unique identifiers for all time-steps in temporal sequences". However, if you consider $x=7$ and $x=14$, then $PE_i(7)=PE_i(14)$ for all $i=1,2,3,4$ with $PE_5(7) \approx PE_5(14)$. Doesn't this invalidate the authors' claim? Also, doesn't this mean that the proposed MVPE only works on sequences with length <= 7? (In comparison, the design of the positional encoding in the original Transformer doesn't have this problem.) (You didn't show how you implemented and initialized the position encoding in the uploaded code, so I may be missing some assumptions here.)

3) In line 48 of the code (https://github.com/CGT-ICLR2020/CGT-ICLR2020/blob/master/model.py#L48), why did you take the mean over the batch dimension? Shouldn't different samples in a minibatch be very different? Does a (potentially completely independent) sample in a batch affect another sample? A similar problem occurs for Eq. (9): why do you require the clusterings of two different samples $b_1, b_2$ to be similar? (These samples can come from quite different times and years of the data.)

4) In the experiments, you "sampled 10 input time-steps" due to computational resources. Typically, in Transformer-based NLP tasks the sequence lengths can be over 500, with much higher dimensionality (e.g., 512); but you are only using sequence length 10 and dimensions <= 16 (in your code, you used "self.dec_io_list = [[5,8,16,16],[16,16,16,16],[16,8,8,8]]"). What is the bottleneck for the computation of your approach? (I noticed there are more than 1K vertices in city A, which may be a costly factor indeed.) How much memory/compute does the CGT method consume? How does using a longer sequence affect the performance of CGT?

5) You performed an ablation study on MVPE. Did you simply remove MVPE, or did you use the conventional PE from the original Transformer paper (Vaswani et al., 2017)? (If the latter, I'm very surprised that MVPE is so much better than PE. In that case, you may want to try MVPE on NLP tasks to see if it also improves SOTA.)

6) How did you measure unsmoothness in Figure 6? It doesn't seem like a quantifiable property to me. You should discuss this in the experiment section.

-----------------------------------

Minor questions/issues that did not affect the score:

7) There are some strange phrases/sentences in the paper. For example, the first sentence of the 2nd paragraph of Section 1: "we will show **throughout the paper** that urban spatiotemporal prediction task suffers from..."

8) Why use an encoder-decoder architecture at all? Why can't we train the model like in language modeling tasks, where we want to predict the next token? In other words, you can simply use a decoder-side CGT and mask the temporal self-attention as in the Transformers.

-----------------------------------

In general, I think this paper proposes a valuable approach that seems to work very well on the spatio-temporal dataset they used (which unfortunately is private). However, as I pointed out above, I still have numerous issues with the paper's organization and clarity, as well as some doubts over the methodology and the experiments.
I'm happy to consider adjusting my score if the authors can resolve my concerns satisfactorily. [1] http://www-scf.usc.edu/~yaguang/papers/aaai19_multi_graph_convolution.pdf
iclr_2020_H1lBYCEFDB
Most neural networks are trained using first-order optimization methods, which are sensitive to the parameterization of the model. Natural gradient descent is invariant to smooth reparameterizations because it is defined in a coordinate-free way, but tractable approximations are typically defined in terms of coordinate systems, and hence may lose the invariance properties. We analyze the invariance properties of the Kronecker-Factored Approximate Curvature (K-FAC) algorithm by constructing the algorithm in a coordinate-free way. We explicitly construct a Riemannian metric under which the natural gradient matches the K-FAC update; invariance to affine transformations of the activations follows immediately. We extend our framework to analyze the invariance properties of K-FAC applied to convolutional networks and recurrent neural networks, as well as metrics other than the usual Fisher metric.
This paper is concerned with tractable (approximate) forms of natural gradient updates for neural networks, in particular with the recent K-FAC approximation, which applies a set of approximations (layer-wise independence, Kronecker structure for affine maps) in order to obtain a Hessian that can be computed and inverted efficiently. K-FAC was introduced for MLPs and has previously been generalized to convolutional and certain recurrent NNs. The stated goal of this paper is to provide a mathematical re-formulation of K-FAC in terms of Riemannian metrics. While K-FAC was developed as an approximation to the exact natural gradient update, the authors come up with a different Riemannian metric, definition of the space, etc., such that in the end, K-FAC is the exact natural gradient for that construction. The authors also obtain a more precise answer about invariance properties and, given some heavy maths, what they claim to be more elegant proofs of previously known properties of K-FAC.

The paper uses very heavy math, well "over my head" and likely over the heads of most ICLR attendees. Along with me, they'll ask the obvious question of what this is good for. As far as I can see, there is nothing really new being proposed here in terms of practical consequences. The authors also do not make much effort to explain why their viewpoint is useful, say to obtain practically relevant insights in future work. So, as far as I am concerned, I do not see why this work should be of much relevance to ICLR, which is not an abstract maths conference.

A final comment is that people have for a very long time tried to use second-order optimization for MLPs. The aspect that was always tricky there is that SGD is *stochastic*, and the second-order info is hard to estimate from a mini-batch. The sets of approximations of K-FAC are pretty extreme, but they may just be needed to make things work in the end, because they may stabilize that "stochastic inverse Fisher info matrix" enough to not make the optimization process fail altogether. Now, all theoretical arguments, like "invariance to this and that", always ignore the crucial fact that you are talking about a stochastic estimate over a mini-batch, and your theory is always for E_x[...] "being the truth". It is not; it is just over a small mini-batch. I am not saying the additional theoretical insight from this work (over previous K-FAC work), whatever it may be in the end, is not useful. I am just saying I'd be a lot more confident if the authors would specifically address the stochastic property.
iclr_2020_ByxhOyHYwH
Few-shot classification is a challenging task due to the scarcity of training examples for each class. The key lies in the generalization of prior knowledge learned from large-scale base classes and the fast adaptation of the classifier to novel classes. In this paper, we introduce a two-stage framework. In the first stage, we attempt to learn task-agnostic features on base data with a novel Metric-Softmax loss. The Metric-Softmax loss is trained against the whole label set and learns more discriminative features than episodic training. Besides, the Metric-Softmax classifier can be applied to base and novel classes in a consistent manner, which is critical for the generalizability of the learned features. In the second stage, we design a task-adaptive transformation which adapts the classifier to each few-shot setting very quickly within a few tuning epochs. Compared with existing fine-tuning schemes, the scarce examples of novel classes are exploited more effectively. Experiments show that our approach outperforms current state-of-the-art methods by a large margin on the commonly used mini-ImageNet and CUB-200-2011 benchmarks.
# Summary
This paper deals with few-shot learning from a metric-learning perspective. The authors propose replacing the softmax loss, i.e. softmax + cross-entropy loss, with a so-called "metric-softmax" loss which imitates a Gaussian RBF kernel over class templates/weights. This loss is used in both stages of training, on base and on novel classes, and the authors argue that it helps learn more discriminative features while preserving consistency between train and test time. Secondly, the authors advance a task-adaptive transformation for stage 2 that maps the features from the previously learned feature extractor to a space which is easier to learn. The contributions are evaluated on the standard mini-ImageNet benchmark and on CUB-200-2011, individually and in domain-shift mode.

# Rating
Although some of the results in the paper might look impressive, my rating for this work is reject for the following reasons (which will be detailed below): 1) the main contribution, the metric-softmax loss, is not novel; it has been used and described in multiple works in the past 1-2 years. 2) a part of the evaluations and comparisons do not follow the usual protocol and are not fair. 3) the second contribution, Fast Task Adaptation (FTA), is not well described, and it's unclear what it actually consists of, how it works and how exactly it was trained.

# Strong points
- This paper deals with a highly interesting and relevant topic for ICLR.

# Weak points
## Contributions
- This work ignores a large body of research in few-shot learning and metric learning aiming to improve the efficiency per training sample and feature discrimination. The proposed loss can be traced back to Goldberger et al. [i] in NCA (Neighborhood Component Analysis). Prototypical Networks are derived from this work and are hence similar to the metric-softmax loss. Qi et al. [ii] point out that when h and W are l2-normalized (using the notation from this submission, Eq. 7), maximizing their inner product or cosine similarity is equivalent to minimizing the squared Euclidean distance between them (the identity is spelled out in the postscript after the references). This leads to the loss from [ii], [iii], also known as the Cosine Classifier, which is also accompanied by a scaling factor or temperature, as here. Other related works on improving the softmax and selecting the representative weights for a class include: center loss [iv], ring loss [v], L-GM loss [vi]. In this light, the metric-softmax is actually not novel and can be found in several other contributions from last year.
- In my opinion, it is difficult to understand from the paper what the fast adaptation module actually is. The authors describe $g$ as "simply a zero-offset affine transformation" $g(h)= M^T h$. In the implementation details we do not find out more about this module, and we do not get more insight into what it is doing inside, other than a toy hand-drawn example in Figure 3. I find it difficult to assess.

## Experiments
- The authors evaluate 3 backbone architectures, Conv-4, ResNet-10 and ResNet-12. For the former they use 84 x 84 images, while for the latter they use 224 x 224 images. The larger images are not standard in the few-shot ImageNet evaluation protocol. Data augmentation (jittering, flipping, etc.) is used here, while in most works it is not. Chen et al. are the first ones to introduce larger images and data augmentation and acknowledge that the large scores are due to this. Testing out a new configuration is not a problem as long as the baselines are evaluated in the same conditions.
However, in this case they are not and this is not visible in the captions of the tables and descriptions in the paper. Training a network with data augmented images and/or higher resolution images and comparing to baselines without data augmentation and images with 7 times less pixels, for sure does not allow seeing the true impact of the proposed method. I would advise to either evaluate in the usual mini-ImageNet settings, either implement a few representative and easy to train baselines, e.g. ProtoNets, Cosine Classifier[iii] in the same conditions as here and compare against. This should provide a better idea on the effectiveness of the proposed methods. ## Other comments - the scores for baseline methods are seemingly taken from the paper of Chen et al. who trained them themselves. This should be mentioned in the paper and in the caption # Suggestions for improving the paper: 1) Review the experimental section and make sure at least some of the baselines are trained in similar conditions as the proposed method or alternatively evaluate the proposed methods in standard mini-ImageNet settings 2) Provide additional insights, experiments and implementation details for FTA to make it easier to understand, there are some examples in the references below. # References [i] J. Goldberger et al., Neighbourhood components analysis, NIPS 2005 [ii] H. Qi et al., Low-Shot Learning with Imprinted Weights, CVPR 2018 [iii] S. Gidaris and N. Komodakis, Dynamic Few-Shot Visual Learning without Forgetting, CVPR 2018 [iv] W. Wen et al., A Discriminative Feature Learning Approach for Deep Face Recognition, ECCV 2016 [v] Y. Zeng et al., Ring loss: Convex Feature Normalization for Face Recognition, CVPR 2018 [wi] W. Wan et al., Rethinking Feature Distribution for Loss Functions in Image Classification, CVPR 2018
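To make the equivalence from Qi et al. [ii] invoked under Contributions explicit, here is the one-line derivation for an l2-normalized feature $h$ and class weight $W$ (a reference derivation added for clarity, using this submission's notation):

$$\|h - W\|_2^2 \;=\; \|h\|_2^2 + \|W\|_2^2 - 2\,h^\top W \;=\; 2 - 2\,h^\top W,$$

so minimizing the squared Euclidean distance between normalized vectors is the same as maximizing their inner product / cosine similarity, which is why an RBF-style metric-softmax over normalized features reduces to a cosine classifier with a temperature.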
iclr_2020_SyxS0T4tvS
Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE, SQuAD, SuperGLUE and XNLI. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.
This paper presents a replication study of BERT pretraining and carefully measures the impact of many key hyperparameters and training data size. It shows that BERT was significantly undertrained and proposes an improved training recipe called RoBERTa. The key ideas are: (i) training longer with bigger batches over more data, (ii) removing NSP, (iii) training over long sequences, and (iv) dynamically changing the masking pattern. The proposed RoBERTa achieves/matches state-of-the-art performance on many standard NLU downstream tasks. The in-depth experimental analysis of the BERT pretraining process in this paper answers many open questions (e.g., the usefulness of the NSP objective) and also provides some guidance on how to effectively tweak the performance of pretrained models (e.g., large batch size). It also further demonstrates that the BERT model, once fully tuned, could achieve SOTA/competitive performance compared to the recent new models (e.g., XLNet). The main weakness of the paper is that it is mainly based on further tuning the existing BERT model and lacks a novel contribution in model architecture. However, the BERT analysis results provided in this paper should also be valuable to the community.
Questions & Comments:
• It is stated that the performance is sensitive to epsilon in AdamW. This reminds us of the sensitivity of BERT pretraining to the optimizers. Since one of the main contributions of this paper is the analysis of the BERT pretraining process, more experimental analysis of the optimizer should also be included.
• It is stated (page 7) that the submission to the GLUE leaderboard uses only single-task finetuning. Is there any special reason for restricting it to single-task finetuning if earlier results demonstrate that multi-task finetuning is better? Of course, it is valuable to see the great performance achieved by single-task finetuning for RoBERTa. But there should be no reason that it is restricted to be so. Additional experimental results with multi-task finetuning should also be added.
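As a side note on point (iv), the dynamic-masking idea can be made concrete with a small sketch (a simplified illustration, not the paper's code: it uses toy string tokens instead of token ids, and assumes BERT's standard 80/10/10 corruption split):

```python
import random

MASK = "[MASK]"

def dynamic_mask(tokens, vocab, mask_prob=0.15):
    # Re-sample the masked positions on every pass over the data ("dynamic" masking),
    # instead of fixing them once during preprocessing as in the original BERT setup.
    corrupted, targets = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            targets[i] = tok                         # MLM target = original token
            r = random.random()
            if r < 0.8:
                corrupted[i] = MASK                  # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted[i] = random.choice(vocab)  # 10%: replace with a random token
            # remaining 10%: keep the token unchanged
    return corrupted, targets

# Called inside the data loader, the same sentence receives a different mask
# pattern each epoch, which is what distinguishes dynamic from static masking.
```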
iclr_2020_SygkSkSFDB
This work examines the convergence of stochastic gradient algorithms that use early stopping based on a validation function, wherein optimization ends when the magnitude of a validation function gradient drops below a threshold. We derive conditions that guarantee this stopping rule is well-defined and analyze the expected number of iterations and gradient evaluations needed to meet this criteria. The guarantee accounts for the distance between the training and validation sets, measured with the Wasserstein distance. We develop the approach for stochastic gradient descent (SGD), allowing for biased update directions subject to a Lyapunov condition. We apply the approach to obtain new bounds on the expected running time of several algorithms, including Decentralized SGD (DSGD), a variant of decentralized SGD, known as Stacked SGD, and the stochastic variance reduced gradient (SVRG) algorithm. Finally, we consider the generalization properties of the iterate returned by early stopping.
The paper studies the problem of the number of first-order-oracle calls needed for SGD-type algorithms to find a stationary point of the objective function. The main results in the paper are built upon a new, general framework to analyze SGD-type algorithms. The main framework can be summarized as follows: at each iteration, the algorithm receives h_t, a (potentially biased) estimator of the gradient at a given point x_t, and performs a simple update x_{t+1} = x_t - \eta h_t. The framework says that as long as the norm (V_t) of \Delta_t = h_t - v_t (where v_t is an unbiased estimator of the true gradient with bounded variance) satisfies a particular Lyapunov-type inequality, the algorithm can find an epsilon-stationary point as long as epsilon is not too small. The analysis of the framework is quite standard: one only needs to decompose the decrement in function value at each iteration into the following three terms: the norm of the true gradient of the function; \delta_t, the difference between v_t and the true gradient (so E[\delta_t] = 0); and \Delta_t, the difference between the received gradient h_t and v_t. The authors show some applications of this framework to Stacked SGD and decentralized SGD. The main intuitions of these applications are: (1) \Delta_t comes from the synchronization difference of the nodes when computing the gradient; (2) the shrinking of V_t is due to the (better) synchronization at each iteration; (3) the increment of V_t is due to the gradient update. Overall, I find the general framework quite interesting and potentially useful for future research, and it could be used as a guide for choosing the proper algorithm in distributed computation. The only question I have about this result is the dependence on m (the number of iterations between each evaluation of the gradient norm of the underlying function).
(1) How can this (the evaluation of the gradient norm of the underlying function) be done in a decentralized environment? What is the computation overhead? (For example, in DSGD, how can we compute \bar{x}_t?)
(2) It seems that the computation cost (number of IFO calls) scales quadratically with respect to m. What is the intuition for this scaling? It appears to me that the scaling should be linear or better (the worst case is that within the "m" iterations, only one iteration has gradient >= epsilon). The authors should elaborate more on this point.
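For readers who want the scheme under discussion in concrete form, here is a minimal sketch of SGD with a validation-gradient stopping rule checked every m iterations (the names, the toy quadratic, and the exact-norm "validation" oracle are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def sgd_with_validation_stopping(x0, stoch_grad, val_grad_norm,
                                 lr=0.01, eps=1e-3, m=100, max_iters=100_000):
    # Plain SGD with a (possibly biased) gradient estimate h_t and an early-stopping
    # rule based on the validation gradient norm, evaluated every m iterations.
    x = np.array(x0, dtype=float)
    for t in range(max_iters):
        h_t = stoch_grad(x)            # possibly biased estimator of the true gradient
        x = x - lr * h_t               # x_{t+1} = x_t - eta * h_t
        if (t + 1) % m == 0 and val_grad_norm(x) < eps:
            return x, t + 1            # stop once the validation criterion fires
    return x, max_iters

# Toy usage: f(x) = 0.5 * ||x||^2 with noisy gradients and the exact norm as "validation".
rng = np.random.default_rng(0)
x_hat, n_iters = sgd_with_validation_stopping(
    np.ones(5),
    stoch_grad=lambda x: x + 0.1 * rng.standard_normal(x.shape),
    val_grad_norm=lambda x: np.linalg.norm(x),
)
```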
iclr_2020_rJxYMCEFDr
Deep neural networks represent data as projections on trained weights in a high dimensional manifold. This is a first-order based absolute representation that is widely used due to its interpretable nature and simple mathematical functionality. However, in the application of visual recognition, first-order representations trained on pristine images have shown a vulnerability to distortions. Visual distortions including imaging acquisition errors and challenging environmental conditions like blur, exposure, snow and frost cause incorrect classification in first-order neural nets. To eliminate vulnerabilities under such distortions, we propose representing data points by their relative positioning in a high dimensional manifold instead of their absolute positions. Such a positioning scheme is based on a data point's second-order property. We obtain a data point's second-order representation by creating adversarial examples to all possible decision boundaries and tracking the movement of corresponding boundaries. We compare our representation against first-order methods and show that there is an increase of more than 14% under severe distortions for . We test the generalizability of the proposed representation on larger networks and on 19 complex and real-world distortions from CIFAR-10-C. Furthermore, we show how our proposed representation can be used as a plug-in approach on top of any network. We also provide methodologies to scale our proposed representation to larger datasets.
The authors propose an approach to reduce the sensitivity of neural networks to visual distortions. To do so, they modify the representation of a data point within a network, using its relative position (to other points) in representation space rather than its absolute position (which is measured as the distance of a point to the decision boundary of each class). They then evaluate their approach on common corruptions from the CIFAR-C dataset.
The paper addresses an important problem; however, I do not find the high-level motivation behind the proposed approach or the experimental results sufficiently convincing. In particular, conceptually, it is completely unclear why the second-order representations should be more resilient to visual distortions. Even in the sample illustration provided in Figure 2, it is evident that the clusters in this new representation space are not invariant to visual distortion, or even significantly more invariant than the first-order representation. At a more fundamental level, given that the accuracy of the original network does drop, it is tautological that the distance of a point to the decision boundaries is decreasing under distortion (and thus is sensitive to it).
Experimentally, my chief concerns are:
1. When the authors evaluate the proposed second-order representations, they use networks with additional layers which do not seem to be present in the original baseline. Prior work [Hendrycks and Dietterich, 2019] has shown that model capacity has a marked influence on the corruption robustness of a network. Thus, it is unclear whether the improvement here comes solely from the additional capacity compared to the original model. The baseline network that the authors compare to should also include the additional layers.
2. The results reported for some of the baselines seem inconsistent with prior work.
a) For example, the authors state that the results for the Hossain et al. [2018] baseline are similar to NR1, which is worse than the original network. However, Hossain et al. report an improvement for the same corruptions and the same dataset in their paper. Where is this inconsistency coming from? Do the authors train with the DCT filtering or is it only applied at test time? If it is the latter, it could explain why the authors fail to reproduce the baseline correctly.
b) For the adversarially robust network baseline, why did the authors choose an FGSM adversary? FGSM is not typically used to train state-of-the-art robust models because of the existence of much stronger attacks (such as PGD). How was the eps used to train the network chosen? In prior work, Kang et al. [2019; arxiv:1908.0801] evaluate both L2 and Linf robust models and show improvements over the baseline for several common corruptions. This seems to suggest that the robust model baseline reported in this paper is not accurate/representative.
3. Moreover, given that the representation size scales with the number of classes, the proposed method should be evaluated on datasets with more classes such as CIFAR-100 or ImageNet. Improvements demonstrated in these settings with dimensionality reduction would be more convincing.
Overall, my main reservations are: a) the lack of a conceptual justification for the proposed approach, and b) issues with the experimental evaluation, particularly in the reported baselines and how they seem to contradict prior work. Thus, I recommend rejection.
Other comments:
- The right side of Figure 1 is essentially the same map flipped.
- Why was the MSE loss chosen for J? Given that the network was presumably trained with cross-entropy, this choice seems somewhat arbitrary. Are the results consistent for the cross-entropy loss as well? The authors should include these results in the appendix.
- Figure 4 is very hard to read---the authors should change the plotting style to make the results more legible.
- For the results in Figure 5, the authors should once again compare to adding extra layers to the baseline networks as well.
- In Table 1, why does the performance of NR improve when the features are based only on gradients for the last layer?
iclr_2020_SJgdnAVKDH
Self-training is one of the earliest and simplest semi-supervised methods. The key idea is to augment the original labeled dataset with unlabeled data paired with the model's prediction (i.e. the pseudo-parallel data). While self-training has been extensively studied on classification problems, in complex sequence generation tasks (e.g. machine translation) it is still unclear how self-training works due to the compositionality of the target space. In this work, we first empirically show that self-training is able to decently improve the supervised baseline on neural sequence generation tasks. Through careful examination of the performance gains, we find that the perturbation on the hidden states (i.e. dropout) is critical for self-training to benefit from the pseudo-parallel data, which acts as a regularizer and forces the model to yield close predictions for similar unlabeled inputs. Such an effect helps the model correct some incorrect predictions on unlabeled data. To further encourage this mechanism, we propose to inject noise into the input space, resulting in a "noisy" version of self-training. Empirical study on standard machine translation and text summarization benchmarks shows that noisy self-training is able to effectively utilize unlabeled data and improve the performance of the supervised baseline by a large margin.
This paper presents a self-training approach for improving sequence-to-sequence tasks. As a preliminary experiment, this study randomly sampled 100k sentences from the WMT 2014 English-German dataset (WMT100K, hereafter), trained a baseline (Transformer) model on WMT100K, and applied self-training methods on the remaining English sentences as the unlabeled monolingual data. After exploring different procedures for self-training, this study uses the fine-tuning strategy: train a model on the supervision data; build pseudo-parallel data by predicting translations for all unlabeled data using the trained model; train a new model on the pseudo-parallel data; and fine-tune the new model on the supervision data. This strategy alone gave a 3-point improvement in BLEU. This paper hypothesized the reasons behind the performance improvements: beam-search decoding (+0.5 BLEU) and dropout (+1.2 BLEU). The authors argue that the beam-search decoding contributes partially to the performance gains, while the dropout accounts for most of it. The authors infer that the dropout causes an implicit perturbation. Exploring different perturbation strategies, synthetic noise (e.g., input tokens are randomly dropped, masked, and shuffled) and paraphrase (round-trip translation, e.g., English-German-English), the authors reported no significant difference between these two strategies. Finally, this paper reports empirical results (on WMT 2014 English-German, FloRes English-Nepali, and English Gigaword (summarization task)) of the self-training strategies presented in this paper. This paper concluded that self-training can be an effective method to improve generalization and that the noise injected during self-training plays a critical role for its success.
This paper is well structured and well written. It is interesting to see the improvements of the sequence-to-sequence tasks by using the self-training approach. This paper is interesting because it has a close connection with the back-translation approach that has been popular in recent years. Although the hypotheses presented in this paper are interesting, they were not fully validated after all. The analyses on the loss functions, ablation tests, and experiments on the toy task can only bring indirect explanations about why we could observe the performance improvements. This impression is also demonstrated by the fact that this paper uses 'might' five times when explaining interpretations of the experimental results. Having said that, I tend to agree that identifying the exact reason is difficult. However, I have two other questions before recommending this paper: (1) whether the baseline model was trained sufficiently, and (2) whether this paper is about self-training strategies or regularization methods.
(1) In order to accept the experimental results that the self-training approach can improve the performance, we need to make sure that the baseline model was trained sufficiently. However, the appendix explains, "we basically use the same learning rate schedule and label smoothing as in fairseq examples." I'm not sure whether this training procedure was fair among different models because the baseline model and self-supervised model received totally different numbers of training instances (100k vs 3.9M). This claim would be stronger if this paper could show evidence that the baseline model was trained properly by, for example, explaining the stopping criteria for iterations, tuning hyper-parameters (e.g., learning rate) individually for the baseline and self-trained models, showing the mean and variance of BLEU scores with different initializations, and showing the training curve of the baseline model.
(2) We can view the self-training strategies presented in this paper as regularization methods. For this reason, I am wondering whether the self-training strategies presented in this paper are only for self-training or general to the pure supervised setting. We can easily guess that the performance would drop if we removed the dropout from the baseline method. In contrast, I would like to see whether the synthetic noise could improve the performance of the baseline method alone, behaving as a regularization method. It would be useful to see the performance of the baseline method without the dropout and with the synthetic noise to highlight the effect of the presented strategies under the self-training scenario.
Minor comments:
It would be useful to see the number of unlabeled instances in Section 3.1.
Section 3.2: "This is different from back-translation where new knowledge may originate from an additional backward translation model." I'm not sure whether a backward translation model can introduce new knowledge because the supervision data are usually the same between forward and backward directions.
Section 3.3: "at test/decoding time the model does not use dropout" To be precise, the weights are scaled by the dropout rate (p) at decoding time.
Reference: Lample et al. 2018 should be replaced with another paper: Guillaume Lample, Alexis Conneau, Ludovic Denoyer, Marc'Aurelio Ranzato. Unsupervised Machine Translation Using Monolingual Corpora Only. ICLR 2018.
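To keep the procedure under discussion concrete, here is a minimal sketch of the fine-tuning-style noisy self-training loop summarized at the top of this review (the arguments train/decode/perturb are placeholders standing in for full seq2seq training, beam-search decoding, and the drop/mask/shuffle input noise; this is an illustration, not the authors' code):

```python
def noisy_self_training(parallel, unlabeled_src, train, decode, perturb, rounds=1):
    model = train(init=None, data=parallel)              # 1) supervised baseline
    for _ in range(rounds):
        pseudo = [(perturb(x), decode(model, x))         # 2) pseudo-parallel pairs:
                  for x in unlabeled_src]                #    decode clean x, noise the input side
        model = train(init=None, data=pseudo)            # 3) train a new model on pseudo data
        model = train(init=model, data=parallel)         # 4) fine-tune on the real parallel data
    return model
```

The question raised in (2) above amounts to asking how much of the gain survives if perturb is applied to the baseline's supervised training alone, without steps 2-4.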
iclr_2020_ryeRn3NtPH
We propose Adversarial Inductive Transfer Learning (AITL), a method for addressing discrepancies in input and output spaces between source and target domains. AITL utilizes adversarial domain adaptation and multi-task learning to address these discrepancies. Our motivating application is pharmacogenomics where the goal is to predict drug response in patients using their genomic information. The challenge is that clinical data (i.e. patients) with drug response outcome is very limited, creating a need for transfer learning to bridge the gap between large pre-clinical pharmacogenomics datasets (e.g. cancer cell lines) and clinical datasets. Discrepancies exist between 1) the genomic data of pre-clinical and clinical datasets (the input space), and 2) the different measures of the drug response (the output space). To the best of our knowledge, AITL is the first adversarial inductive transfer learning method to address both input and output discrepancies. Experimental results indicate that AITL outperforms state-of-the-art pharmacogenomics and transfer learning baselines and may guide precision oncology more accurately.
The paper proposes an adversarial transfer learning network that can handle the adaptation of both the input space and the output space. The paper is motivated by the application of drug response prediction, where the source domain is cell line data and the target domain is patient data. Patient data are usually scarce, hence motivating transferring the knowledge learned from the more widely available cell line data to improve the predictive performance based on the patient data. The idea of making use of adversarial networks is to learn a representation of the data points that is invariant to whether the data points come from the source domain or the target domain. Experiments on real-world data over four drugs demonstrate the effectiveness of the proposed methods compared to other methods that are not specifically designed for this scenario. The major utility of the proposed method demonstrated by the paper is its empirical effectiveness on real-world data with four drugs.
The reviewer has the following concerns:
1. While the proposed method is capable of handling adaptation of the output space between the source domain and the target domain, it makes use of a multi-task subnetwork component, which is a sensible modeling choice, but it seems that the idea of leveraging a global discriminator and a class-wise discriminator has been exploited in previous works, as pointed out by the authors in Section 4.2. Therefore, the reviewer is concerned that the modeling contribution made in this paper is somewhat incremental given the existing literature.
2. The multi-task subnetwork component itself also seems to be a straightforward and naive way to deal with the different tasks addressed in the source domain and the target domain. For one thing, only the data of the source domain are used to optimize the loss in the source domain and only the data of the target domain are used to optimize the loss in the target domain. While it is understandable that such a decision is due to the lack of ground-truth binary labels in the source domain and numeric responses in the target domain, ideally, the discriminative process might gain further benefits from borrowing information across the two domains.
3. It is also unclear from the paper how each component of the architecture contributes to the final performance. An ablation study that gets rid of the multi-task subnetwork, global discriminator, and class-wise discriminators (and potential combinations of these components) could help to provide better insights into the importance of each component and determine whether, empirically, these components do play a role coinciding with the description of the paper.
Other issues/questions:
1. Intuitively, what is the purpose of the shared layer g() in Section 3.2.2?
2. In the spirit of the class-wise discriminators, will it be helpful to also add a discriminator based on the value of IC50?
3. Concepts and terminology are not well explained, e.g. what is a cell line? The authors should also provide further descriptions of what the cell line and patient datasets look like early on in the paper. It is not until Section 3 that it becomes clear to the reader that, in the datasets considered in this paper, the source domain and the target domain have the same raw feature representation. Furthermore, in Section 2, the authors could consider providing a taxonomy in table format to better explain different types of transfer learning and the three approaches to inductive transfer learning.
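For context on points 1 and 2, the generic structure being discussed — a shared encoder, a source-vs-target domain discriminator, and separate task heads that each see only one domain's labels — looks roughly like the following sketch (all layer sizes, module names and the input dimension are illustrative assumptions, and the paper's class-wise discriminators are omitted for brevity; this is not the paper's architecture):

```python
import torch
import torch.nn as nn

enc  = nn.Sequential(nn.Linear(1000, 256), nn.ReLU(), nn.Linear(256, 64))  # shared feature extractor
disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))       # source-vs-target (global) discriminator
reg  = nn.Linear(64, 1)   # source task head: numeric response (e.g. IC50)
clf  = nn.Linear(64, 1)   # target task head: binary clinical response

mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()

def losses(x_src, y_src, x_tgt, y_tgt):
    z_s, z_t = enc(x_src), enc(x_tgt)
    # each task loss only ever sees the domain that carries that label type,
    # which is exactly the concern raised in point 2 above
    task_loss = mse(reg(z_s).squeeze(-1), y_src) + bce(clf(z_t).squeeze(-1), y_tgt)
    d_logits  = disc(torch.cat([z_s, z_t])).squeeze(-1)
    d_labels  = torch.cat([torch.ones(len(x_src)), torch.zeros(len(x_tgt))])
    adv_loss  = bce(d_logits, d_labels)
    return task_loss, adv_loss

# Training alternates: minimize adv_loss w.r.t. the discriminator, and minimize
# task_loss - lambda * adv_loss w.r.t. the encoder and task heads (or use a
# gradient reversal layer), pushing z_s and z_t to be indistinguishable.
```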
iclr_2020_BygJKn4tPr
Effective Mechanism to Mitigate Injuries During NFL Plays. Abstract - NFL (American football), which is regarded as the premier sports icon of America, has been severely accused in the recent years of being exposed to dangerous injuries that prove to be a bigger crisis as the players' lives have been increasingly at risk. Concussions, which refer to the serious brain traumas experienced during the passage of NFL play, have displayed a dramatic rise in the recent seasons concluding in an alarming rate in 2017/18. Acknowledging the potential risk, the NFL has been trying to fight via NeuroIntel AI mechanism as well as modifying existing game rules and risky play practices to reduce the rate of concussions. As a remedy, we are suggesting an effective mechanism to extensively analyse the potential concussion risks by adopting predictive analysis to project injury risk percentage per each play and positional impact analysis to suggest safer team formation pairs to lessen injuries to offer a comprehensive study on NFL injury analysis. The proposed data analytical approach differentiates itself from the other similar approaches that were focused only on the descriptive analysis rather than going for a bigger context with predictive modelling and formation pairs mining that would assist in modifying existing rules to tackle injury concerns. The predictive model that works with Kafka-stream processor real-time inputs and risky formation pairs identification by designing FP-Matrix, makes this far-reaching solution to analyse injury data on various grounds wherever applicable.
This paper introduces a machine learning pipeline for injury prediction in NFL events. The paper discusses several system settings on data streaming and processing, along with model selection and other hyper-parameter tuning details. The problem itself is very important. However, there are several weaknesses in the current version of the paper. First, the writing needs to be substantially improved. The description is very redundant and the text is very hard to read, which makes the paper much less clear to the reader. Second, the proposed system is not related to the focus of ICLR and it lacks novelty. Both the data processing and model selection methods are already well-known practices. The experiments are also not designed well enough to demonstrate the advantage of the proposed solution to this problem. Due to the above issues, I think the paper is not ready for publication.
iclr_2020_S1xitgHtvS
Reinforcement learning (RL) combines a control problem with statistical estimation: the system dynamics are not known to the agent, but can be learned through experience. A recent line of research casts 'RL as inference' and suggests a particular framework to generalize the RL problem as probabilistic inference. Our paper surfaces a key shortcoming in that approach, and clarifies the sense in which RL can be coherently cast as an inference problem. In particular, an RL agent must consider the effects of its actions upon future rewards and observations: the exploration-exploitation tradeoff. In all but the most simple settings, the resulting inference is computationally intractable so that practical RL algorithms must resort to approximation. We demonstrate that the popular 'RL as inference' approximation can perform poorly in even very basic problems. However, we show that with a small modification the framework does yield algorithms that can provably perform well, and we show that the resulting algorithm is equivalent to the recently proposed K-learning, which we further connect with Thompson sampling.
The paper at hand presents an alternative view on reinforcement learning as probabilistic inference (or equivalently maximum entropy reinforcement learning). With respect to other formulations of this view (e.g. Levine, 2018; I am referring to the references of the paper here), the paper identifies a shortcoming in the disregard of the agent's epistemic uncertainty (which seems to refer to the uncertainty with respect to the underlying MDP). It is argued that algorithms based on the prevailing probabilistic formulation (e.g. soft Q-learning) suffer from suboptimal exploration. The paper thus compares maximum entropy RL to K-learning (O'Donoghue, 2018), which is taken to address the issue of suboptimal exploration due to its temperature scheduling and its inclusion of state-action pair counts in the reward signal. As its technical contribution, the paper re-interprets K-learning via the latent variable denoting optimality employed in Levine (2018) and introduces a theorem bounding the distance between the policies of Thompson sampling and K-learning. Empirical validation of the claims is provided via experiments on an engineered bandit problem and a tabular MDP (i.e. DeepSea from Osband et al., 2017), as well as via soft Q-learning results on the recently suggested bsuite (Osband et al., 2019).
I consider this paper a weak reject. This is in light of me finding it very hard to follow the paper's main claims and arguments, even though it positions itself as communicating connections ("making sense") in prior work, rather than presenting a novel algorithm. While this is in part due to the complicated issue and math being discussed (and the paper probably catering to a very narrow audience), the paper in its current state does seem to hinder understanding as well.
On the positive side, I do appreciate the intention of the paper, namely to connect RL as probabilistic inference, Thompson sampling and K-learning. In my opinion, this can be taken as a valuable addition to the current understanding of these approaches. Also, I like the experiments as they are specifically constructed to support the claims of the paper.
On the negative side, vague language, missing assumptions and lax notation seem to hinder the understanding of the paper to a considerable extent: e.g. it is stated that "we connect the resulting algorithm with […] K-learning". However, I do not recognize a new algorithm being provided. Instead the paper argues in favor of K-learning. The assumptions that come with K-learning are not mentioned. The restriction of K-learning to tabular RL is taken to be understood implicitly (whereas RL as probabilistic inference seems applicable with function approximation also, which is not mentioned in the comparison). The paper always talks of shortcomings (plural) of RL as probabilistic inference, but only provides one argument (suboptimal exploration) with respect to this. RL as probabilistic inference is introduced in a different form than in the prior literature (i.e. Equation 6), and the derivation in the Appendix that bridges the differences in notation is hard to follow due to (maybe minor?) notational issues (e.g. x and y seem to have replaced s' and a; further down there is a reference to Equation 7, however it is probably meant to be 8, and even that with some leap in notation). The paper would benefit from better proof-reading, where mistakes in a very dense argumentation make it hard to follow (e.g. I do not understand the sentence "The K-learning expectation (7) is with respect to the posterior over Q[…] to give a parametric approximation to the probability of optimality.").
Literature-wise, the paper draws heavily from two unpublished papers (Levine, 2018; O'Donoghue, 2018). While this makes it harder to arrive at a high confidence level with respect to the paper's claims, I would not argue this to be critical.
I would consider raising my score if the authors improved the accessibility of the paper by polishing the argumentation and notation.
Confidence: low. It is very likely that I have misunderstood key arguments and derivations. Also, I did not attempt to follow all of the technical derivations.
====== post rebuttal comment: I changed the score of my review in light of the rebuttal. The changes made to the paper overall address my concerns. I do consider the additional explanations and re-phrasings as well as the improved notation a nice improvement of the paper. While I did not read all of the appendix, Section 5.1 is much more readable and understandable in the new version. In light of this paper probably being published, I share some typos/inconsistencies I still noticed:
p. 4: the solving for -> solving for the
p. 7: s_{h+1} -> s' (in Table 3) ?
p. 7: table -> Table; tables -> Tables
p. 7: (Fix position of K:) \pi_h(s)^K -> \pi_h^K(s) ((also in Appendix))
p. 9: (2x) soft-Q learning -> soft Q-learning; Q Networks -> Q-Networks; Soft Q -> soft Q-learning
iclr_2020_SJgXs1HtwH
Program comprehension is a fundamental task in software development and maintenance processes. Software developers often need to understand a large amount of existing code before they can develop new features or fix bugs in existing programs. Being able to process programming language code automatically and provide summaries of code functionality accurately can significantly help developers to reduce time spent in code navigation and understanding, and thus increase productivity. Different from natural language articles, source code in programming languages often follows rigid syntactical structures and there can exist dependencies among code elements that are located far away from each other through complex control flows and data flows. Existing studies on tree-based convolutional neural networks (TBCNN) and gated graph neural networks (GGNN) are not able to capture essential semantic dependencies among code elements accurately. In this paper, we propose novel tree-based capsule networks (TreeCaps) and relevant techniques for processing program code in an automated way that encodes code syntactical structures and captures code dependencies more accurately. Based on evaluation on programs written in different programming languages, we show that our TreeCaps-based approach can outperform other approaches in classifying the functionalities of many programs.
The paper proposes a neural architecture for summarizing trees inspired by capsule networks from computer vision. The authors re-use a tree convolution from previous work for the bottommost layer, and then propose adaptations to the dynamic routing from capsule networks so that it can be applied to variable-sized trees. The paper applies the proposed architecture to three different program classification datasets, which are in three different languages. The paper reports empirical gains compared to two architectures proposed by previous work.
I think that it is interesting to apply the capsule network architecture to tree classification, but unfortunately some of the motivation for capsule networks on images does not seem to transfer neatly to this setting; for example, there is no equivalent of inverse graphics as there is no reconstruction loss (as pointed out by the authors in Section 6.4). Also, the variable-to-static capsule routing indeed appears novel, but I was a bit confused by its internal details. It appears that the outputs of the previous layer which occur most often will get routed (considering lines 6-8 of Algorithm 1, which up-weight each of the $\hat{u}_i$ based on its similarity to $v_j$; the $v_j$ are initially a re-numbered subset of $\hat{u}_i$), without any prior transformation of the previous layer first. It seems to me that this doesn't allow the prior layer to predict more complex features about the input that the subsequent layer is expected to capture. In fact, for certain code classification tasks, it may be that rare capsule outputs from the initial layer are the most important to preserve.
My biggest concern has to do with the empirical results. The source of Dataset C (Mou et al. 2016, https://arxiv.org/pdf/1409.5718.pdf) reports 94.0% accuracy in Table 3 for their TBCNN method on the same dataset, whereas this paper reports 79.40% accuracy for TBCNN. I understand that the latter result comes from a reimplementation, but it seems fairer to compare against (or additionally report) the results from the original authors of the method. Also, the paper cites ASTNN (Zhang et al. 2019, https://dl.acm.org/citation.cfm?id=3339604) in the introduction, and even though that paper reports (in Table 2) 98.2% accuracy on Dataset C, the results table of the paper under review does not mention this in the evaluation section. I don't think that a paper necessarily has to achieve empirical results beating all previous ones in order to merit acceptance, but the way that the comparison is currently set up doesn't seem to facilitate a clear comparison of the pros and cons of this method versus other ones in the literature.
For the above reasons, I vote to reject the paper. For future submissions, it would be good to see a more comprehensive empirical comparison of the proposed method with others, and also to have more explanations about the design of the network.
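For reference, and to make the point about the missing per-pair transformation concrete, this is the standard routing-by-agreement update of Sabour et al. (2017) — not the paper's exact variable-to-static variant — in which the routed quantities are the learned predictions $\hat{u}_{j|i} = W_{ij} u_i$ rather than raw outputs of the previous layer:

```python
import numpy as np

def squash(s, eps=1e-8):
    n2 = np.sum(s * s, axis=-1, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def routing_by_agreement(u_hat, num_iters=3):
    # u_hat: [num_in, num_out, dim]; in Sabour et al. these are the *transformed*
    # predictions W_ij @ u_i -- the learned per-pair transformation whose absence
    # in Algorithm 1 the review comments on.
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                      # routing logits
    for _ in range(num_iters):
        b_shift = b - b.max(axis=1, keepdims=True)
        c = np.exp(b_shift) / np.exp(b_shift).sum(axis=1, keepdims=True)  # coupling coeffs
        v = squash((c[..., None] * u_hat).sum(axis=0))   # [num_out, dim] output capsules
        b = b + (u_hat * v[None]).sum(axis=-1)           # agreement: dot(u_hat_ij, v_j)
    return v
```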
iclr_2020_SJgn464tPB
In recent years, advances in deep learning have enabled the application of reinforcement learning algorithms in complex domains. However, they lack the theoretical guarantees which are present in the tabular setting and suffer from many stability and reproducibility problems [10]. In this work, we suggest a simple approach for improving stability in off-policy actor-critic deep reinforcement learning regimes. Experiments on continuous action spaces, in the MuJoCo control suite, show that our proposed method reduces the variance of the process and improves the overall performance across most domains.
[Summary]
This paper proposes an approach called conservative policy gradients to stabilize the training of deep policy gradient methods. At fixed intervals, the current policy and a separate target policy are evaluated with a number of rollouts. The target policy is then updated to match the current policy only if the current policy is better than the target policy. Experiments show that the proposed method, when applied to TD3, reduces the variance in performance throughout training.
[Decision]
I am not convinced that the proposed method is sound and indeed useful, and I vote for rejecting this paper. Experiments show stable performance. However, this stability comes at the cost of extra computation and interaction with the environment (to evaluate the policies). Claims about the method's stability guarantees and overall performance are not supported by theory and experiments. The submitted paper also needs major improvements in presentation.
[Explanation]
While Proposition 1 provides insight into how the policy evolves, it is too limited to serve as a guarantee. First, performance does not improve or degrade by a constant number. Second, the time it takes for the policy to improve is not captured by the theory. In reality, this time can depend on the hyperparameters or the policy's current performance and might even be unbounded. I do not understand why a characterization of the performance in the limit of time in Proposition 1 is called a stability guarantee while in the rest of the paper stability refers to consistent improvements in the interim performance. Does stability in this proposition mean that the performance will reach a stationary distribution with bounded support? This property is merely a result of the assumptions that the performance evolves by a constant number at bounded times and that it does not exceed [v_min, v_max].
The theory studies the stationary distribution of the target policy's performance, but the algorithm uses the online policy to interact with the environment. In Algorithm 5, line 8 (Section D in the Appendix) the target policy is only used for bootstrapping. How can a stable target policy result in more stable performance if it is not used to take actions?
The paper claims that the proposed method results in improvements in stability and overall performance. In Figure 3, the proposed method is more stable than the baseline but the overall performance is not better. The proposed method requires more computation and interaction with the environment than the baseline. The experiments do not seem to compare these two methods with the same number of samples or with the same amount of computation. Perhaps the extra computation and samples are better spent on training TD3 for a longer time.
I find Section 2 hard to follow. This section describes Value Iteration, Policy Iteration, DQN, and DDPG in detail (with pseudocode) along with their convergence rates. The message that deep RL algorithms generally lack theoretical guarantees can be conveyed by just describing the linear and deep variants of one method. In fact, the algorithm whose stability is analyzed in the next sections, TD3, is not described in Section 2 or anywhere else in the paper. Later in Section 3, DDPG and DQN are described as off-policy Deep RL variants of Value Iteration and Policy Iteration. DQN and DDPG are actually built on Q-learning and Deterministic Policy Gradient (DPG).
[Minor comments]
In the learning curves in Figure 1, what is the measure of performance and how is it estimated? A description of the plotted measure is necessary to show that the drops in the estimated performance are indeed due to policy degradation rather than poor estimation.
--------------------
After rebuttal: I have read the authors' response, the other reviews, and the revision. The revised version has improved presentation, but the proposed method is still introduced as a method with stability guarantees, while the proposition in the paper cannot serve as a stability guarantee and can only provide intuition on the asymptotic performance.
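For clarity about what is actually being compared above, the evaluate-and-copy rule from the [Summary] can be sketched as follows (function and argument names are illustrative, not the paper's code):

```python
import copy

def conservative_target_update(online_policy, target_policy, evaluate, num_rollouts=10):
    # The target policy is overwritten by the online policy only when the online
    # policy scores better over a batch of evaluation rollouts.
    j_online = evaluate(online_policy, num_rollouts)   # e.g. mean undiscounted return
    j_target = evaluate(target_policy, num_rollouts)
    return copy.deepcopy(online_policy) if j_online > j_target else target_policy

# Called every K training iterations inside an otherwise standard TD3-style loop;
# note that the two evaluations cost extra environment interaction, which is part
# of the fairness concern about the comparison raised above.
```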
iclr_2020_r1e0G04Kvr
Deep graph generation models have achieved great successes recently; however, these are typically unconditioned generative models that have no control over the target graphs given an input graph. In this paper, we propose a novel Graph-Translation Generative Adversarial Network (GT-GAN) that transforms the input graphs into their target output graphs. GT-GAN consists of a graph translator equipped with innovative graph convolution and deconvolution layers to learn the translation mapping considering both global and local features. A new conditional graph discriminator is proposed to classify the target graphs by conditioning on input graphs while training. Extensive experiments on multiple synthetic and real-world datasets demonstrate that our proposed GT-GAN significantly outperforms other baseline methods in terms of both effectiveness and scalability. For instance, GT-GAN performs at least 10 and 15 times faster than GraphRNN and RandomVAE, respectively, when the size of the graph is around 50.
This paper studies a problem of graph translation, which aims at learning a graph translator to translate an input graph to a target graph. The authors propose an adversarial training framework to learn the graph translator, where a discriminator is trained to discriminate between the true target graph and the translated graph, and the translator is optimized by fooling the discriminator. The authors conduct experiments on both synthetic and real-world datasets. The results prove the effectiveness and the efficiency of the proposed approach over many baselines.
Strengths:
1. The problem is new and well-motivated. Data translation is an important problem and has been widely studied in many research domains such as computer vision and natural language processing. Despite the importance, the problem has not been thoroughly explored in the graph domain, and most existing studies only focus on standard graph generation problems. In this sense, this paper studies a very new problem, which is quite novel. Moreover, the problem is important, which can have many potential downstream applications on graph data. Overall, the problem is new and well-motivated.
2. The proposed approach is quite intuitive. The paper proposes an adversarial training approach to the problem, where a graph translator is learned based on a graph discriminator. During training, the graph discriminator aims at discriminating between the true target graph and the translated graph conditioned on an input graph, and the graph translator is trained by fooling the discriminator. The graph translator is built on top of an encoder-decoder framework, where the encoder and decoder are parameterized by graph neural networks. Overall, the proposed method is quite reasonable and easy to follow.
3. The results are promising. The authors conduct extensive experiments on both synthetic and real-world datasets, and compare the proposed approach against many strong baseline methods for graph generation. The results are quite promising, which prove both the effectiveness and the efficiency of the approach.
Weaknesses:
1. The novelty of the proposed approach is limited. The proposed approach is mainly built on top of the adversarial training framework, where a graph neural network is used to parameterize the graph translator. For adversarial training, although it is very intuitive, such a framework has been widely explored in the data translation problem in other domains, such as image style transfer in CV and text style transfer in NLP. Compared with these works, although the proposed approach studies a new problem, the major idea is the same as in the existing studies. For the graph encoder and graph decoder, they are designed based on the idea of graph neural networks, where some propagation layers are designed to propagate information across different nodes. Although the propagation layers are specifically designed for the graph translation problem, I feel like they are not so different from existing studies (e.g., message passing neural networks, graph U-Net). Therefore, model-wise, this paper combines several existing ideas, but does not provide new insights or techniques, so the contribution is quite limited.
2. The writing can be further improved. The paper is not very well-written. Some parts of the paper are quite hard to follow, and the intuition behind the approach is not well explained. In Section 3.2, it is said that "the approach learns global information by looking for more virtual neighbors regarding the latent relations". Here, it is unclear to me what a virtual neighbor is, and what a latent relation is. The authors try to illustrate their idea in Figure 2, but the figure is also quite hard to understand. It would be better if the authors could explain the idea of the encoder in a more intuitive way, or give more concrete examples for illustration. Besides, Equations (4), (5) and (7) are also hard to understand. The notations in these equations are quite messy, where multiple indices are used (e.g., i, j, k, l, m, n), and the intuition underlying the equations is not well explained, making it hard to understand how the encoder and the decoder work.
Also, there are many typos in the paper. For example:
In the directed graph, each node have incoming edge(s) and out-going edge(s) -> In the directed graph, each node has incoming edge(s) and out-going edge(s)
in the "node comvolution" layer -> in the "node convolution" layer
First, the "node deconvolution" layer are used to generates -> First, the "node deconvolution" layer is used to generate
The caption of Table 1 says that the table shows the node degree distribution distance, but from the main body of the text, only four metrics are about the distribution distance, which is inconsistent with the caption.
Overall, the intuition of the proposed approach is not well explained, and there are many typos to be fixed, so I feel like the writing of the paper should be further improved.
iclr_2020_ryeQmCVYPS
Robustness of convolutional neural networks has recently been highlighted by the adversarial examples, i.e., inputs added with well-designed perturbations which are imperceptible to humans but can cause the network to give incorrect outputs. Recent research suggests that the noises in adversarial examples break the textural structure, which eventually leads to wrong predictions by convolutional neural networks. To help a convolutional neural network make predictions relying less on textural information, we propose defective convolutional layers which contain defective neurons whose activations are set to be a constant function. As the defective neurons contain no information and are far different from the standard neurons in their spatial neighborhood, the textural features cannot be accurately extracted and the model has to seek other features for classification, such as the shape. We first show that predictions made by the defective CNN are less dependent on textural information, but more on shape information, and further find that adversarial examples generated by the defective CNN appear to have semantic shapes. Experimental results demonstrate the defective CNN has higher defense ability than the standard CNN against various types of attack. In particular, it achieves state-of-the-art performance against transfer-based attacks without applying any adversarial training.
The paper deals with robustness against adversarial attacks. It proposes to blank out large parts of the early convolution layers in a CNN, in an attempt to shift the focus from "texture" to "shape" features. This does seem to improve robustness against adversarial examples, with only a small decrease in general classification performance. The explanation for this, on the other hand, is not really convincing.
The idea is simple: adversarial noise introduces high-frequency texture patterns, so destroy those by blanking out a large portion of the neurons in a layer. Quite obviously, this can have an influence - when blanking out 90% of the pixels (as suggested in the paper), the effective sampling resolution goes down by a factor of 3 in each axis, and high-frequency patterns are a lot less likely to be picked up. It does, however, remain unclear why this approach is particularly useful, or why it even works at all. On the one hand, an easier way to surely get rid of those patterns is simply to blur the images accordingly before feeding them to the network. That baseline is missing. On the other hand it is quite an outrageous claim that one can throw away 90% of the responses in the early conv layers with hardly any performance loss. I don't doubt the experimental results, but if one discovers something like that, it needs to be explained. The network has a lot less capacity, effectively loses a factor 3 in resolution, but performance seemingly stays almost the same!
This brings me to another point. It is never really defined what is meant by "texture" respectively "shape". By reverse-engineering from the paper text I gather that "shape" is simply texture at a significantly lower resolution. But then how does destroying "texture" affect objects with significantly lower size/scale in the image?
A few technical questions remain unclear. First, the adversarial noise in the paper looks a lot stronger than normal. It is easily visible, and in that sense not "adversarial". In fact the paper openly states that they "even fool humans". So since the labels are human-annotated, these are in fact not adversarial examples of class A, but examples of a different class wrongly labeled as class A... Another question is how much "texture" is lost, since the paper finds it important to use a different random mask for each filter in a layer. So does that really suppress so much texture? Almost all pixels will be seen by some non-defective filters, so it would seem that the hi-res information is implicitly still there. To really suppress texture it would seem more effective to always use the same mask, but that apparently does not work. Why?
What completely confused me was which networks were actually used to generate the adversarial examples. Are these adversarial against the standard CNN or against the defective one? If defective convolutions indeed become popular, then an attacker would obviously know about that and also use a defective network to generate his adversarial examples.
One thing I did not understand is the incoherent mixture of datasets. The first experiment with the reshuffled tiles is done on ImageNet. But then the actual experiments regarding robustness against attacks are done only with Cifar-10. Why suddenly switch? And then, many of the visual examples are from TinyImageNet. Why switch again? And on that account, since apparently all of them were processed, is the behaviour consistent across datasets?
As an aside, I am not sure it is a good idea to treat additive Gaussian noise in the same way as adversarial patterns. Some level of noise that is at least approximately Gaussian is present in almost all images. So it is actually a good thing if a network learns the magnitude of that noise, so as to separate it from the signal, i.e., the brightness variations that are informative and not Gaussian noise. In that view it is a good thing, not a weakness, if adding noise of the wrong magnitude misleads the network (although, ideally, it should of course flag the image as being out-of-distribution).
In summary, I find the results interesting - in particular also the one on tiled and reshuffled images. But I am at this point not convinced by the explanations. If one can indeed throw away 90% of all responses in the low layers then that would be a rather big thing that needs an explanation. Unless the task is easy enough to be solved with 3x lower resolution - but in that case I would expect that simply reducing the resolution would also destroy the adversarial pattern. I am on the fence, but in its current state the paper leaves too many open questions.
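For readers unfamiliar with the mechanism under discussion, a defective layer as described above can be sketched roughly as follows (a simplified illustration with assumed hyperparameters — kernel size, mask resolution and keep probability are not taken from the paper):

```python
import torch
import torch.nn as nn

class DefectiveConv2d(nn.Module):
    """Rough sketch of the idea: a binary mask sampled once at construction --
    not re-sampled per forward pass, unlike dropout -- zeroes out ~90% of each
    output feature map, with an independent mask per output channel."""
    def __init__(self, in_ch, out_ch, feat_hw, keep_prob=0.1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        mask = (torch.rand(1, out_ch, feat_hw, feat_hw) < keep_prob).float()
        self.register_buffer("mask", mask)   # fixed at both train and test time

    def forward(self, x):
        # feat_hw must match the spatial size of the conv output
        return self.conv(x) * self.mask      # "defective" positions output a constant 0
```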
iclr_2020_r1lIKlSYvH
In narrow asymptotic settings Gaussian VAE models of continuous data have been shown to possess global optima aligned with ground-truth distributions. Even so, it is well known that poor solutions whereby the latent posterior collapses to an uninformative prior are sometimes obtained in practice. However, contrary to conventional wisdom that largely assigns blame for this phenomenon to the undue influence of KL-divergence regularization, we will argue that posterior collapse is, at least in part, a direct consequence of bad local minima inherent to the loss surface of deep autoencoder networks. In particular, we prove that even small nonlinear perturbations of affine VAE decoder models can produce such minima, and in deeper models, analogous minima can force the VAE to behave like an aggressive truncation operator, provably discarding information along all latent dimensions in certain circumstances. Regardless, the underlying message here is not meant to undercut valuable existing explanations of posterior collapse, but rather, to refine the discussion and elucidate alternative risk factors that may have been previously underappreciated.
Summary: This paper is clearly written and well structured. After categorizing difference causes of posterior collapse, the authors present a theoretical analysis of one such cause extending beyond the linear case covered in existing work. The authors then extended further to the deep VAE setting and showed that issues with the VAE may be accounted for by issues in the network architecture itself which would present when training an autoencoder. Overall: 1) I felt that Section 3 which introduces categorizations of posterior collapse is a valuable contribution and I expect that these difference forms of posterior collapse are currently under appreciated by the ML community. I am not certain that the categorization is entirely complete but is nonetheless an excellent step in the right direction. One source of confusion for me was the difference between sections (ii) and (v) --- in particular I believe that (ii) and (v) are not mutually exclusive. Additionally, the authors wrote "while category (ii) is undesirable, it can be avoided by learning $\gamma$". While this is certainly true in the affine decoder case it is not obvious that this is true in the non-linear case. 2) Section 4 provides a brief overview of existing results in the affine case and introduces a non-linear counter-example showing that local minima may exist which encourage complete posterior collapse. 3) On the proof of Proposition 4.1. In A.2.1 you prove that there exists a VAE whose ELBO grows infinitely (exceeding the local maxima of (7)). While I have been unable to spot errors in the proof, something feels odd here. In particular, the negative ELBO should not be able to exceed the entropy of the data which in this case should be finite. I've been unable to resolve this discrepancy myself and would appreciate comments from the authors (or others). The rest of the proof looks correct to me. 4) I felt that section 5 was significantly weaker than the rest of the paper. This stemmed mostly from the fact that many of the arguments were far less precise and less rigorous than those preceding. I think the presentation of this section could be significantly improved by focusing around Proposition 5.1. a) Section 5 depends on the decoder architecture being weak, though this is not clearly defined formally. I believe this is a sensible restriction which enables analysis beyond the setting of primary concern in Alemi et al. (and other related work). b) In the third paragraph, you write "deep AE models can have bad local solutions with high reconstruction [...]". I feel that this doesn't align well with the discussion in this section. In particular, I believe it would be more accurate to say that IF the autoencoder has bad local minima then the VAE is also likely to have category (v) posterior collapse. c) Equation (8) feels a little too imprecise. Perhaps this could be formalized through a bias-variance decomposition of the right hand side similar to Rolinek et al.? d) The discussion of optimization trajectories was particularly difficult to follow. It is inherently difficult to reason about the optimization trajectories of deep auto-encoding models and is potentially dangerous to do so. For example, perhaps the KL divergence term encourages a smoother loss landscape and encourages the VAE to avoid the local stationary points that the auto-encoder falls victim to. e) It is written, "it becomes clear that the potential for category (v) posterior collapse arises when $\epsilon$ is large". 
This is not clear to me and in fact the analysis seems more indicative of collapse presented in category (ii) (though as mentioned above, I am not convinced these are entirely separate). Similarly, later in this section it is written, "this is more-or-less tantamount to category (v) posterior collapse". I was also unable to follow this reasoning. f) "it is actually the AE base architecture that is effectively the guilty party when it comes to posterior collapse". If the conclusions are to be believed, this only applies to category (v) collapse. g) Unfortunately, I did not buy the arguments surrounding KL annealing at the end of section 5. In particular, KL warm start will change the optimization trajectory of the VAE. It is possible that the VAE has a significantly worse loss landscape than the autoencoder initially and so warm-start may enable the VAE to escape this difficult initial region. Minor: - The term "VAE energy" used throughout is not typical within the literature and seems less explicit than the ELBO (e.g. it overlaps with energy based models). - Equation (4) is missing a factor of (1/2). - Section 3, in (ii), typo: "assumI adding $\gamma$ is fixed", and "like-likelihood". In (v), typo: "The previous fifth categories" - Section 4, end of para 3, citep used instead of citet for Lucas et al. - Section 4, eqn 6 is missing a factor of 1/2 and a log(2pi) term. - Section 5, "AE model formed by concatenating" I believe this should be "by composing". - Section 5, eqn 10, the without $\gamma$ notation is confusing and looks as though the argmin does not depend on gamma. Presumably, it would make more sense to consider $\gamma^*$ as a function of $\theta$ and $\phi$. - Section 5 "this is exactly analogous". I do not think this is _exactly_ analogous and would recommend removing this word.
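For reference on the constant factors flagged in the minor comments (the missing $1/2$ and $\log 2\pi$), the standard per-sample Gaussian-decoder term of the negative ELBO with observation variance $\gamma$ and data dimension $d$ is $-\log p_\theta(x|z) = \frac{1}{2\gamma}\|x - \mu_\theta(z)\|_2^2 + \frac{d}{2}\log\gamma + \frac{d}{2}\log 2\pi$, and the KL term for a diagonal Gaussian posterior $q(z|x) = \mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$ against $\mathcal{N}(0, I)$ is $\frac{1}{2}\sum_j(\sigma_j^2 + \mu_j^2 - 1 - \log\sigma_j^2)$. This is my own reference derivation of the textbook quantities, not a restatement of the paper's equations (4) and (6).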
iclr_2020_Bkgq9ANKvB
Learning with noisy labels is a common problem in supervised learning. Existing approaches require practitioners to specify noise rates, i.e., a set of parameters controlling the severity of label noise in the problem. The specifications are either assumed to be given or estimated using additional approaches. In this work, we introduce a technique to learn from noisy labels that does not require a priori specification of the noise rates. In particular, we introduce a new family of loss functions that we name peer loss functions. Our approach then uses a standard empirical risk minimization (ERM) framework with peer loss functions. Peer loss functions associate each training sample with a certain form of "peer" samples, which evaluate a classifier's predictions jointly. We show that, under mild conditions, performing ERM with peer loss functions on the noisy dataset leads to the optimal or a near optimal classifier as if performing ERM over the clean training data, which we do not have access to. To the best of our knowledge, this is the first result on "learning with noisy labels without knowing noise rates" with theoretical guarantees. We pair our results with an extensive set of experiments, where we compare with state-of-the-art techniques of learning with noisy labels. Our results show that the peer-loss-based method consistently outperforms the baseline benchmarks, as well as some recent new results. Peer loss provides a way to simplify model development when facing potentially noisy training labels, and can be promoted as a robust candidate loss function in such situations.
This paper proposed the peer loss function for learning with noisy labels, combining the two areas of learning with noisy labels and peer prediction. The novelty and the significance are both borderline (or below). There are 4 major issues I have found so far. References: Looking at Section 1.1, the related work, the references are a bit too old. While I am not sure about the area of peer prediction, in the area of learning with noisy labels (in a general sense), there were often 10 to 15 papers from every NeurIPS, ICML, ICLR and CVPR in recent years. The authors didn't survey the literature after 2016 at all... Nowadays most papers focus on sample selection/reweighting and label correction rather than loss correction in this area, but there are still many recent papers on designing more robust losses, see https://arxiv.org/abs/1805.07836 (NeurIPS 2018 spotlight), https://openreview.net/forum?id=rklB76EKPr and references therein. Note also that some label-noise related papers may not have the term label noise or noisy labels in the title, for example, https://openreview.net/forum?id=B1xWcj0qYm (ICLR 2019). Motivation: The motivating claim "existing approaches require practitioners to specify noise rates" is wrong... Many loss correction methods can estimate the transition matrix T (which is indispensable in any loss correction) without knowing the noise rate, when there are anchor points or even no anchor points in the noisy training data. See https://arxiv.org/abs/1906.00189 (NeurIPS 2019) and references therein. See also the public comment posted by Nontawat about the case when a special symmetric condition is assumed on the surrogate loss function. Novelty: The paper introduced peer prediction, an area in computational economics and algorithmic game theory, to learning with noisy labels. This should be novel (to the best of my knowledge) and I like it! However, the obtained loss is very similar to the general loss correction approach, see https://arxiv.org/abs/1609.03683 (CVPR 2017 oral). This fact undermines the novelty of the paper, significantly. The authors should clarify the connection to and the difference from the loss correction approach. Significance: The proposed method focuses on binary classification, otherwise the paper would be much more significant! Note that the backward and forward corrections can both be applied to multi-class classification. Moreover, similar to many theory papers, the experiments are too simple, where single-hidden-layer neural networks were trained on 10 UCI benchmark datasets. I have to say this may not be enough for ICLR, which is more of a deep learning conference.
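To make the connection to loss correction easier to discuss, here is a rough sketch of what a peer-style loss could look like in code, based only on the abstract's description of pairing each sample with randomly drawn "peer" samples; the exact pairing scheme and the weight alpha are my assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def peer_loss(logits, labels, alpha=1.0):
    # Hedged sketch: for each example, subtract a "peer" term built from a
    # randomly drawn input (via its logits) and an independently drawn label,
    # so that fitting spurious input-label pairings is penalized. The pairing
    # and alpha are illustrative assumptions.
    n = labels.shape[0]
    base = F.cross_entropy(logits, labels)
    perm_x = torch.randperm(n)   # random peer inputs
    perm_y = torch.randperm(n)   # independently drawn peer labels
    peer = F.cross_entropy(logits[perm_x], labels[perm_y])
    return base - alpha * peer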
iclr_2020_Byx_YAVYPH
Machine learning has shown growing success in recent years. However, current machine learning systems are highly specialized, trained for particular problems or domains, and typically on a single narrow dataset. Human learning, on the other hand, is highly general and adaptable. Never-ending learning is a machine learning paradigm that aims to bridge this gap, with the goal of encouraging researchers to design machine learning systems that can learn to perform a wider variety of inter-related tasks in more complex environments. To date, there is no environment or testbed to facilitate the development and evaluation of never-ending learning systems. To this end, we propose the Jelly Bean World testbed. The Jelly Bean World allows experimentation over two-dimensional grid worlds which are filled with items and in which agents can navigate. This testbed provides environments that are sufficiently complex and where more generally intelligent algorithms ought to perform better than current state-of-the-art reinforcement learning approaches. It does so by producing non-stationary environments and facilitating experimentation with multi-task, multi-agent, multi-modal, and curriculum learning settings. We hope that the Jelly Bean World will prompt new interest in the development of never-ending learning, and more broadly general intelligence.
Summary This paper introduces a new environment for testing lifelong or never-ending learning. The goal of the environment is to act as a new benchmark testbed for challenging existing agents and models across areas of research, encouraging and pushing new research towards solving challenges in curriculum learning, exploration, representation learning, and continual learning. The contributions in this paper extend upon previous work by building an easily controllable environment generator with key necessary features for lifelong learning including: non-stationarity, multiple task specification, and multiple sets of observable features. Review The paper highlights many key characteristics of an environment that are challenging to current RL models. This focus on building a benchmark upon which further research can measure performance is important. I find the proposed environment to be incredibly intriguing and would find it valuable to the field of lifelong learning (or continual learning or never-ending learning, etc.). I think the size and scope of the environment generator is impressive, showing a considerable amount of engineering effort has gone into its design. The largest overarching issue that I would like to point out is the limited study of modelling choices. I am not an expert on applied Reinforcement Learning, so I can make very few claims about the validity of the chosen network architecture or use of the PPO policy-gradient algorithm for this environment. However, it is critical, in my view, that a paper introducing a new environment studies these effects itself; demonstrating how various degrees of learning capacity or wider ranges of learning algorithms behave in the given environment. If a slightly larger network architecture trivially solves each task in this environment, can this still be considered a benchmark task? A key result in the paper that I would like to see further investigated (even with only a different network architecture) would be Figure 6, the comparison between scent, vision, and vision+scent. It is unclear to me why the scent features would be so challenging to learn from and specifically why they would harm the representation so permanently. A deeper study using only the scent features would be valuable to me. In its current state, it appears that these feature provide no additional information and are thus not necessary to include in the environment; breaking one of the primary motivating features of JBW: the multi-modality. I recognize that the paper comments on the orthogonality of the scent playing a role, and notes that further results are included on a not-yet-available website (presumably to maintain anonymity). However, I would like to see these results included in the appendix of the paper so I could better assess the utility of the scent features. Perhaps an additional result showing the average reward versus the cosine distance (or other measure of orthogonality) between "jellybeans" and "onions" would additionally motivate the utility of the scent features. The paper empirically investigates the use of curriculum learning to accelerate learning for a particular task. The paper then claims that curriculum learning improves learning speed, but ultimately does improve final performance. This demonstration is intended to showcase the use of the proposed environment (JBW) for curriculum learning. However, there are few key issues with this empirical study. 
First, the paper shows the reward rate of 3 different curricula but does mention the metric used to compare the agents during the time the curricula is active. It is implied that the metric is the reward rate of each individual agent; however, each agent has a unique reward function making comparisons between agents impossible. Curriculum #2 can only receive positive rewards while Curriculum #1 can only receive negative rewards. Naturally this means that Curriculum #2 must have strictly greater or equal reward rate over Curriculum #1. Even in the case that the final objective specifies the metric used, these are still highly non-comparable entities. A suggestion to improve this result would be to run each curricula for 100k as a "pretraining" phase, then to restart the agents to the same state in the environment and measure their performance from there. The case study measuring the effects of non-stationarity of the rewards does not provide sufficient evidence that the proposed environment contributes a novel ability to investigate non-stationarity. First, the given study of non-stationarity focuses solely on an alternating reward function, clearly demonstrating the problem of catastrophic forgetting. While this is a motivating demonstration, it is not novel and the issue of catastrophic forgetting in our models has been known since at least the 90s (e.g. French 1999 and related). Carefully and scientifically investigating such an issue is best done in a far less complex environment where more precise results can be drawn. Further, the ability to oscillate a reward function in this way is not unique to this environment and can be trivially done in most environments. Secondly, it is unclear if JBW allows for non-stationarity in the transition probabilities in the MDP. This is a critical component to non-stationarity and would be a necessary feature for me to claim non-stationarity is widely supported in the environment. The paper starts with a motivating conversation about environment complexity, with interesting insights into measuring the complexity of an environment based on the complexity of the policy used to solve that environment. However this conversation is ignored until the conclusion of the paper, where the paper claims to have built an environment of greater complexity than already existing environments. Without any supporting evidence in the body of the paper, it is impossible to verify the validity of this statement, and it is still an open question to me whether this claim is even falsifiable in the first place. As a concrete counter-claim, I would claim that the Minecraft environment (Malmo) has similar or higher complexity to the proposed environment in most aspects. Minecraft has a far greater diversity of objects, a third dimension of movement, adversarial components, hunger and health, etc. each of which adding a large level of complexity not achievable in the proposed environment. This is not to say that I expect the proposed environment to contain these features, but rather to point out that claims of greater complexity may be ill-founded. Additional Comments (not affecting score) I do slightly question if ICLR is the appropriate venue for such work. While I recognize that the scope of this conference has shifted considerably over the past few years, this paper (as written) does not further understanding or study of learning representations. 
I believe a more careful demonstration of the representation induced by characteristics of the environment is within easy reach of the paper, but is not currently presented. ----------- After the author response, reading other reviews/responses, and looking at the edited draft: I am convinced of the utility of the domain, the scope of the engineering effort put into building, and the ease with which it can be configured by the user to test many applicable settings (partial observability, stochasticity in transitions and rewards, etc.). I remain slightly skeptical of the amount of benefit the proposed provides over the Malmo environment for any of the settings discussed in the case-studies. I specifically feel my concerns about the stochasticity in the transitions and environment complexity have been well addressed. My concerns about the curriculum learning demonstration are partially addressed to a point where I am satisfied. My concerns about the modeling choice are also partially satisfied, with one lingering concern. I am unclear if the environment is trivially solvable by using more computation resources (e.g. bigger networks). However, after reconsideration I decided this concern bares less weight than I previously considered. All this considered, I am changing my rating from 3 -> 6.
iclr_2020_Bkxe2AVtPS
Training with a larger number of parameters while keeping fast iterations is an increasingly adopted strategy and trend for developing better-performing Deep Neural Network (DNN) models. This necessitates increased memory footprint and computational requirements for training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out-of-the-box for representative models: ResNet-50, Transformer and NCF. The method can maintain model accuracy without requiring fine-tuning loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors - shifted and squeezed factors - that are used to optimally adjust the range of the tensors in 8 bits, thus minimizing the loss of information due to quantization.
There has been a great deal of interest and research into reduced numerical precision of weights, activations and gradients of neural networks. If, for example, 16 bit floating point can be used instead of 32 bit floating point, then the memory bandwidth is halved along with significant gains in computational performance. In this work the authors propose an 8-bit floating point format (denoted S2FP8) for tensors. Computing activations and gradients with such low precision at training time has generally proved challenging without a number of tricks such as scaling the loss of each minibatch to within a reasonable range. Such "tricks" can be difficult to tune for each problem. The key idea here is that for each tensor of 8-bit numbers, two 32 bit floating point statistics are recorded as well. These determine (in log-space) a scale and an offset for the 8-bit numbers (eq 1). This means that in this format tensors of significantly different scales can be well-represented (although larger scales necessarily imply lower precision). Matrix multiplications are done in FP32 precision and then converted to S2FP8 format. This requires an additional step to accumulate the summary statistics of each tensor in order to convert from FP32 to S2FP8 (the mean and the max of the tensor elements in log space). The weights of the network are stored in FP32 and the gradients and activations are computed in S2FP8 and used to update the weights. They test this approach in ResNet, a small transformer and an MLP for collaborative filtering. They find it reaches similar performance to FP32 where the standard FP8 format has worse performance or results in NaNs. Improvements in computational efficiency, both at training and inference, are active areas of research and this work contributes a novel approach using summary statistics. However, there are several ways this work could be improved. 1. There are no comparisons with bfloat16, which is becoming a widely used approach to lower precision and is gaining significant hardware support [1]. 2. Discussion and analysis are missing regarding the need to gather summary statistics after each matrix multiplication (or other tensor operation). It is claimed that this brings minimal HW complexity, but this doesn't seem well justified. For a large tensor, this additional reduction to compute statistics may be expensive (in memory bandwidth and computation), particularly since this is done with FP32. 3. Even with the current implementation on a GPU, it should be possible with kernel fusions to gain some significant savings in memory bandwidth (and therefore computational speed), but there is no attempt anywhere to show any runtime benefit on current hardware. Minor issues: Some captions are very terse and the figures would benefit from a clearer explanation (e.g. figure 6). [1] https://en.wikipedia.org/wiki/Bfloat16_floating-point_format
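To make the discussion of the two per-tensor statistics concrete, here is a toy numpy simulation of one plausible reading of the scheme (compute log-space statistics, re-center and re-scale the log-magnitudes, then lose precision); the exponent range, the exact choice of shift/squeeze, and the crude rounding are all my assumptions, not the paper's equation 1.

import numpy as np

def s2_stats(x, exp_range=15.0, eps=1e-30):
    # Choose per-tensor statistics so that log2-magnitudes of x fit into the
    # representable range of a small float format. exp_range is a stand-in for
    # the FP8 exponent range and is an assumption, not the paper's value.
    logmag = np.log2(np.abs(x) + eps)
    shift = logmag.mean()                                     # offset in log space
    squeeze = max(1.0, (logmag.max() - shift) / exp_range)    # scale in log space
    return shift, squeeze

def s2fp8_roundtrip(x, bits=8):
    # Quantize the shifted/squeezed log-magnitude on a coarse grid as a crude
    # stand-in for FP8 storage, then reconstruct. Only meant to illustrate that
    # two FP32 statistics re-center and re-scale the tensor's dynamic range.
    shift, squeeze = s2_stats(x)
    t = (np.log2(np.abs(x) + 1e-30) - shift) / squeeze
    t_q = np.round(t * (2 ** (bits - 4))) / (2 ** (bits - 4))  # crude precision loss
    return np.sign(x) * 2.0 ** (squeeze * t_q + shift)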
iclr_2020_rkecl1rtwB
The performance of graph neural nets (GNNs) is known to gradually decrease with an increasing number of layers. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing. Our main contribution is PAIRNORM, a novel normalization layer that is based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. What is more, PAIRNORM is fast, easy to implement without any change to the network architecture or any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PAIRNORM makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance for a new problem setting that benefits from deeper GNNs.
Summary It is known that GNNs are vulnerable to the oversmoothing problem, in which feature vectors on nodes get closer as we increase the number of (message-passing-type) graph convolution layers. This paper proposed PairNorm, which is a normalization layer for GNNs to tackle this problem. The idea is to pull apart feature vectors on a pair of non-adjacent nodes (based on the interpretation of Laplace-type smoothing by NT and Maehara (2019)). To achieve this approximately with low computational complexity, PairNorm keeps the sum of distances of feature vectors on all node pairs approximately the same throughout layers. The paper conducted empirical studies to evaluate the effectiveness of the method. PairNorm improved the prediction performance and enabled making GNNs deeper, especially when feature vectors are missing in a large portion of nodes (the SSNC-MV problem). Decision I recommend accepting the paper because, in my opinion, this paper contributes to deepening our understanding of graph NNs by giving new insights into what causes the oversmoothing problem and which types of problems (deep) graph NNs can solve. The common myth about graph NNs is that they cannot be made deep due to oversmoothing. Therefore, oversmoothing is one of the big problems in the graph NN field and has received attention from both theoretical and empirical sides. This paper found that the deep structures do help to improve (or at least not worsen) the predictive performance when a significant portion of nodes in a graph does not have input signals. To the best of our knowledge, this is the first paper that showed the effectiveness of deep structures in citation network datasets (Deep GCNs [Li et al., 2019] successfully improved the prediction performance of (residual) graph NNs using as many as 56 layers for point cloud datasets). The proposed method is theoretically grounded, easy to implement, and applicable to (theoretically) any graph NNs. Taking these things into account, I judge the contribution of this paper to be sufficiently significant to accept. Minor Comments - Table 3. Remove s in the entry for GAT-t2 Citeseer 0%. Questions - Can we interpret PairNorm (or the optimization problem (6)) from the viewpoint of graph spectra? - Although the motivation of Centering (10) is to ease the computation of TPD, I am curious how this operation contributes to performance. Since the constant signal does not have information for distinguishing nodes, eliminating it by Centering might result in emphasizing the signal component for node classification tasks. From a spectral point of view, Centering corresponds to eliminating the lowest frequency of a signal. - Figures 3 and 7 have shown that GCN and GAT did not perform well compared to SGC as the number of layers increases. The authors discussed that this is because GCN and GAT are easier to overfit. However, SGC chose the hyperparameter $s$ from $\{0.1,1,10,50,100\}$, whereas the authors examined a single $s$ for GCN and GAT. Therefore, I think there is another hypothesis: simply that the choice of $s$ was misspecified. If this is the case, I am interested in the effect of $s$ on predictive performance. [Li et al., 2018] Li, Qimai, Zhichao Han, and Xiao-Ming Wu. "Deeper insights into graph convolutional networks for semi-supervised learning." Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
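For readers unfamiliar with the layer, a minimal sketch of the normalization as summarized above (center the node features, then rescale so the total pairwise squared distance stays roughly constant); the scaling hyperparameter corresponds to the $s$ discussed in the questions, and the exact constant is my simplification.

import torch

def pair_norm(x, s=1.0, eps=1e-6):
    # x: (num_nodes, feature_dim) node embeddings after a graph convolution.
    # Step 1 (Centering): remove the constant component across nodes.
    x = x - x.mean(dim=0, keepdim=True)
    # Step 2: rescale so the mean squared row norm is s^2, which keeps the
    # total pairwise squared distance approximately constant across layers.
    scale = s / torch.sqrt(x.pow(2).sum(dim=1).mean() + eps)
    return scale * x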
iclr_2020_B1g8VkHFPH
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyper-parameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several common practices of setting hyper-parameters for fine-tuning. Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. (1) While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter. We find that the value of momentum also affects fine-tuning performance and connect it with previous theoretical findings. (2) Optimal hyper-parameters for fine-tuning, in particular the effective learning rate, are not only dataset-dependent but also sensitive to the similarity between the source domain and the target domain. This is in contrast to hyper-parameters for training from scratch. (3) Reference-based regularization that keeps models close to the initial model does not necessarily apply for "dissimilar" datasets. Our findings challenge common practices of fine-tuning and encourage deep learning practitioners to rethink the hyper-parameters for fine-tuning.
This submission studies the problem of transfer learning and fine-tuning. This submission proposes four insights: Momentum hyperparameters are essential for fine-tuning; When the hyperparameters satisfy certain relationships, the results of fine-tuning are optimal; The similarity between source and target datasets influences the optimal choice of the hyperparameters; Existing regularization methods for DNNs are not effective when the datasets are dissimilar. This submission provides multiple experiments to support these claims. Pros: + This submission provides interesting facts that are overlooked in previous research works. + This submission examines the previous theoretical results in an empirical setting and finds some optimal hyperparameter selection strategies. + This submission provides many experimental results on fine-tuning, along with its choice of hyperparameters, that could be taken as baselines in future research. Cons: - All experimental results are based on the same backbone, which makes all discoveries much less reliable. More experiments on other backbones are necessary. Furthermore, this submission claims that regularization methods such as L2-SP may not work on networks with a Batch Normalization module. But there is no comparison on networks without BN. - Providing a complete hyperparameter selection strategy for fine-tuning could be an important contribution of this submission. I suggest the authors think about it. - This submission claims that the choice of hyperparameters should depend on the similarity of the domains, but it does not propose a proper method for measuring the similarity or provide detailed experiments on previous measures. - It seems that the MITIndoors Dataset is not similar to ImageNet from a semantic point of view. This submission does not provide a similarity measurement between these datasets. Why is the optimal momentum 0? - The effective learning rate and 'effective' weight decay are not first introduced in this submission. This makes the novelty of this submission relatively weak. The authors only test these strategies in the fine-tuning setting and find that they also work with a different initialization. - It seems that merely searching for learning rate and weight decay hyperparameters (as Kornblith et al. (2018) did) with a fixed momentum is OK if there is a most effective relationship between learning rate and momentum. So are the discoveries in the first part, that a 0 momentum can be better, based on a careless search over learning rates? - This submission omits that Kornblith et al. (2018) also referred to the fact that the momentum parameter of BN is essential for fine-tuning and provided a strategy in section A.5. Discussion about this strategy would make this submission more complete. This submission gives important discoveries about the hyperparameter choice in the fine-tuning setting. But there are several flaws in this submission. I vote for rejecting this submission now but I expect the authors to improve it in a future version.
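As a small illustration of the "effective learning rate" notion raised above (a standard relation for SGD with heavy-ball momentum, not something specific to this submission): with learning rate lr and momentum m, the long-run step size scales like lr / (1 - m), so the following settings are roughly comparable.

settings = [(0.01, 0.9), (0.05, 0.5), (0.1, 0.0)]
for lr, m in settings:
    print(f"lr={lr:.3f}, momentum={m:.1f}, effective lr={lr / (1 - m):.3f}")
# all three print an effective learning rate of 0.100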
iclr_2020_rJgVwTVtvS
Gradient perturbation, widely used for differentially private optimization, injects noise at every iterative update to guarantee differential privacy. Previous work first determines the noise level that can satisfy the privacy requirement and then analyzes the utility of noisy gradient updates as in non-private case. In this paper, we explore how the privacy noise affects the optimization property. We show that for differentially private convex optimization, the utility guarantee of both DP-GD and DP-SGD is determined by an expected curvature rather than the minimum curvature. The expected curvature represents the average curvature over the optimization path, which is usually much larger than the minimum curvature and hence can help us achieve a significantly improved utility guarantee. By using the expected curvature, our theory justifies the advantage of gradient perturbation over other perturbation methods and closes the gap between theory and practice. Extensive experiments on real world datasets corroborate our theoretical findings.
This paper proposes a quantity called expected curvature to analyze the convergence of gradient perturbation based methods that achieve differential privacy. Compared to the minimum curvature, which was used in previous convergence analyses, expected curvature better captures the properties of the optimization problem, and thus offers an explanation for the advantage of gradient perturbation based methods over objective perturbation and output perturbation. Using expected curvature is a pretty interesting idea, and having a more refined convergence bound is useful. I have the following questions. 1. It seems to me that the convergence bound is similar to the previous bound, except that \mu is replaced by \nu and one log(n) disappears. Could you explain more intuitively how hard the new analysis is? Is it similar to just replacing any \mu by \nu in the previous analysis? And why do they differ by log(n)? 2. How do the experiments (in terms of setup and results) differ from those in Iyengar et al.? It seems to me the paper is proposing a method for convergence analysis and the DP algorithms remain the same. I feel like in Iyengar et al., there is no clear difference between gradient perturbation and objective perturbation. Maybe I was wrong about that, but could you elaborate more? 3. I agree that the expected curvature better captures the convergence of gradient-based DP methods. Yet I don't see clearly how this can be used to show that they have more advantages than objective perturbation. Is it possible that the analyses of objective perturbation can also be improved (maybe with other techniques)? Since all we have are upper bounds, I feel like it is a bit early to conclude that it is less powerful. You mentioned that "That is because DP makes the worst-case assumption on query function and output/objective perturbation treat the whole learning algorithm as a single query to private dataset." I didn't follow this part. I feel like DP is always making a worst-case assumption; even for gradient perturbation, you need to add noise to protect the worst case. Could you elaborate more on that? 4. My understanding is that for different datasets the curvatures would be different. I think it might be interesting if you plot something similar to Figure 1 for other datasets and compare how they match with the training curve. Do you expect them to look very different on different datasets / optimization problems?
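For context on what "gradient perturbation" refers to in this discussion, here is a standard DP-SGD-style sketch (per-example clipping plus Gaussian noise); it is the generic recipe, not the specific algorithm or noise calibration analyzed in this paper.

import torch

def dp_perturbed_grad(per_example_grads, clip=1.0, sigma=1.0):
    # per_example_grads: (batch_size, num_params). Clip each per-example
    # gradient to L2 norm `clip`, average, then add Gaussian noise whose scale
    # is proportional to the clipping bound. Choosing sigma to meet a target
    # (epsilon, delta) is omitted here.
    n = per_example_grads.shape[0]
    norms = per_example_grads.norm(dim=1, keepdim=True).clamp(min=1e-12)
    clipped = per_example_grads * (clip / norms).clamp(max=1.0)
    mean_grad = clipped.mean(dim=0)
    noise = torch.randn_like(mean_grad) * (sigma * clip / n)
    return mean_grad + noise

# one noisy update: theta = theta - lr * dp_perturbed_grad(G)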
iclr_2020_Hkxzx0NtDB
We propose to reinterpret a standard discriminative classifier of p(y|x) as an energy based model for the joint distribution p(x, y). In this setting, the standard class probabilities can be easily computed as well as unnormalized values of p(x) and p(x|y). Within this framework, standard discriminative architectures may be used and the model can also be trained on unlabeled data. We demonstrate that energy based training of the joint distribution improves calibration, robustness, and out-of-distribution detection while also enabling our models to generate samples rivaling the quality of recent GAN approaches. We improve upon recently proposed techniques for scaling up the training of energy based models and present an approach which adds little overhead compared to standard classification training. Our approach is the first to achieve performance rivaling the state-of-the-art in both generative and discriminative learning within one hybrid model.
This work is an attempt to bridge the gap between discriminative models, which currently obtain the state of the art on most classification problems, and generative models, which (through a model of the marginal p(x)) have the potential to shine on many tasks beyond generalization to a hold-out set with minimal shift in distributions: out-of-distribution detection, better generalization out of distribution, unsupervised learning, etc. While much of the current work is related to normalizing flows / invertible neural networks, the authors here propose a quite simple but appealing method: a standard neural classifier is taken, and the softmax layer is chopped off and replaced by an energy-based model, which models the joint probability p(x,y) instead of the posterior p(y|x). The advantage is an additional degree of freedom in the scale of the logit vector, which would otherwise have been normalized by the softmax layer and can now model the data distribution. The downside is the loss in ease of training. Whereas (discriminative) deep networks can be easily trained by gradient descent on a cross-entropy objective, the partition function in the energy model makes this intractable. This is addressed through sampling, similar to (Welling & Teh, 2011). One of the biggest achievements reported by the authors is that the performance on discriminative tasks is not hurt (much) by adding the generative model. There is only a 3 point gap between Wide-ResNet and the proposed model (92.9% vs. 95.8%) ... but on what dataset? 3 datasets are mentioned in the experimental section, but table 1 does not mention on which datasets the accuracy is reported. My guess is that this is a mean or mixture, since JEM performances of 96.7% and 72.2% are reported for SVHN and CIFAR10, respectively, but this should be made clearer. On out-of-distribution detection, could the authors comment on the histograms in table 2, in particular the difference between the new measure (AM JEM) compared to JEM log p(x) on CelebA? The proposed measure does not seem to fare well here. Although the method does not outperform the gold standard of adversarial training, I found the model's robustness to adversarial examples quite appealing, given that it was not trained for this objective (which also means that it does not require an adaptation to a norm). I was very impressed by Figure 6 showing distal adversarial examples initialized from random images, showing pretty clear images of the modelled class. The modelled variations require more investigation to verify whether we have a collapse for each class, but the results look very promising. The paper is well written and easy to understand. A couple of details on the training procedure are missing in the experimental part. It is stated that both p(y|x) and the generative part p(x) are optimized, but how exactly are these integrated? Given the difficulty of training this model reported in the paper, this seems to be particularly important. I also appreciated the description of the limitations of the algorithm, and the details in the appendix (ICLR should go back to unlimited paper lengths, btw.). More information on complexity (training times etc.) would also be helpful.
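For readers who want the reinterpretation spelled out, the mapping from classifier logits to the joint energy model is essentially a one-liner; this follows directly from the abstract's description (SGLD sampling and the exact training loss are not shown here).

import torch

def logits_to_energies(logits):
    # logits: (batch, num_classes) outputs of a standard classifier f(x).
    # p(y|x) is the usual softmax, unchanged.
    log_p_y_given_x = torch.log_softmax(logits, dim=1)
    # logsumexp over classes gives log p(x) up to the (unknown) normalizer,
    # so E(x) = -logsumexp_y f(x)[y] defines the energy-based model of p(x).
    energy = -torch.logsumexp(logits, dim=1)
    return log_p_y_given_x, energy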
iclr_2020_SJl28R4YPr
It is valuable yet remains challenging to apply neural networks in logical reasoning tasks. Despite some successes witnessed in learning SAT (Boolean Satisfiability) solvers for propositional logic via Graph Neural Networks (GNN), there haven't been any successes in learning solvers for more complex predicate logic. In this paper, we target the QBF (Quantified Boolean Formula) satisfiability problem, the complexity of which is in-between propositional logic and predicate logic, and investigate the feasibility of learning GNN-based solvers and GNN-based heuristics for the cases with a universal-existential quantifier alternation (so-called 2QBF problems). We conjecture, with empirical support, that GNNs have certain limitations in learning 2QBF solvers, primarily due to the inability to reason about a set of assignments. Then we show the potential of GNN-based heuristics in CEGAR-based solvers, and explore the interesting challenges to generalize them to larger problem instances. In summary, this paper provides a comprehensive surveying view of applying GNN-based embeddings to 2QBF problems, and aims to offer insights in applying machine learning tools to more complicated symbolic reasoning problems.
This paper explores how graph neural networks can be applied to test satisfiability of 2QBF logical formulas. They show that a straightforward extension of a GNN-based SAT solver to 2QBF fails to outperform random chance, and argue that this is because proving either satisfiability or unsatisfiability of 2QBF requires reasoning over exponentially large sets of assignments. Instead, they show that GNNs can be useful as a heuristic candidate- or counterexample-ranking model which improves the efficiency of the CEGAR algorithm for solving 2QBF. This is a clear, well-written, and well-structured paper, and I support accepting it to ICLR. That being said, I am not as familiar with the literature on neural solvers for logic problems, so I base my review on the content within the paper more than its context in the field. I can't find much to fault with the writing and arguments. The GNN architecture for 2QBF (Section 2) is simple, elegant, and well-motivated as a minimal extension of successful SAT solvers. The arguments in Section 3 are convincing, and make a good case for why an algorithm such as CEGAR is necessary. Finally, the metrics in Section 4 are clearly interpretable and well-justified. A couple of questions and concerns: In Section 3, the amount of training data (up to 160 pairs of formulas for predicting satisfiability) seems to be very small for a machine learning problem. By comparison, Selsam et al. 2019 says they train their GNN SAT solver on "millions of problems" (Section 5). Is there a good reason for using a much smaller dataset, given that 2QBF is a harder class of problem? Section 4.2: how are the TrainU, TrainS, TestU, and TestS datasets generated? In Section 4.6, are the models re-trained on these new distributions, or on the data described in Section 4.2? (If the latter, how does the GNN perform if re-trained on the larger-spec data?) And minor points on clarity: * "-" for the baseline seems a bit awkward; consider spelling out "vanilla"? * Are all the numbers in the tables iteration counts, unless specified otherwise? It would help to restate this in the captions. Similarly, I wonder if there could be more informative names for GNN1, GNN2, GNN3, and GNN4?
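For readers unfamiliar with CEGAR for 2QBF, here is a brute-force toy sketch of the loop that the learned ranking heuristic plugs into; the enumeration stands in for SAT-solver calls, and the `rank` hook (where a GNN scorer would go) is my assumption about how the heuristic is used.

from itertools import product

def cegar_2qbf(phi, n_x, n_y, rank=None):
    # Sketch of CEGAR for  exists X . forall Y . phi(X, Y).
    # phi: Python predicate on two boolean tuples; rank: assumed scoring hook
    # (e.g., a learned model) that orders candidate X-assignments. Illustrative only.
    banned = set()                                   # refuted X-candidates
    all_x = list(product([False, True], repeat=n_x))
    all_y = list(product([False, True], repeat=n_y))
    while True:
        candidates = [x for x in all_x if x not in banned]
        if not candidates:
            return "UNSAT"
        x = max(candidates, key=rank) if rank else candidates[0]
        counterexample = next((y for y in all_y if not phi(x, y)), None)
        if counterexample is None:
            return ("SAT", x)                        # x works for every Y
        # refinement: ban every candidate killed by this counterexample
        banned.update(xc for xc in candidates if not phi(xc, counterexample))

# example: exists (a,b) forall (c,) : (a or c) and (b or not c)
print(cegar_2qbf(lambda x, y: (x[0] or y[0]) and (x[1] or not y[0]), 2, 1))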
iclr_2020_rkguLC4tPB
An important property of image classification systems in the real world is that they both accurately classify objects from target classes ("knowns") and safely reject unknown objects ("unknowns") that belong to classes not present in the training data. Unfortunately, although the strong generalization ability of existing CNNs ensures their accuracy when classifying known objects, it also causes them to often assign an unknown to a target class with high confidence. As a result, simply using low-confidence detections as a way to detect unknowns does not work well. In this work, we propose an Unknown-aware Deep Neural Network (UDN for short) to solve this challenging problem. The key idea of UDN is to enhance existing CNNs to support a product operation that models the product relationship among the features produced by convolutional layers. This way, missing a single key feature of a target class will greatly reduce the probability of assigning an object to this class. UDN uses a learned ensemble of these product operations, which allows it to balance the contradictory requirements of accurately classifying known objects and correctly rejecting unknowns. To further improve the performance of UDN at detecting unknowns, we propose an information-theoretic regularization strategy that incorporates the objective of rejecting unknowns into the learning process of UDN. We experiment on benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN, adding unknowns by injecting one dataset into another. Our results demonstrate that UDN consistently outperforms state-of-the-art methods at rejecting unknowns - up to 20 point gains in accuracy - while still preserving the classification accuracy.
This paper is about a novel method to detect unknown samples that are of a different class than the trained ones. The idea is to use an output subnet consisting of a fully connected layer and a binary tree that encodes a product relationship instead of the sum currently used in state-of-the-art methods (particularly the softmax with low confidence). The binary tree is made of split nodes that are responsible for producing a probability distribution from the root to each leaf. The max path, i.e., the path with the largest probabilities, determines the class of the input and can be used to measure how confident the classifier is about the classification decision. By combining multiple subnets, the resulting model is able to make complex predictions and maintain good generalization performance. The method also uses an information-theoretic regularization which decreases the probability of having subnets with a uniform probability distribution, i.e., a large entropy. Experiments on CIFAR-10 and MNIST against CIFAR-100 and SVHN show that the method has improved rejection accuracy while maintaining good classification accuracy on the test set. The topic of the paper is interesting and the approach seems to be solid; however, the experiments are not so convincing. They are limited to two very easy datasets and do not show whether the method is able to scale when more difficult and more realistic numbers of classes are considered (like in CIFAR-100, SVHN, or maybe ImageNet). Given that simple datasets like these require 9 hours of training, it is also not clear how well the method scales computationally and whether it is realistically applicable. Moreover, the presentation could be improved, as Figure 1 and its Section 2 are complex and not easy to follow at several points. Hence, I'm leaning towards rejecting this paper. In particular: - It would be interesting to see experiments where the number of classes is higher than in MNIST and CIFAR-10. CIFAR-100 and SVHN would be a good testbed for such a case. - What is the complexity of the method and how does it scale with the number of training classes? - It is stated that the method brings a 25-percentage-point improvement in the accuracy of unknown rejection detection; however, Table 1 shows a large improvement only in the case of SVHN. Hence the claims seem a bit off compared to the measured data. Moreover, using only CIFAR-10 is insufficient to back up the claim. - Why is the related work section at the end of the paper? It confuses the reader and would be more useful placed after the introduction.
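To make the product-over-a-path idea concrete, here is a small sketch of how path probabilities in a perfect binary tree of split nodes could be computed; the level-by-level node layout and the use of sigmoid split probabilities are my assumptions for illustration, not the paper's exact architecture.

import torch

def leaf_path_scores(split_probs):
    # split_probs: (batch, num_split_nodes) sigmoid outputs, nodes stored level
    # by level. Returns (batch, 2**depth) path probabilities, so a single low
    # split probability along a path strongly suppresses that leaf/class.
    batch, n_splits = split_probs.shape
    depth = (n_splits + 1).bit_length() - 1          # n_splits = 2**depth - 1
    scores = torch.ones(batch, 1)
    node = torch.zeros(1, dtype=torch.long)          # current frontier node ids
    for _ in range(depth):
        p = split_probs[:, node]                     # prob of taking the left branch
        scores = torch.cat([scores * p, scores * (1 - p)], dim=1)
        node = torch.cat([2 * node + 1, 2 * node + 2])
    return scores                                    # rows sum to 1 over the leaves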
iclr_2020_B1gX8kBtPr
Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks. However, while promising, state-of-the-art results with certified training are far from satisfactory. Currently, it is very difficult to train a neural network that is both accurate and certifiably robust. In this work we take a step towards addressing this challenge. We prove that for every continuous function f , there exists a network n such that: (i) n approximates f arbitrarily close, and (ii) simple interval bound propagation of a region B through n yields a result that is arbitrarily close to the optimal output of f on B. Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.
The paper aims to show that there exist neural networks that can be certified by interval bound propagation. It claims to do this by showing that for any function f, there is a neural network g (close to f) such that interval bound propagation on g obtains bounds that are almost as good as one would get by applying interval bound propagation to f. There are a couple issues with this paper. The first is that the main theorem does not imply the claimed high-level take-away in the paper in any practically relevant regime. This is because while the paper does show that there is a network that approximates f well enough, the size of the network in the construction is exponential (for instance, one of the neural network components in the construction involves summing over all hyperrectangles lying within a grid). The result is only plausibly practically relevant if there is a polynomially-sized network for which the bound propagation works. The second issue is the comparison to related work, which omits both several key techniques in the literature and two related papers on universal approximation. On universal approximation papers, see these two: https://arxiv.org/abs/1904.04861 and https://arxiv.org/abs/1811.05381. They consider a different proof strategy but more plausibly yield networks of reasonable size. On key techniques in the literature: "typically by employing methods based on mixed integerlinear programming, SMT solvers and bound propagation" ignores several major techniques in the field: convex relaxations (SDP + LP), and randomized smoothing. Similarly, "specific training methods have beenrecently developed which aim to produce networks that are certifiably robust" again ignores entire families of techniques; Raghunathan et al. train SDP relaxations to perform well, while Cohen et al. train randomized smoothing to work well. Importantly, randomized smoothing is not about creating an over-approximation of the network but explicitly constructs networks that are smooth. "some of the best results achieved on the popular MNIST (LeCun et al., 1998) and CIFAR10(Krizhevsky, 2009) datasets have been obtained with the simple Interval approximation (Gowalet al., 2018; Mirman et al., 2019)" -Is this actually true? I just looked at Mirman et al. and it doesn't seem to explicitly compare to any existing bounds. My impression is that randomized smoothing (Cohen et al.) currently gets the best numbers, if we're allowed to train the network. If not allowed to train the network, Raghunathan et al. (Neurips 2018) performs well and is not compared to in the Mirman paper. These omissions must be addressed as the introduction and related work is misleading in its current state. Finally, some writing issues: >>> While the evidence suggests "no", we prove that for realisticdatasets and specifications, such a network does exist and its certification can beestablished by propagating lower and upper bounds of each neuron through thenetwork One cannot "prove" something for "realistic datasets", since "realistic datasets" is not a formal assumption. Please fix this. >>> "the most relaxed yet computationally efficient convex relaxation" What does most relaxed mean? Most relaxed would most straightforwardly mean outputting the entire space as the set of possibilities at each point, which is clearly not intended.
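For context on what "simple interval bound propagation" computes, a minimal sketch of plain IBP through a fully connected ReLU network is below; this is the standard procedure, not the specific network construction used in the paper's proof.

import torch

def ibp_bounds(l, u, weights, biases):
    # l, u: (batch, in_dim) elementwise lower/upper bounds on the input region.
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
        new_l = l @ W_pos.T + u @ W_neg.T + b   # worst case for the lower bound
        new_u = u @ W_pos.T + l @ W_neg.T + b   # worst case for the upper bound
        l, u = new_l, new_u
        if i < len(weights) - 1:                # ReLU (monotone) on hidden layers
            l, u = l.clamp(min=0), u.clamp(min=0)
    return l, u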
iclr_2020_Hkl4EANFDH
Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective. However in practical optimization scenarios with noisy data and/or gradients, it is possible that stochastic gradient descent can inadvertently change critical parameters. In this paper, we argue for the importance of regularizing optimization trajectories directly. We derive a new co-natural gradient update rule for continual learning whereby the new task gradients are preconditioned with the empirical Fisher information of previously learnt tasks. We show that using the conatural gradient systematically reduces forgetting in continual learning. Moreover, it helps combat overfitting when learning a new task in a low-resource scenario.
This paper amends the gradient update rule for continual learning using a natural-gradient-style formulation in order to regularise the trajectory during learning to forget previous task(s) less. They show experiments where this 'co-natural gradient' update rule improves some baselines. They also provide experiments showing the benefits of this update rule for low-resource fine-tuning settings. Although the idea seems reasonable and interesting, I feel like this paper needs work in both the theory and the experiments. Figure 1 is a nice visualisation of the key take-away point of the paper. Theory: the authors take the natural-gradient updates from batch learning and just modify them so that the KL term is now for the previous task(s) instead of the current one. Although this may seem reasonable, I would appreciate some analysis as to what this implies or means. Experiments: - Split CIFAR from Chaudhry et al. (2018b) uses 10 tasks; why does this paper use 20 tasks? - Previous works usually find that for EWC, large values of the \lambda hyperparameter provide the best results. This corresponds to lower forgetting of previous tasks. The hyperparameter range in Appendix A.2.3 is only over small values of \lambda (by orders of magnitude). - Why do the authors only allow 1 epoch per task for Split CIFAR? This probably results in early stopping: the new tasks are not able to reach their new optimal points (with or without regularised trajectories). This seems to go against the intuition provided by Figure 1, where the authors are showing that changing the trajectory results in a better local minimum being found. In fact, by adding another regularisation term, it is unsurprising that co-natural gradient updates have less forgetting, as the extra regularisation term probably means the trained parameters are even closer to the previous parameters. ------------------- EDIT: Score changed to 'Weak Accept' following discussion with the authors.
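For concreteness, my reading of the proposed update (with a diagonal Fisher approximation and a damping term, both of which are my assumptions) is roughly the following: directions that carried high Fisher information for earlier tasks receive smaller steps.

import torch

def conatural_step(params, grads, old_task_fisher, lr=0.1, damping=1e-3):
    # old_task_fisher: per-parameter running average of squared gradients
    # accumulated on the previous task(s), i.e. a diagonal empirical Fisher.
    with torch.no_grad():
        for p, g, f in zip(params, grads, old_task_fisher):
            p -= lr * g / (f + damping)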
iclr_2020_HkxBJT4YvB
We consider the challenge of estimating treatment effects from observational data, and point out that, in general, only some factors based on the observed covariates X contribute to selection of the treatment T, and only some to determining the outcomes Y. We model this by considering three underlying sources of {X, T, Y} and show that explicitly modeling these sources offers great insight to guide designing models that better handle selection bias. This paper is an attempt to conceptualize this line of thought and provide a path to explore it further. In this work, we propose an algorithm to (1) identify disentangled representations of the above-mentioned underlying factors from any given observational dataset D and (2) leverage this knowledge to reduce, as well as account for, the negative impact of selection bias on estimating the treatment effects from D. Our empirical results show that the proposed method (i) achieves state-of-the-art performance in both individual and population-based evaluation measures and (ii) is highly robust under various data-generating scenarios.
Summary: The authors consider the problem of estimating average treatment effects when observed covariates X and treatment T cause Y. Observational data for X, T, Y are available and strong ignorability is assumed. Previous work (Shalit et al. 2017) introduced learning a representation that is invariant in distribution across treatment and control groups and using that with the treatment to estimate Y. However, the authors point out that forcing this representation to be invariant still does not drive the selection bias to zero. A follow-up work (Hassanpour and Greiner 2019) corrects for this by using additional importance weighting that estimates the treatment selection bias given the learnt representation. However, the authors point out that even this is not complete in general, as X could be determined by three latent factors: one that is the actual confounder between treatment and outcome, one that affects only the outcome, and one that affects only the treatment. Therefore, the authors propose to have three representations, enforce independence between the representation that solely determines the outcome and the treatment, and make the other appropriate terms depend on the respective latent factors. This gives a modified objective with respect to these two prior works. The authors implement and optimize this joint system on synthetic and real-world datasets. They show that they outperform all these previous works by explicitly accounting for the confounder and for the latent factors that solely control the outcome and the treatment assignment, respectively. Pros: This paper directly addresses the problems left open by Shalit et al. (2017). The experimental results seem convincing on standard benchmarks. I vote for accepting the paper. I don't have many concerns about this paper. Cons: - I have one question for the authors: if independence of T and (Y(0), Y(1)) given X is assumed, how can we be sure that the composite representations (of the three latent factors) will provably satisfy ignorability? I guess this cannot be formally established. It would be great for the authors to comment on this.
iclr_2020_HJeEP04KDH
Recent work has shown that quantization can help reduce the memory, compute, and energy demands of deep neural networks without significantly harming their quality. However, whether these prior techniques, applied traditionally to imagebased models, work with the same efficacy to the sequential decision making process in reinforcement learning remains an unanswered question. To address this void, we conduct the first comprehensive empirical study that quantifies the effects of quantization on various deep reinforcement learning policies with the intent to reduce their computational resource demands. We apply techniques such as post-training quantization and quantization aware training to a spectrum of reinforcement learning tasks (such as Pong, Breakout, BeamRider and more) and training algorithms (such as PPO, A2C, DDPG, and DQN). Across this spectrum of tasks and learning algorithms, we show that policies can be quantized to 6-8 bits of precision without loss of accuracy. We also show that certain tasks and reinforcement learning algorithms yield policies that are more difficult to quantize due to their effect of widening the models' distribution of weights and that quantization aware training consistently improves results over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that quantization aware training, like traditional regularizers, regularize models by increasing exploration during the training process. Finally, we demonstrate usefulness of quantization for reinforcement learning. We use half-precision training to train a Pong model 50% faster, and we deploy a quantized reinforcement learning based navigation policy to an embedded system, achieving an 18× speedup and a 4× reduction in memory usage over an unquantized policy.
This paper investigates the impact of using a reduced precision (i.e., quantization) in different deep reinforcement learning (DRL) algorithms. It shows that overall, reducing the precision of the neural network in DRL algorithms from 32 bits to 16 or 8 bits doesn't have much effect on the quality of the learned policy. It also shows how this quantization leads to a reduced memory cost and faster training and inference times. I don't think this paper contributes many novel results to the field, with most results being known or expected. The result that is interesting, in my opinion, is not properly explored. The paper is well-written but it is a bit repetitive. It seems to me that the first 3 pages could be compressed into one, as the same information is introduced over and over again. With respect to the results being known, quantization is known to succeed in supervised learning tasks. When you apply post-training quantization in a deep reinforcement learning algorithm, especially when that algorithm uses a value function (e.g., A2C or DQN), the problem is reduced to a regression problem. It is no different than a supervised learning problem. One has the original network's prediction and needs to match that prediction. The complexities introduced in the reinforcement learning problem (bootstrapping, exploration, stability) don't exist anymore as they arise during training. Thus, it doesn't seem to me that these results are novel or surprising. In a sense it is neat to see that eventual errors do not compound, but that's it. If I were to write this paper I would make this set of experiments much shorter just as a sanity check. One thing that I feel is missing is a notion of the impact of the quantization not in the rewards accumulated but in the policy/value function. How often does the quantized agent take a different action than the original agent, for example? Does it happen often but only when it doesn't matter, or is it rare? The quantization during training is potentially interesting. It was not properly explored though. I wonder if the quantization during training has a regularization effect, which is known to improve agents' performance in reinforcement learning (e.g., Cobbe et al., 2018, Farebrother et al., 2018). Does the agent generalize better when using a network with fewer bits of precision? How does this change impact training? These are all questions that could potentially make the results in this paper novel (i.e., quantization as a form of regularization), but as it is now, the results are not that surprising. Importantly, there are details missing in the paper that make it hard for me to evaluate the validity of the results presented. Are the results reported over multiple runs? What is the version of the Atari games used, is it the one with stochasticity? How much variance do we have if we replicate this process over different networks that perform well? These are questions I would like to see answered because they also inform us about the impact of the proposed idea. For example, if by repeating this experiment multiple times one observes a high variance, it might mean that different models are impacted in different ways. The results in the "real world" (Pong is not real-world) are not that surprising either. Basically they show that if one uses a network with lower precision, training and inference are faster, which, again, is not surprising.
There’s also an important distinction in the results that is not discussed in the paper: DQN estimates a value function while methods such as PPO directly estimate a policy. The reason DQN might have a wider distribution is exactly because it is estimating a different objective. These are important details that should be acknowledged and discussed in the paper. In my opinion, for this paper to be relevant, it should have a very thorough evaluation of these different dimensions of reinforcement learning algorithms, with explicit discussion of each: variance, the impact of quantization during learning, the distinction between parametrizing policies versus value functions, etc. Finally, there are some aspects of the presentation of this paper that could also be improved. Aside from typos, below are some other comments on the presentation. - There’s no such thing as the Atari environment; it is either the Arcade Learning Environment (Bellemare et al., 2013) or Atari games. - I’d introduce/explain quantization in the beginning of the second paragraph of the Introduction for those not familiar with the term. - No references are provided for the environments used. You should refer to Bellemare et al.’s (2013) work as well as Brockman et al.’s (2016). - Is it really necessary to explain Fp16 quantization as it is done now, with even a picture of two bytes? I’d expect most readers are familiar with how numbers are represented in a computer. - The equation for Uniform Affine Quantization is pretty much the same as the one in the Section Quantization Aware Training. All these “repetitions”, or discussions of common knowledge, give the impression that the paper is trying to fill all the pages without necessarily having enough content. - The references are not standardized (e.g., sometimes names are shortened, sometimes they are not) and the paper “Efficient inference engine on compressed deep neural network” is cited twice. References: Marc G. Bellemare, Yavar Naddaf, Joel Veness, Michael Bowling: The Arcade Learning Environment: An Evaluation Platform for General Agents. J. Artif. Intell. Res. 47: 253-279 (2013) Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba: OpenAI Gym. CoRR abs/1606.01540 (2016) Karl Cobbe, Oleg Klimov, Christopher Hesse, Taehoon Kim, John Schulman: Quantifying Generalization in Reinforcement Learning. CoRR abs/1812.02341 (2018) Jesse Farebrother, Marlos C. Machado, Michael Bowling: Generalization and Regularization in DQN. CoRR abs/1810.00123 (2018) ------ >>> Update after rebuttal: I stand by my score after the rebuttal. The rebuttal did acknowledge some points I made; to me, the paper took a gradient update towards the right direction. I don't think the paper is quite there yet though. It is repetitive, spending too much time on basic concepts, and it still ignores small details that matter (e.g., calling it Atari Arcade Learning). I strongly recommend that the authors follow my recommendations closely and then resubmit the paper to a future conference. The discussion about generalization is potentially interesting, going beyond the regularization-for-exploration aspect. A better discussion about quantization during learning is also essential. The first three pages could probably be compressed by half.
iclr_2020_S1g7tpEYDS
Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models. However, learning a VAE from data poses still unanswered theoretical questions and considerable practical challenges. In this work, we propose an alternative framework for generative modeling that is simpler, easier to train, and deterministic, yet has many of the advantages of the VAE. We observe that sampling a stochastic encoder in a Gaussian VAE can be interpreted as simply injecting noise into the input of a deterministic decoder. We investigate how substituting this kind of stochasticity, with other explicit and implicit regularization schemes, can lead to an equally smooth and meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism to sample new data points, we introduce an ex-post density estimation step that can be readily applied to the proposed framework as well as existing VAEs, improving their sample quality. We show, in a rigorous empirical study, that the proposed regularized deterministic autoencoders are able to generate samples that are comparable to, or better than, those of VAEs and more powerful alternatives when applied to images as well as to structured data such as molecules.
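For concreteness, a minimal sketch of the kind of objective described above, a deterministic autoencoder with an explicit penalty on the latent code plus a decoder regularizer, written with PyTorch (illustrative only; the architecture, the L2 choice of decoder regularization, and the coefficients are assumptions, not the authors' code):

import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 32))
dec = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 784))
beta, lam = 1e-3, 1e-4  # weights of the latent penalty and of the decoder regularizer

def rae_loss(x):
    z = enc(x)                               # deterministic encoder: no noise injection
    x_hat = dec(z)
    rec = ((x - x_hat) ** 2).sum(dim=1).mean()
    l_z = (z ** 2).sum(dim=1).mean()         # explicit regularization of the latent code
    l_reg = sum((p ** 2).sum() for p in dec.parameters())  # e.g. an L2 (weight-decay style) decoder regularizer
    return rec + beta * l_z + lam * l_reg

x = torch.rand(16, 784)
rae_loss(x).backward()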
This paper proposes an extension to deterministic autoencoders. Motivated by VAEs, the authors propose RAEs, which replace the noise injection in the encoders of VAEs with an explicit regularization term on the latent representations. As a result, the model becomes a deterministic autoencoder with an L_2 regularization on the latent representation z. To make the model generalize well, the authors also add a decoder regularization term L_REG. In addition, because the encoder in RAE is deterministic, the authors propose several ex-post density estimation techniques for generating samples. The idea of moving from variational to deterministic autoencoders is interesting. Also, this paper is well-written and easy to understand. However, in my opinion, this paper needs to consider more cases for autoencoders and needs a more rigorous empirical and theoretical study before it can be accepted. Details are as follows: 1. The RAEs are motivated by VAEs, or actually CV-VAEs as in this paper. More precisely, the authors focus on VAEs with a constant-covariance Gaussian distribution as the variational distribution and a Gaussian distribution with the identity matrix as the covariance matrix as the model likelihood. However, there might be many other settings for VAEs. For example, the model likelihood can be a Gaussian distribution with non-constant covariance, or even some other distributions (e.g. Multinomial, Bernoulli, etc). Similarly, the variational distribution can be a Gaussian distribution with non-constant covariance, or even some more complicated distributions that do not follow the mean-field assumption. Any of these more complex models may not be easily transferred to the RAE models that are mentioned in this paper. Perhaps it is better if the authors can consider RAEs for some more general VAE settings. 2. Perhaps the authors need more empirical study, especially on the gain of RAE over CV-VAE and AE. a) The motivating model (CV-VAE) and the most closely related model in terms of the objective (AE) do not appear in the structured input experiment (Section 6.2). It would be great if they could be compared in this experiment. b) The authors did not show us clearly whether the performance gain of RAE over VAE, AE and CV-VAE is due to the regularization on z (the term L_z^RAE) or the decoder regularization (the term L_REG) in the experiments. In Table 1, the authors only compare the standard RAE with RAE without decoder regularization, but did not compare with RAE without the regularization on z (i.e. equivalent to AE + decoder regularization) and CV-VAE + decoder regularization. Since the authors would like to show that the explicit regularization on z is better than injecting noise, the decoder regularization term should also appear in the baseline methods. It is entirely possible that AE + decoder regularization or CV-VAE + decoder regularization performs better than RAE. c) The authors did not show how they tune the parameter \sigma for CV-VAE. Since the parameter \beta in the objective of RAE is tunable, for a fair comparison, the authors need to find the best \sigma for CV-VAE in order to draw the conclusion that explicit regularization is better than CV-VAE. d) Although the authors mention that the 3 regularization techniques perform similarly, from Table 1, it is still hard to decide which one we should use in practice in order to get performance that is at least not much worse than the baseline methods.
RAE-GP and RAE-L2 do not perform well on CelebA, while RAE-SN does not perform well on MNIST, compared to the baseline methods. We know that the best performance over the 3 methods is always comparable to or better than the baselines, but no single method achieves this on its own. It would be better if the authors could provide more guidance on the choice of decoder regularization for different datasets. 3. The authors provided a theoretical derivation for the objective L_RAE (Equation 11), but this is only for the L_GP regularization. Besides, this derivation (in Appendix B) has multiple technical issues. For example, in the constraints in Equation 12, the authors wrote ||D_\theta(z_1) - D_\theta(z_2)|| < \epsilon for all z_1, z_2 ~ q_\phi(z | x); this is impossible for CV-VAE since this constraint requires D_\theta() to be bounded while q_\phi(z | x) in CV-VAE has an unbounded domain. Moreover, in the part ||D_\theta(z_1) - D_\theta(z_2)||_p = \nabla D_\theta(\tilde z) \cdot ||z_1 - z_2||_p of Equation 13, \nabla D_\theta(\tilde z) is a vector while the other two terms are scalars, which does not make sense. There are many other issues as well. Please go through the proof again and resolve these issues. Questions and additional feedback: 1. Can the authors provide more intuition on why the explicit regularization works better than noise injection? Can you provide a theoretical analysis of that? 2. Can the authors provide the additional experiments mentioned above? Also, can the authors provide more details about how they tune the parameters \beta and \lambda? ======================================================================================================== After the rebuttal: Thanks to the authors for the detailed response and the additional experiments. I agree that the additional experiment results help to support the claims from the authors, especially the CV-VAE results for the structured data experiments and the AE + L2 experiment. So I think the authors now have more evidence to support that RAE performs better than the baseline methods. Therefore, I agree that after the revision, the proposed method RAE is better supported empirically. So I am changing my score from "weak reject" to "weak accept". But I still think the baseline CV-VAE + regularization is important for Table 1, and the technical issues in the theoretical analysis need to be resolved. I hope the authors can address them in a later version.
iclr_2020_HyeuP2EtDB
Humans can learn task-agnostic priors from interactive experience and utilize the priors for novel tasks without any finetuning. In this paper, we propose Scoring-Aggregating-Planning (SAP), a framework that can learn task-agnostic semantics and dynamics priors from arbitrary-quality interactions with sparse reward and then plan on unseen tasks in a zero-shot condition. The framework finds a neural score function for local regional state and action pairs that can be aggregated to approximate the quality of a full trajectory; moreover, a dynamics model that is learned with self-supervision can be incorporated for planning. Many previous works that leverage interactive data for policy learning either need massive on-policy environmental interactions or assume access to expert data, while we can achieve a similar goal with purely off-policy, imperfect data. Instantiating our framework results in a policy that generalizes to unseen tasks. Experiments demonstrate that the proposed method can outperform baseline methods on a wide range of applications including gridworld, robotics tasks and video games.
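For concreteness, the planning step described above can be read as model-predictive control with a learned dynamics model and a learned score function; below is a rough random-shooting sketch (the horizon, candidate count, and function signatures are assumptions, not the authors' implementation):

import numpy as np

def plan_first_action(state, dynamics, score, action_space, horizon=10, n_candidates=256, rng=None):
    # Sample candidate action sequences, roll each out with the learned dynamics
    # model, aggregate the learned per-step scores (a plain sum here), and return
    # the first action of the best-scoring sequence (MPC-style execution).
    rng = rng or np.random.default_rng()
    best_value, best_first = -np.inf, None
    for _ in range(n_candidates):
        actions = rng.choice(action_space, size=horizon)
        s, total = state, 0.0
        for a in actions:
            total += score(s, a)   # learned score of the local state-action pair
            s = dynamics(s, a)     # self-supervised forward model
        if total > best_value:
            best_value, best_first = total, actions[0]
    return best_first

# toy usage with stand-in models
dyn = lambda s, a: s + 0.1 * a
sco = lambda s, a: -abs(s)         # prefer staying near zero
print(plan_first_action(0.5, dyn, sco, action_space=np.array([-1.0, 0.0, 1.0])))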
The paper proposes a framework (Scoring-Aggregating-Planning (SAP)) for learning task-agnostic priors that allow generalization to new tasks without finetuning. The motivation for this is very clear - humans can perform much better than machines in zero-shot conditions because humans have learned priors about objects, semantics, physics, etc. This is achieved by learning a scoring function based on the final reward and a self-supervised learned dynamics model. Overall, the paper is very clear and easy to follow. The presented task is realistic and important, and the paper seems to address it in a reasonable approach. However, the evaluation seems lacking to me: it convinced me that SAP works, but not that it works better than existing approaches (see below), and especially not that it is better in the zero-shot test environment. The (anonymized) website contains nice videos that support the submission. Questions for the authors: 1. Page 3, 3rd paragraph of Section 3: the paper says that "The proposed formulation requires much less information and thus more realistic and feasible" - I agree that this is more realistic, but is it really more feasible? The requirement of much less information makes the proposed formulation much more sparse. 2. A basic assumption in the SAP framework is that a local region score is a sum of all the sub-regions. As phrased in the paper: "in the physical world, there is usually some level of rotational or transnational invariance". I'm not sure that this assumption makes sense in either the Mario case or other tasks, e.g., robotics. Doesn't it matter if you have a "turtle" right in front of you (which means that the turtle is going to hit you), or below you (which means that you are going to hit the turtle)? 3. A question about the planning phase - page 5 says: "We select the action sequence that gives us the best-aggregated score and execute the first action". Do you select the entire sequence of actions in the new environment in advance? Can the agent observe the new state after every action, and decide on the next action based on the actual state that the action has reached, rather than on the state that was approximated in advance? In other words - what happens if the first action in the new test environment yields an unexpected state, that was not predicted well by the dynamics model; does the agent continue on the initially planned trajectory (that ignores the "surprise"), or does it compute its next action based on the unexpected state? 4. Experiments: in Gridworld and Mario - are there any stronger baselines in the literature, or reductions of known baselines to the zero-shot scenario? Are the chosen "Human Priors", BC-random and BC-SAP just strawmen? Since the main goal of this paper is the zero-shot task, what would convince me is a state-of-the-art model that does possibly *better than SAP on the training level*, but *worse than SAP in generalizing to the new level*. Additionally, are there other baselines that specifically address the zero-shot task in the literature? Minor (did not impact score): Page 2, 1st paragraph: "... we show that how an intelligent agent"... Page 3, 3rd paragraph: "... in model-free RL problem" - missing an "a" or "problem*s*"? Page 3, 3rd paragraph: ". Model based method ..." - missing an "a" as well? Page 4, 1st paragraph: "... utilizing the to get the ..." Page 4, last row: missing a dot after the loss equation, before the word "In".
Page 7, Table 1: "BC-random" is called "BC-data" in the text. Aren't they the same thing?
iclr_2020_SJxRKT4Fwr
Many real-world applications involve multivariate, geo-tagged time series data: at each location, multiple sensors record corresponding measurements. For example, an air quality monitoring system records PM2.5, CO, etc. The resulting time-series data often has missing values due to device outages or communication errors. In order to impute the missing values, state-of-the-art methods are built on Recurrent Neural Networks (RNN), which process each time stamp sequentially, prohibiting the direct modeling of the relationship between distant time stamps. Recently, the self-attention mechanism has been proposed for sequence modeling tasks such as machine translation, significantly outperforming RNN because the relationship between any two time stamps can be modeled explicitly. In this paper, we are the first to adapt the self-attention mechanism for multivariate, geo-tagged time series data. In order to jointly capture the self-attention across different dimensions (i.e. time, location and sensor measurements) while keeping the size of the attention maps reasonable, we propose a novel approach called Cross-Dimensional Self-Attention (CDSA) to process each dimension sequentially, yet in an order-independent manner. On three real-world datasets, including our newly collected NYC-traffic dataset, extensive experiments demonstrate the superiority of our approach compared to state-of-the-art methods for both imputation and forecasting tasks.
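For concreteness, the sketch below shows what self-attention applied along a single dimension of a (time, location, measurement) tensor looks like, with the other dimensions folded into the batch; the actual CDSA variants (Shared, Decomposed, etc.) differ in how the per-dimension attention maps are defined and combined, so the shapes and the sequential application here are assumptions for illustration only:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_along_axis(X, Wq, Wk, Wv, axis):
    # X has shape (T, L, M, d); move the chosen dimension into the sequence
    # position and treat every other slice as part of the batch.
    Xm = np.moveaxis(X, axis, -2)                                   # (..., N_axis, d)
    Q, K, V = Xm @ Wq, Xm @ Wk, Xm @ Wv
    A = softmax(Q @ np.swapaxes(K, -1, -2) / np.sqrt(Q.shape[-1]))  # attention within that dimension only
    return np.moveaxis(A @ V, -2, axis)

rng = np.random.default_rng(0)
T, L, M, d = 24, 5, 3, 16
X = rng.normal(size=(T, L, M, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Y = attend_along_axis(X, Wq, Wk, Wv, axis=0)   # attention over time; the location and
                                               # measurement dimensions would be handled in turn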
This paper proposes a Transformer-based model with cross-dimensional self-attention for multivariate time series imputation and forecasting. The authors consider time series with observations collected at different locations (L), for different measurements (M), and at different timestamps (T). The authors describe 4 different self-attention mechanisms based on how the three dimensions are handled, and it turns out the proposed Decomposed approach achieves the best performance and has moderate model complexity among the four. Experiments on several traffic and air quality datasets show the superiority of the proposed model. Overall the problem and the proposed model are well motivated. Handling time series data with missing values is quite important, and the authors design a novel cross-dimensional attention mechanism which is reasonable and performs well. In particular, the authors compare against several recent RNN-based models. The proposed method outperforms these strong baselines in imputation and long-term forecasting tasks. However, I do have a few questions and concerns. It would be quite helpful to see training time and model size comparisons with the baselines, to validate the claim in the introduction about `replacing the conventional RNN-based models to speed up training`. The proposed model treats all three dimensions equally, with direct attention between every two variables, and is independent of the order. Though effective and mathematically clean, I am not sure the temporal/spatial smoothness and dependencies in time series are properly modeled in this way -- as time series are not the same as embedded sequences in NLP. This may explain why the performance on short-term forecasting is relatively unsatisfying. It seems that the proposed model is designed for data missing completely at random (implied by the statement `... due to unexpected sensor damages or communication errors` from the introduction, and the experimental settings on adding missing values). Many missing values in time series may be missing at random or even missing not at random. About the experiments: Two of the three datasets mentioned in the main paper have M=1, which degrades the proposed model from 3-dimensional to 2-dimensional. Why is the forecasting experiment conducted on METR-LA, while the imputation-for-forecasting experiment is conducted on a different dataset and without comparing against other forecasting baselines? What are the metrics used in Tables 6, 7, 9? Several metrics are used (RMSE, MSE, MAPE, MAE, MRE) while results on different datasets are shown in different, but not all, metrics. Is there any reason to cherry-pick metrics for different experiments? In Table 4, the proposed method's results in RMSE are consistently better than those shown in MAE compared with other baselines. Any explanation would be useful. The overall idea is relatively easy to follow, while some detailed descriptions should be added or clarified. When taking health-care data as an example of geo-tagged time series, could you explain or provide references? Figure 2 only demonstrates 3 attention mechanisms, and the Shared one should also be included. S(i,j) is used in Section 3.1 without an explicit formal definition. Please clarify the sentence about \sigma below Equation (4). Please refer to the specific section number in the Supplement when it is used (e.g., on Pages 5 and 7). On which dataset and under what settings are the results in Table 1 computed? The numbers are helpful, but it would be better if the results were computed based on the actual hyperparameters used (e.g., T, L, M, d_V, etc).
Minor typos: Page 3, Paragraph of RNN-based data imputation methods: `...indistinguishable. so that...` Page 3, Section 3.1, `..where Then, ...` Page 14, A' is used to denote the reshaped tensor, while \tilde is used in the main paper. Page 16, `During testing,`
iclr_2020_rJg7BA4YDr
Turing complete computation and reasoning are often regarded as necessary precursors to general intelligence. There has been a significant body of work studying neural networks that mimic general computation, but these networks fail to generalize to data distributions that are outside of their training set. We study this problem through the lens of fundamental computer science problems: sorting and graph processing. We modify the masking mechanism of a transformer in order to allow it to implement rudimentary functions with strong generalization. We call this model the Neural Execution Engine, and show that it learns, through supervision, to numerically compute the basic subroutines comprising these algorithms with near perfect accuracy. Moreover, it retains this level of accuracy while generalizing to unseen data and long sequences outside of the training distribution.
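For concreteness, the kind of composition evaluated here, conventional control flow calling a learned numerical subroutine, can be pictured roughly as below (a sketch only; find_min stands in for the learned NEE module, and the binary number encoding it operates on is omitted):

def selection_sort_with_learned_min(values, find_min):
    # The loop (control flow) stays conventional; only the subroutine that
    # returns the index of the minimum is delegated to a learned module.
    values, out = list(values), []
    while values:
        i = find_min(values)           # learned subroutine: argmin over the remaining items
        out.append(values.pop(i))
    return out

# exact stand-in for the learned module, for illustration
exact_min = lambda v: min(range(len(v)), key=v.__getitem__)
print(selection_sort_with_learned_min([5, 2, 9, 1], exact_min))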
This paper investigates an interesting problem of building a program execution engine with neural networks. The authors propose a transformer-based model to learn basic subroutines, such as comparison, find min, and addition, and apply them in several standard algorithms, such as sorting and Dijkstra's. Pros: 1. The method achieves generalization to sequences longer than those in the training set in several algorithms. 2. The method represents numbers in binary form and the visualization shows that it learns embeddings for fixed-range integer numbers in a well-structured manner. 3. The learned NEE subroutines are tested in a variety of standard algorithms, such as multiple sorting algorithms and Dijkstra's shortest path algorithm. The experiments further demonstrate that several NEE subroutines can be composed together in complex algorithms. Cons: 1. NEE mostly focuses on learning low-level subroutines such as number comparison or addition. Therefore, it has to be used along with conventional subroutines, and cannot completely replace the full execution in complex algorithms, which have sophisticated control logic, such as if/else and for-loops. When the transformer model is used alone in the sorting task (Sec. 4.1), the performance degrades substantially as the sequence length gets longer. 2. Although the method achieves some degree of strong generalization, it lacks a formal way to verify the correctness of the learned subroutines, as opposed to prior work on program synthesis (Cai et al. 2017) that can prove the generalization of their model with recursion. 3. The method relies on detailed execution traces for supervised learning, which can be costly to obtain. Questions: 1. Confusing sentence: "Can we retain high attention resolution by restricting the model to only observe the first scenario repeatedly?" Can you elaborate on what you meant here? 2. From Figure 1, it seems that the model with dot product attention generalizes better on longer sequences than the one with all modifications. What's the reason? 3. I would like to better understand the limitations of these learned NEE subroutines on long sequences. For instance, in Figure 8 and Figure 9, how would the model perform beyond the lengths of the sequences tested here? Would the performance stay at 100% or decrease gradually as the sequences get even longer? 4. I am curious to know how this method could be extended to support more complex number systems, such as floating-point numbers, and more complex data structures beyond sequences, such as binary trees and priority queues. I'd love to hear what the authors have to say about this. 5. I'd also like to know if the number embeddings learned in different algorithms would exhibit different structures (this could be examined by visualizing the number embeddings learned in different tasks). 6. As NEE focuses on learning the basic subroutines while NPI aims to learn high-level program executions, I think it'd be very interesting to see how these two could combine their complementary strengths to build a complete neural-based execution engine. Typos: Select he node --> Select the node
iclr_2020_HJgKYlSKvr
In this paper we present, to the best of our knowledge, the first method to learn a generative model of 3D shapes from natural images in a fully unsupervised way. For example, we do not use any ground truth 3D or 2D annotations, stereo video, or ego-motion during training. Our approach follows the general strategy of Generative Adversarial Networks, where an image generator network learns to create image samples that are realistic enough to fool a discriminator network into believing that they are natural images. In contrast, in our approach the image generation is split into 2 stages. In the first stage a generator network outputs 3D objects. In the second, a differentiable renderer produces an image of the 3D objects from random viewpoints. The key observation is that a realistic 3D object should yield a realistic rendering from any plausible viewpoint. Thus, by randomizing the choice of the viewpoint, our proposed training forces the generator network to learn an interpretable 3D representation disentangled from the viewpoint. In this work, a 3D representation consists of a triangle mesh and a texture map that is used to color the triangle surface by using the UV-mapping technique. We provide an analysis of our learning approach, expose its ambiguities, and show how to overcome them. Experimentally, we demonstrate that our method can learn realistic 3D shapes of faces by using only the natural images of the FFHQ dataset.
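For concreteness, the two-stage generation described above amounts to a training step of roughly the following shape (a sketch; the generator G, discriminator D, and especially the differentiable renderer are placeholders with assumed interfaces, and the non-saturating GAN loss is an assumption, not necessarily the authors' choice):

import math
import torch
import torch.nn.functional as F

def sample_random_viewpoint(batch_size):
    # random azimuth over the full circle, limited elevation range (assumed)
    azimuth = torch.rand(batch_size) * 2 * math.pi
    elevation = (torch.rand(batch_size) - 0.5) * 0.6
    return torch.stack([azimuth, elevation], dim=1)

def generator_step(G, D, renderer, opt_g, batch_size=8, z_dim=128):
    z = torch.randn(batch_size, z_dim)
    mesh, texture = G(z)                            # stage 1: 3D object (mesh + UV texture)
    views = sample_random_viewpoint(batch_size)
    fake_images = renderer(mesh, texture, views)    # stage 2: render from random viewpoints
    loss_g = F.softplus(-D(fake_images)).mean()     # should look realistic from any plausible viewpoint
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_g.item()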
I thank the authors for the rebuttal and the additional experiments. The additions do partially address my concerns, although not entirely. For instance, the experiments on non-face classes are very preliminary and it is unclear if they work at all (no other views shown). I hope the authors are right that the method will work on other classes after some tuning, but this is not demonstrated in the paper. Overall, I am quite in a borderline mode. I think the paper looks promising and after further improving the experimental evaluation it can become a great publication. But for now the experiments, especially the new ones, look somewhat incomplete and rushed, more suitable for a workshop paper. Therefore, I still lean towards rejection. --- The paper proposes an approach to learning the 3D structure of images without explicit supervision. The proposed model is a Generative Adversarial Network (GAN) with an appropriate task-specific structure: instead of generating an image directly with a deep network, three intermediate outputs are generated first and then processed by a differentiable renderer. The three outputs are the 3D geometry of the object (represented by a mesh in this work), the texture of the object, and the background image. The final output of the model is produced by rendering the geometry with the texture and overlaying on top of the background. The whole system can be trained end-to-end with a standard GAN objective. The method is applied to the FFHQ dataset of face images, where it produces qualitatively reasonable results. I am in the borderline mode about this paper. On one hand, I believe the task of unsupervised learning 3D from 2D is interesting and important, and the paper makes an interesting contribution in this direction. On the other hand, the experimental evaluation is quite limited: the results are purely qualitative, on a single dataset, and do not contain much analysis of the method. It would be great if the authors could add more experiments to the paper during the discussion phase. More detailed comments: Pros: 1) The paper is presented well, is easy to read. I like the detailed table with comparison to related works, and a good discussion of the limitations of the method and the tricks involved in making it work. I also like section 4 clearly discussing the assumptions of the work, although I think it could be shortened quite a bit. 2) The proposed method is reasonable and seems to work in practice, judging from the qualitative results. cons: 1) The experiments are limited. 1a) There are no quantitative results. I understand it is non-trivial to evaluate the method on 3D reconstruction, although one could either train a network inverting the generator, or, perhaps simpler, apply a pre-trained image-to-3D network to the generated images. But at least some image quality measures (FID, IS) could be reported. 1b) The method is only trained on one dataset of faces. It would be great to apply the method to several other datasets as well, for instance, cars, bedrooms, animal faces, ShapeNet objects. This would showcase the generality of the approach. Otherwise, I am worried the method is fragile and only applies to very clean and simple data. Also, if the method is only applied to faces, it makes sense to mention faces in the title. 1c) It would be very helpful to have more analysis of the different variants of the method, ideally with quantitative results (again, at least some image quality results). 
Figure 3 goes in this direction, but it is very small and does not give a clear understanding of the relative performance of different variants. 2) A very relevant citation is missing: HoloGAN by Nguyen-Phuoc et al. [1]. It is not yet published, but has been on arXiv for some time. I am a bit unsure about the ICLR policy in this case (this page https://iclr.cc/Conferences/2019/Reviewer_Guidelines suggests that arXiv papers may be formally considered prior work, in which case it should be discussed in full detail), but at least a brief mention would definitely be good. [1] HoloGAN: Unsupervised learning of 3D representations from natural images. Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, Yong-Liang Yang. arXiv 2019.
iclr_2020_SJgVU0EKwS
We propose precision gating (PG), an end-to-end trainable dual-precision quantization technique for deep neural networks. PG computes most features in a low precision and only a small proportion of important features in a higher precision. Precision gating is very lightweight and widely applicable to many neural network architectures. Experimental results show that precision gating can greatly reduce the average bitwidth of computations in both CNNs and LSTMs with negligible accuracy loss. Compared to state-of-the-art counterparts, PG achieves the same or better accuracy with 2.4× less compute on ImageNet. Compared to 8-bit uniform quantization, PG obtains a 1.2% improvement in perplexity per word with 2.8× computational cost reduction on LSTM on the Penn Tree Bank dataset. Precision gating has the potential to greatly reduce the execution costs of DNNs on both commodity and dedicated hardware accelerators. We implement the sampled dense-dense matrix multiplication kernel in PG on CPU, which achieves up to 8.3× wall clock speedup over the dense baseline.
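For concreteness, a rough sketch of the gating structure described above: each output feature is first predicted from a few most-significant bits of the activations, and only the features whose cheap estimate exceeds a learnable threshold are recomputed at high precision (toy fixed-point scheme; the bit handling and the treatment of the weights are simplifications, not the authors' kernels):

import numpy as np

def truncate(x, frac_bits):
    # toy fixed-point truncation keeping `frac_bits` fractional bits
    scale = 2.0 ** frac_bits
    return np.floor(x * scale) / scale

def precision_gated_layer(x, W, delta, lb_bits=2, hb_bits=8):
    y_low = truncate(x, lb_bits) @ W      # phase 1: cheap low-precision estimate
    important = y_low > delta             # gate on the learnable threshold delta
    y_high = truncate(x, hb_bits) @ W     # phase 2 (dense here for clarity; a real kernel
                                          # would recompute only the gated outputs)
    return np.where(important, y_high, y_low), important.mean()

rng = np.random.default_rng(0)
x = np.maximum(rng.normal(size=(4, 32)), 0)   # ReLU-like non-negative activations
W = rng.normal(size=(32, 16))
y, frac_high = precision_gated_layer(x, W, delta=0.0)
print("fraction computed at high precision:", frac_high)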
This paper introduces Precision Gating, a novel mechanism to quantize neural network activations to reduce the average bitwidth, resulting in networks with fewer bitwise operations. The idea is to have a learnable threshold Delta that determines if an output activation should be computed in high or low precision, based on the most significant bits of the value. Assuming that high activations are more important, these are computed at higher precision. I agree with the following three key contributions listed in the paper (slightly re-formulated): 1. Introducing Precision Gating (PG), the first end-to-end trainable method that enables dual-precision execution of DNNs and is applicable to a wide variety of network architectures. 2. Precision gating enables DNN computation with a better average bitwidth to accuracy tradeoff than other state-of-the-art quantization methods. Combined with its lightweight gating logic, PG demonstrates the potential to reduce DNN execution costs in both commodity and dedicated hardware. 3. Unlike prior works that focus only on inference, precision gating achieves the same sparsity during back-propagation as forward propagation, which reduces the computational cost for both passes. These contributions are novel and experimental evidence is provided for multiple networks and datasets. The paper is well-written and provides insightful figures to showcase the strengths of the presented method. Related work is adequately cited. The paper does not contain much theory, but wherever possible equations are provided to illustrate in detail how the method works. Experimental results are shown for the datasets CIFAR-10 with ResNet-18 and ShiftNet-20, and ImageNet with ShuffleNet V2 0.5x. On both datasets, PG outperforms uniform quantization, PACT, Fix-Threshold and SeerNet in terms of top-1 accuracy and average bitwidth. What I am missing is information about the variability of results, since there are no error bars. Are the results averaged over multiple trials (if yes, how many?), and is there a difference in variance between the methods? I realize that adding standard deviations to all results in the tables might be infeasible, but a qualitative statement would be interesting. In particular, the random initialization of the hb bits could play a bigger role than that of the lb bits. The two variants of PG, with and without sparse backpropagation, are also investigated, showing that sparse backpropagation leads to more sparsity. To show that the resulting lower average bitwidth gained with PG leads to increased performance, the authors implement it in Python (running on CPU) and measure the wall clock time to execute the ResNet-18 model. Speedups $> 1$ are shown for every layer when using PG. Evidence from other papers is cited to argue that similar speedups are expected on GPUs. Even though at the moment it is unclear to me how statistically significant the results are, and I strongly recommend commenting on this in the paper, I think the idea of PG and the demonstrated benefits make the paper interesting enough to be accepted at ICLR. I also have a few questions that I could not fully resolve from the paper: 1. I am a bit confused by what you call features. Fig. 2 shows by example how the method works for an input $I$. Is $I$ a single number, i.e. a single entry of your input vector, or do you mean the complete input vector? 2. Could you give a bit more insight into how you tuned your hyperparameters, especially $\delta$ and $\sigma$? 3. What exactly does e.g. $\delta=-1$ mean?
Does it mean that the network ideally should compute at high precision when the result obtained from only the most significant bits is above -1? From a hardware point of view, the paper focuses on GPU implementations. I would have hoped for a discussion of suitable custom hardware that could support PG most efficiently. Minor comments that I would be interested in but did not influence my score - It seems to me that on the top-left image of Fig. 3, one blue circle (the second largest) is one too many? The first part shows 8 dots, the middle and right only seven. - Can you please cite a source for the claim that DNN activations have outliers (Sec. 3.4)? - You could also define e.g. one $\delta$ and $\Delta$ per layer, couldn't you? It would be interesting to see if e.g. thinning out the precision over depth is possible / has advantages.
iclr_2020_Bkx5XyrtPS
We show that for any convex differentiable loss, a deep linear network has no spurious local minima as long as it is true for the two layer case. This reduction greatly simplifies the study on the existence of spurious local minima in deep linear networks. When applied to the quadratic loss, our result immediately implies the powerful result by Kawaguchi (2016). Further, with the recent work by Zhou & Liang (2018), we can remove all the assumptions in (Kawaguchi, 2016). This property holds for more general "multi-tower" linear networks too. Our proof builds on the work in (Laurent & von Brecht, 2018) and develops a new perturbation argument to show that any spurious local minimum must have full rank, a structural property which can be useful more generally.
The motivation of this paper is that training deep neural networks seems not to suffer from local minima, and it tries to explain this phenomenon by showing that all local minima of deep neural networks are global minima. The paper shows that for any convex differentiable loss function, a deep linear neural network has no so-called spurious local minima, which, to be specific, are local minima that are not global minima, as long as this is true for the two-layer neural network. The motivation is that, combined with the existing result that no spurious local minima exist for the quadratic loss in two-layer neural networks, this relation connecting two-layer and deeper linear neural networks immediately implies an existing result that all local minima are global minima, removing all assumptions. The result also holds for general "multi-tower" linear networks. Overall, this paper could be an improvement over existing results. It is well written and the proof steps are clear in general. However, there are some weaknesses that need clarification, especially regarding the novelty. Given reasonable clarifications in the response, I would be willing to change my score. Regarding novelty, it is unclear if the results from Lemma 1 to Theorems 1 and 2 are both being stated as novel results. The first part of the proof of Theorem 1 is obvious and straightforward, and the other direction has been used multiple times before, as claimed in the paper; what exactly is your novelty here? For the key technical claim of Lemma 1, it looks like this perturbation technique already exists in (Laurent & Brecht, 2018); why do you claim it as a novel argument? Besides novelty, there are also some other unclear pieces in this paper that need clarification: 1) Does the main result, which is "no spurious local minima for deep neural networks", hold for any differentiable convex loss other than the quadratic loss? How will Theorem 1 help us understand the mystery of neural networks? 2) How does the result help us understand non-linear deep neural networks, which are commonly used in practice? 3) The paper should give some explanation about why the results help with training neural networks.
iclr_2020_H1x5wRVtvS
For bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN), a versatile deep generative model that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework. VHE randomized GAN (VHE-GAN) encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator. We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance. This further motivates the development of VHE-raster-scan-GAN that generates photorealistic images in not only a multi-scale low-to-high-resolution manner, but also a hierarchical-semantic coarse-to-fine fashion. By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks. PyTorch code is provided at https://drive.google.com/file/d/1UFQ7yS6Lobg ZGAxIwXS40vzWO8zwySV/view.
This paper proposes a combined architecture for image-text modeling. Though the proposed architecture is extremely detailed, the authors explain clearly the overarching concepts and methods used, within limited space. The experimental results are extremely strong, especially on sub-domains where conditional generative models have historically struggled, such as images with angular, global features - often mechanical or human-constructed objects. "Computers" and "cars" images in Figure 2 show this quite clearly. The model also functions for tagging and annotating images - performing well compared to models designed *only* for this task. The authors have done a commendable job adding detail, further analysis, and experiments in the appendix of the paper. Combined with the included code release, this paper should be of interest to many. My chief criticisms come from the density of the paper - while it is difficult to distill such a complex model into 8 pages, and the included appendix clarifies many questions in the text body, it would be worth further passes through the main paper with a specific focus on clarity and brevity, to aid in the accessibility of this work. As usual, more experiments are always welcome, and given the strengths of GAN-based generators for faces, a text-based facial image generator could have been a great addition. The existing experiments are more than sufficient for proof-of-concept though. Finally, though this version of the paper includes code directly in a Google Drive link, it would be ideal for the final version to reference a GitHub code link - again to aid access for interested individuals. Being able to read the code online, without downloading and opening it locally, can be nice, along with other benefits of an open source release. However, the authors should release the code however they see fit; this is more of a personal preference on the part of this reviewer. To improve my score, the primary changes would be more editing and re-writing, focused on clarity and brevity of the text in the core paper.
iclr_2020_r1eU1gHFvH
Localist coding schemes are more easily interpretable than the distributed schemes but generally believed to be biologically implausible. Recent results have found highly selective units and object detectors in NNs that are indicative of local codes (LCs). Here we undertake a constructionist study on feed-forward NNs and find LCs emerging in response to invariant features, and this finding is robust until the invariant feature is perturbed by 40%. Decreasing the number of input data, increasing the relative weight of the invariant features and large values of dropout all increase the number of LCs. Longer training times increase the number of LCs and the turning point of the LC-epoch curve correlates well with the point at which NNs reach 90-100% on both test and training accuracy. Pseudo-deep networks (2 hidden layers) which have many LCs lose them when common aspects of deep-NN research are applied (large training data, ReLU activations, early stopping on training accuracy and softmax), suggesting that LCs may not be found in deep-NNs. Switching to more biologically feasible constraints (sigmoidal activation functions, longer training times, dropout, activation noise) increases the number of LCs. If LCs are not found in the feed-forward classification layers of modern deep-CNNs these data suggest this could either be caused by a lack of (moderately) invariant features being passed to the fully connected layers or due to the choice of training conditions and architecture. Should the interpretability and resilience to noise of LCs be required, this work suggests how to tune a NN so they emerge.
Paper Overview: This paper aims to study when hidden units provide local codes by analyzing the hidden units of trained fully connected classification networks under various architectures and regularizers. The main text primarily studies networks trained on a dataset where binary inputs are structured to represent 10 classes, with each input containing a subset of elements indicative of the class label. The work also studies fully connected networks trained on the MNIST dataset (with the addition of some pixels indicating each class label). After enumerating the number of local codes observed under these different settings, the authors conclude the following: (1) "common" properties of deep neural networks & modern datasets seem to decrease the number of local codes; (2) specific architectural choices, regularization choices & dataset choices seem to increase the number of local codes (i.e. increasing dropout, decreasing dataset size, using sigmoidal activations etc.). The work then states that these insights may suggest how to train networks to have local codes emerge. Review: I particularly liked the simple dataset the authors construct in determining whether local codes emerge in hidden units, especially since deep networks and datasets used in practice are overly complex to gain insight for this behavior. However, I find the overall message to be a bit confusing, especially in regard to using the analysis to construct networks with emergent local codes. In particular, I feel that the authors could strengthen this work greatly by using their findings to train a deeper neural network for which local codes do emerge on a more realistic dataset. Furthermore, this work would be significantly more impactful if a network with more local codes does generalize better, but that is unclear as of now (especially since local codes seem not to emerge in practical settings even though these networks are state of the art). Criticisms/Questions: (1) Main: I'm somewhat confused about the main takeaway from this work in terms of understanding when local codes actually emerge in deep neural networks. The authors seem to have a number of very specific conditions that are both architecture and dataset dependent, and overall I feel the message would be much stronger if the authors were able to rigorously study perhaps just a few of these conditions across many more settings. For example, even just studying the impact of the activation and providing some conditions/theory or a clearer understanding of which nonlinearities lead to more local codes would be insightful. The current work seems to be more broad instead of tackling one of these properties in depth. (2) I am a bit confused about the thresholds used by the authors in determining whether a hidden unit provides a local code or not. Do you just determine if there is some threshold given by the unit that separates out all points of one class from the rest? (A concrete version of the criterion I assumed is sketched at the end of this review.) (3) After several experiments, there are some heavy conjectures trying to rationalize the results of the experiments. As an example, the authors provide statements like "ReLU is a more powerful activation function than sigmoid." However, this statement in particular is not exactly correct, since given enough width, networks with either activation function should be able to interpolate the training data. Another example of this is at the bottom of page 7, when the authors provide 5 possible explanations as to why local codes don't emerge in modern training settings.
It is unclear which of these explanations are true, but it would be great if the authors could actually provide a cleaner rationalization. Minor criticisms: (1) I've seen a number of different conventions for how to refer to the depths of networks, and I believe what you refer to as 3 layer networks would conventionally be referred to as 2-layer networks for theory audiences (as there are 2 weight matrices involved) or 1-hidden layer networks for empirical audiences. I think adding a figure in the appendix for your architecture would clear up any confusion immediately. (2) Some of the formatting is a bit awry: there are references to figures that appear as ?? (see page 8 paragraph 3). (3) It would be nice to provide a consistent legend in some of the figures. For example, Figure 4b has no indication for which settings the colors represent. (4) As there seem to be a lot of experiments numbered 1-12, I think it would be much more readable to have different subsections on the different settings and outline the experiments in the subsection more clearly. Referring back to these numbers on page 3 & 4 constantly makes it less readable. (5) I quite liked Figure 8 in the Appendix. I feel that this would have been a great figure to put towards the front of the paper to provide an example of local codes emerging.
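(To make question (2) above concrete, the sketch below is the criterion I assumed: a hidden unit counts as a local code if some threshold on its activation separates all items of one class from all items of every other class. If the paper uses a softer criterion, e.g. a selectivity score, that would be worth stating explicitly.)

import numpy as np

def is_local_code(activations, labels):
    # activations: (n_items,) responses of one hidden unit over the dataset
    for c in np.unique(labels):
        in_class = activations[labels == c]
        out_class = activations[labels != c]
        # a separating threshold exists iff the two activation ranges do not overlap
        if in_class.min() > out_class.max() or in_class.max() < out_class.min():
            return True
    return False

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=200)
acts = rng.normal(size=200) + 10.0 * (labels == 4)   # a unit highly selective for class 4
print(is_local_code(acts, labels))                   # True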
iclr_2020_BJxQxeBYwH
Graph Neural Nets (GNNs) have received increasing attention, partially due to their superior performance in many node and graph classification tasks. However, there is a lack of understanding of what they are learning and how sophisticated the learned graph functions are. In this work, we propose a dissection of GNNs on graph classification into two parts: 1) the graph filtering, where graph-based neighbor aggregations are performed, and 2) the set function, where a set of hidden node features are composed for prediction. To study the importance of both parts, we propose to linearize them separately. We first linearize the graph filtering function, resulting in the Graph Feature Network (GFN), which is a simple lightweight neural net defined on a set of graph-augmented features. Further linearization of GFN's set function results in the Graph Linear Network (GLN), which is a linear function. Empirically we perform evaluations on common graph classification benchmarks. To our surprise, we find that, despite the simplification, GFN could match or exceed the best accuracies produced by recently proposed GNNs (with a fraction of the computation cost), while GLN underperforms significantly. Our results demonstrate the importance of the non-linear set function, and suggest that linear graph filtering with a non-linear set function is an efficient and powerful scheme for modeling existing graph classification benchmarks.
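For concreteness, a minimal sketch of a GFN-style model as described above: linear, multi-hop graph filtering produces graph-augmented node features, and a non-linear set function (per-node MLP, pooling, classifier) makes the prediction. Layer sizes, the number of propagation hops, and the unnormalized adjacency are assumptions for illustration, not the paper's exact configuration:

import torch
import torch.nn as nn

class GFN(nn.Module):
    def __init__(self, in_dim, hidden=64, n_classes=2, hops=3):
        super().__init__()
        self.hops = hops
        self.node_mlp = nn.Sequential(
            nn.Linear(in_dim * (hops + 1), hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, A, X):
        feats, H = [X], X
        for _ in range(self.hops):      # linear graph filtering: [X, AX, A^2 X, ...]
            H = A @ H                   # no learnable weights inside the propagation
            feats.append(H)
        Z = self.node_mlp(torch.cat(feats, dim=1))   # non-linear set function on node features ...
        return self.classifier(Z.sum(dim=0))         # ... with sum pooling over the node set

n, d = 10, 5
A = (torch.rand(n, n) < 0.3).float()
A = ((A + A.t()) > 0).float()          # symmetric 0/1 adjacency for the toy graph
X = torch.rand(n, d)
logits = GFN(d)(A, X)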
This paper presents a dissection analysis of graph neural networks by decomposing GNNs into two parts: a graph filtering function and a set function. Although this decomposition may not be unique in general, as pointed out in the paper, these two parts can help analyze the impact of each part in the GNN model. Two simplified versions of GNNs are then proposed by linearizing the graph filtering function and the set function, denoted as GFN and GLN, respectively. Experimental results on benchmark datasets for graph classification show that GFN can achieve comparable or even better performance compared to recently proposed GNNs with higher computational efficiency. This demonstrates that the current GNN models may be unnecessarily complicated and overkill for graph classification. These empirical results are pretty interesting to the research community, and can encourage other researchers to reflect on whether it is worth having more complex and more computationally expensive GNN models to achieve similar or even inferior performance. Overall, this paper is well-written and the contribution is clear. I would like to recommend a weak accept for this paper. If the suggestions below can be addressed in the author response, I would be willing to increase the score. Suggestions for improvement: 1) Considering the experimental results in this paper, it is possible that the existing graph classification tasks are not that difficult, so that the simplified GNN variant can also achieve comparable or even better performance (easier to learn). This can be conjectured from the consistently better training performance but comparable testing performance of the original GNN. Another possibility is that even though the original GNN has larger model capacity, it is not able to capture more useful information from the graph structure, even on tasks that are more challenging than graph classification. However, this paper lacks such in-depth discussion; 2) Besides the graph classification task, it would be better to explore the performance of the simplified GNN on other graph learning tasks, such as node classification, and various downstream tasks using graph neural networks. This can help demystify the question raised in the previous point; 3) The matrix \tilde{A} in Equation 5 is not well explained (it is described as "similar to that in Kipf and Welling (2016)"). It would be clearer to directly point out that it is the adjacency matrix, as described later in the paper.
iclr_2020_SkxUrTVKDH
Over-parameterization is ubiquitous nowadays in training neural networks, benefiting both optimization, in seeking global optima, and generalization, in reducing prediction error. However, compressive networks are desired in many real-world applications, and direct training of small networks may be trapped in local optima. In this paper, instead of pruning or distilling over-parameterized models into compressive ones, we propose a new approach based on differential inclusions of inverse scale spaces that generates a family of models from simple to complex ones by coupling gradient descent and mirror descent to explore model structural sparsity. It has a simple discretization, called the Split Linearized Bregman Iteration (SplitLBI), whose global convergence in deep learning is established: from any initialization, the algorithmic iterations converge to a critical point of the empirical risk. Experimental evidence shows that SplitLBI may achieve comparable and even better performance than other training algorithms on ResNet-18 in large-scale training on the ImageNet-2012 dataset etc., while with early stopping it unveils effective subnet architectures with test accuracy comparable to dense models after retraining, instead of pruning well-trained ones.
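For concreteness, and with the caveat that the exact update ordering and scalings may differ from the paper, a rough reading of the Split LBI iteration (following Huang et al., 2016) on a toy problem is the following: the weights W are coupled to a sparsity variable Gamma through a quadratic penalty, W follows gradient descent on the augmented loss, and Gamma follows a linearized Bregman (mirror-descent) update whose soft-thresholding keeps it sparse along the path:

import numpy as np

def soft_threshold(z, t=1.0):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def split_lbi_step(W, Gamma, z, grad_loss, alpha=0.01, kappa=10.0, nu=1.0):
    # augmented objective: loss(W) + ||W - Gamma||^2 / (2 * nu)
    g_W = grad_loss(W) + (W - Gamma) / nu
    g_Gamma = (Gamma - W) / nu
    W = W - kappa * alpha * g_W            # gradient descent on the weights
    z = z - alpha * g_Gamma                # z accumulates the gradient history for Gamma
    Gamma = kappa * soft_threshold(z)      # soft-thresholding keeps Gamma (structurally) sparse
    return W, Gamma, z

# toy usage: sparse least squares
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
W_true = np.zeros(20); W_true[:3] = [3.0, -2.0, 1.5]
y = X @ W_true + 0.1 * rng.normal(size=50)
grad = lambda W: X.T @ (X @ W - y) / len(y)
W, Gamma, z = np.zeros(20), np.zeros(20), np.zeros(20)
for _ in range(300):
    W, Gamma, z = split_lbi_step(W, Gamma, z, grad)
print("nonzeros in Gamma:", int((Gamma != 0).sum()))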
Summary ======= This paper aims to train sparse neural networks efficiently, by jointly optimizing the weights and sparsity structure of the network. It applies the Split Linear Bregman Iteration (Split LBI) method from [1] in a large-scale setting, to train deep neural networks. The approach works by considering optimization in a joint space (W, \Gamma) consisting of the network weights W and a new set of parameters \Gamma that model the structural sparsity of the network. The problem of learning sparse networks efficiently is important for modern applications that run on embedded devices, as well as for fast training on specialized hardware. I think the approach is interesting as a potential alternative to more expensive methods for finding sparse networks, such as NAS and the successive pruning & re-training approach of [2]. Overall, the paper pursues a promising direction to induce network sparsity, and presents some interesting results. However, there are several issues with the experiments and the structure/presentation of the paper that should be addressed. Pros ==== * As far as I am aware, this is the first application of Split LBI to train deep neural networks. * It shows that joint optimization of the weights and sparsity structure performs on par with baselines that only optimize weights, on MNIST, CIFAR-10, and ImageNet. * It provides a global convergence analysis that shows that the weights optimized with Split LBI converge to a critical point of the training loss, regardless of initialization. * It provides an ablation study for the two hyperparameters \kappa and \nu of Split LBI. * I think the most interesting parts of the paper are those that examine the structural sparsity learned by Split LBI (Sections 4.3 and 4.4). In particular, the fact that Split LBI was able to match or outperform the test accuracy of several baselines (network slimming, soft filter pruning, and the method in Rethinking the Lottery Ticket Hypothesis) in a single training run (without re-training) is a nice result. Issues ====== * The Split LBI method is presented as a novel contribution (in the abstract: "we propose a new approach based on differential inclusions of inverse scale spaces ..."), but it was already described in detail in a paper that they cite ([1]) published at NeurIPS 2016. The only theoretical contribution of this paper is section 3: Global convergence of Split LBI. * The claim that "Split LBI demonstrates SOTA performance in large scale training on ImageNet" (from the abstract) is not correct, and needs to be qualified. This paper reports 70.55/89.56% (top 1 / top 5) accuracy; as far as I am aware, the current SOTA on ImageNet is [3], which achieves 86.4/98.0% (top 1 / top 5) accuracy. I think it would be better for this paper to argue that it achieves comparable performance to baselines with a particular architecture and training regime. * The structure and presentation of the paper could be improved in several ways, outlined as follows: - Most of the Methodology section discusses the Split LBI method from prior work. I would encourage the authors to split the Methodology section into a separate Background section for Split LBI, followed by a new section specifically about applying Split-LBI to convolutional and fully-connected layers in neural networks. - The writing is missing some details and explanations that would be very helpful for readers. For example, it should clearly state that the dimension of \Gamma is the same as the dimension of the weights W. 
- It would also be good to expand the explanation about why SplitISS avoids the parameter correlation problem, and what it means for \Gamma to have an orthogonal design? - The figures are too small to be readable without a lot of zooming. - The paper ends abruptly, with no conclusion. - The appendix contains some useful material, but much of it is not referenced from the main paper. I think some parts of the appendix could be moved to the main paper, for example the comparison of computational and memory costs between Split LBI and SGD. * I am not sure what the purpose of the comparisons between optimizers in Table 1 is. The motivation given in the abstract and introduction is to learn sparse networks online during optimization; it does not propose Split LBI as a new optimizer to compete with Adam. Couldn't one use the Adam update rule to optimize the weights in Split-LBI? I think it makes sense to compare Split LBI to standard training setups that do not enforce any sparsity, as well as to setups that use L1 and L2 regularization, but I do not think that Table 1 is set up correctly for this. Different optimizers are paired with different regularizers, and crucially the choice of hyperparameters is not discussed: how did you choose the coefficient of L1 regularization to be 1e-3? Additionally, many rows in Table 1 are missing data (e.g., the variants of Adam are only run for CIFAR-10). * Regarding experiments, it is not clear which experiments the authors actually performed, for which they took the results from previously published papers, and sometimes where the results come from at all. - In Table 1, there is an asterisk next to SGD-Mom-Wd that the authors say indicates "results from the official pytorch website." That would imply that the rest of the experiments were done by the authors (and the authors say in the caption "we use the official pytorch codes to run the competitors"). However, Table 2 (found in the Appendix, page 20) contains identical numbers and a sign # that, according to the authors, indicates results of their own experiments. That would mean that of all the SGD and Adam experiments shown in the table, the authors only performed SGD-naive and Adam-naive. Where do the other numbers come from? What does "official pytorch code to run the competitors" mean? Where is that code from? - Figure 4 contains baselines and SplitLBI results. Where do the numbers for the baselines come from? The caption mentions another paper, [5], but I did not find the source of the numbers in that paper. The caption seems to point to Table 9a in [5], but that table does not deal with Network Slimming, Soft-Filter Pruning, Scratch B, or Scratch-E. Additionally, Table 9a of [5] only contains results for VGG-16 and ResNet-50. Where do the baselines for ResNet-56 (Figure 4b) come from? * How is the proximal objective in Eq. 5 optimized? That is, how do you compute the argmin? * Figure 2 shows results for SLBI-1 and SLBI-10, but no discussion of what SLBI-1 and SLBI-10 mean. Also regarding Fig. 2, the authors claim that "Filters learned by ImageNet prefer non-semantic texture rather than shape and color." How did the authors come to this conclusion? I looked carefully at the filter visualizations, and I cannot see a clear difference between the filters learned by Split LBI and SGD.
* The computation time comparison in Table 11 (Appendix E) is a bit strange, because it shows that Adam takes 2x as long as SGD, which does not align with my experience; in practice, the wall-clock time is nearly identical between Adam and SGD. It would be good to provide more details about how the time was measured. Also, does the memory comparison measure only the memory used for model parameters (W and \Gamma), or also activation memory? Shouldn't Split LBI use 2x the memory of SGD (if measuring only the weights)? * In Figure 1, it looks like the initial magnitude of the filters is larger for SGD compared to Split LBI. Are the weights initialized in the same way? Also, why is the setup of the MNIST experiment in Fig.1 different from the setup in Table 1 (e.g., learning rate decay every 40 epochs vs every 30 epochs)? In addition, it looks like the first learning rate decay causes the filter weight magnitudes to flatten out and stay constant. * What is the learning rate schedule used for the runs in Figure 3? It looks like the lr decays at epoch 80 and 120, but this is only mentioned in table captions in the appendix. This should be stated in the main paper. Also, why is this a different training setup from that used for Table 1? I also noted that the authors do not intend to make the code public upon publication of the paper. On page 6, they state that "source codes will be released upon requests." At present, the preferred path is to make the code public upon publication of the paper. Minor points ============ * In the caption of Figure 3, it says "The results are repeated for 5 times. Shaded area indicates the variance; and in each round, we keep the exactly same initialization for each model." What is different between the 5 runs if the initialization is the same? * There are too many different colors used in Figures 1 and 2. Since the purple, green, and black boxes are important to see for figures 1 and 2, it is confusing to have to deal with additional blue, pink, and yellow boxes around every three. [1] Huang et al., Split LBI: An iterative regularization path with structural sparsity. NeurIPS 2016. [2] Frankle & Carbin, The Lottery Ticket Hypothesis: Finding sparse, trainable neural networks. ICLR 2019. [3] Touvron et al., Fixing the train-test resolution discrepancy. https://arxiv.org/abs/1906.06423. [4] He et al., Deep residual learning for image recognition. https://arxiv.org/abs/1512.03385. [5] Liu et al., Rethinking the value of network pruning. ICLR 2019. Post-rebuttal Update ==================== I thank the authors for their rebuttal, and for clarifying some details in the paper. * I think the experiments on sparsity are interesting. More efficient ways to find good sparse networks are certainly of interest to the community. * I appreciate that the authors released the source code. * In summary, this paper applies Split-LBI to neural network training, and provides a global convergence result as one of the main contributions. Operationally, compared to the original Split LBI approach, it changes the loss function from squared error to cross entropy, and uses mini-batches for training, which are fairly straightforward. * One important issue with the paper is that it blurs the distinction between prior work and the new contribution. For example, the subsection on Split Linearized Bregman Iteration in the "Methodology" section does not contain anything new compared to [1], and this is not clear enough to the reader. 
Also, not enough credit is given to [2] for the "Differential of Inclusion of Inverse Scale Space" subsection. I maintain that there needs to be a separate "Background" section for this, and that it should be made absolutely clear what is new in the "Methodology" section. It feels like this distinction is obfuscated in the writing. * Given that the authors propose Split-LBI as a new optimizer that can be compared to others (e.g., Adam), one issue is that there doesn't seem to be any search done over the hyperparameters of each optimizer, including the learning rate and amount of weight decay. For example, Adam is only run with learning rate 1e-3 and SGD is run with 0.1; in addition, the weight decay (where used) is only set to 1e-4. Thus, it is not clear how meaningful these comparisons are. Also, in Table 1, the CIFAR-10 test accuracies are fairly low at ~90%, while modern models such as Wide ResNets can achieve ~95%. * The newly-written conclusion is still incorrect, stating again that Split LBI achieves SOTA performance on ImageNet. Also, if "with better interpretability than SGD" refers to the qualitative comparison of the learned filters, I think this conclusion is a bit too strong, because I don't think the difference is visible enough to aid interpretability. * Minor point: On further inspection, the legend in the left-side plots in Figure 2 does not match the labels of the visualizations on the right. There is no yellow training curve in Figure 2, despite the assertion in the rebuttal. [1] Huang et al., Split LBI: An iterative regularization path with structural sparsity. NeurIPS 2016. [2] Osher et al., "Sparse recovery via differential inclusions." Applied and Computational Harmonic Analysis, 2016. I maintain my score of weak reject, but am not totally opposed to it being accepted, because it provides a way to find sparse networks more efficiently.
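As a final aside, to make my reading of the method concrete, here is a minimal sketch of a single Split LBI iteration as I understand it from [1]. This is my own code and notation, not the authors'; kappa and nu are the two hyperparameters discussed above, and the unit soft-thresholding level is my assumption:

```python
import numpy as np

def soft_threshold(z, lam=1.0):
    # elementwise shrinkage: sign(z) * max(|z| - lam, 0)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def split_lbi_step(W, Gamma, z, grad_W, alpha=0.01, kappa=10.0, nu=1.0):
    """One Split LBI iteration on the augmented loss
       L(W, Gamma) = loss(W) + ||W - Gamma||^2 / (2 * nu).
    grad_W is the gradient of loss(W) at the current W; z is the auxiliary
    variable that accumulates gradients for the sparse path variable Gamma."""
    coupling = (W - Gamma) / nu                      # gradient of the augmentation w.r.t. W
    W_new = W - kappa * alpha * (grad_W + coupling)  # gradient-descent step on W
    z_new = z - alpha * (-(W - Gamma) / nu)          # mirror-descent step for Gamma
    Gamma_new = kappa * soft_threshold(z_new, lam=1.0)
    return W_new, Gamma_new, z_new

# Toy usage on a least-squares loss; Gamma becomes sparse along the path.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 20)), rng.normal(size=100)
W, Gamma, z = np.zeros(20), np.zeros(20), np.zeros(20)
for _ in range(200):
    grad = X.T @ (X @ W - y) / len(y)
    W, Gamma, z = split_lbi_step(W, Gamma, z, grad)
print(np.count_nonzero(Gamma), "nonzero entries in Gamma")
```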
iclr_2020_Bkf4XgrKvS
Hierarchical abstractions are a methodology for solving large-scale graph problems in various disciplines. Coarsening is one such approach: it generates a pyramid of graphs whereby the one in the next level is a structural summary of the prior one. With a long history in scientific computing, many coarsening strategies were developed based on mathematically driven heuristics. Recently, there has been resurgent interest in deep learning in designing hierarchical methods that are learnable through differentiable parameterization. These approaches are paired with downstream tasks for supervised learning. In practice, however, supervised signals (e.g., labels) are scarce and are often laborious and expensive to obtain. In this work, we propose an unsupervised approach, coined OTCOARSENING, with the use of optimal transport. Both the coarsening matrix and the transport cost matrix are parameterized, so that an optimal coarsening strategy can be learned and tailored for a given set of graphs. We demonstrate that the proposed approach produces meaningful coarse graphs and yields competitive performance compared with supervised methods for graph classification and regression.
This paper proposes an unsupervised hierarchical approach for learning graph representations. The proposed architecture is constructed by unrolling k steps of a parametrized algebraic multigrid approach for minimizing the Wasserstein metric between the graph and its representation. The node distance (transport cost) used in the Wasserstein metric is also learned as an L2 distance between the embeddings of some graph embedding function. The approach is compared against 6 other state-of-the-art approaches on 5 graph classification tasks, showing significant improvements on 4 of them. The paper is reasonably well written; however, I think some of the explanations can be tightened further. In particular, a lot of the background on AMG is not really that relevant, since the authors are not transferring technical results from AMG. Also, it seems like a better flow for presenting this argument might be to switch the order of sections 3.2.1 and 3.1.2. It looks like the main point is that this architecture is trying to emulate iterative coarsened residual optimization of the Wasserstein metric between a graph and its representation. How the coarsening matrix is derived is more of a technical point (it looks like the results would be much more sensitive to a switch of metric than to a switch of parametrization for S). The empirical results are quite intriguing. There are, however, natural and important questions left unanswered. First and foremost, how does the amount of downsampling (compression) compare between methods? How many parameters do different methods require? It would also be good to see what the baseline performance would have been without any input compression, so as to understand how close these approaches are to the upper bound. Finally, I think the main issue of this paper is left unresolved, namely, what is the point of not having supervision from the downstream task. As a user of graph representations trying to solve some problem, the only thing I would want from my representation is to capture some notion of sufficient statistics that are small enough to be efficient and allow me to solve my problem. I would not necessarily care about how well the learned representation resembles the original graph unless I believed that my downstream task was hard to evaluate and that it was very smooth in the Wasserstein metric. I read the paper multiple times, trying to find any discussion on this, but it seems that the fact that an unsupervised representation is a good thing is taken for granted. A point could at least be made using the same representation for different tasks experimentally. Or, perhaps, literally doing an AMG-type unpacking of the downstream task itself as a comparison. This would shed light on the question of whether the iterated residuals or the choice of distance is what's driving the observed results.
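To make the entropy-regularized Wasserstein part of the objective concrete for other readers, here is a minimal Sinkhorn sketch of the kind of transport distance the coarsening loss presumably builds on. This is my own illustrative code: the node embeddings and cost matrix below are placeholders, whereas in the paper both the coarsening and the cost are parameterized and learned:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularized OT between histograms a (n,) and b (m,)
    with cost matrix C (n, m). Returns the transport plan and its cost."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return P, np.sum(P * C)

# Toy example: mass distributions over a graph's nodes and its coarse version.
rng = np.random.default_rng(0)
n, m = 8, 3
a = np.full(n, 1.0 / n)              # uniform distribution over fine nodes
b = np.full(m, 1.0 / m)              # uniform distribution over coarse nodes
Z_fine = rng.random((n, 4))          # placeholder node embeddings
Z_coarse = rng.random((m, 4))
C = ((Z_fine[:, None, :] - Z_coarse[None, :, :]) ** 2).sum(-1)  # squared-L2 cost
P, dist = sinkhorn(a, b, C)
print("entropy-regularized OT cost:", dist)
```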
iclr_2020_r1xQNlBYPS
A channel corresponds to a viewpoint or transformation of an underlying meaning. A pair of parallel sentences in English and French express the same underlying meaning but through two separate channels corresponding to their languages. In this work, we present Multichannel Generative Language Models (MGLM), which models the joint distribution over multiple channels, and all its decompositions, using a single neural network. MGLM can be trained by feeding it k-way parallel data, bilingual data, or monolingual data across pre-determined channels. MGLM is capable of both conditional generation and unconditional sampling. For conditional generation, the model is given a fully observed channel, and generates the k − 1 channels in parallel. In the case of machine translation, this is akin to giving it one source, and the model generates k − 1 targets. MGLM can also do partial conditional sampling, where the channels are seeded with prespecified words, and the model is asked to infill the rest. Finally, we can sample from MGLM unconditionally over all k channels. Our experiments on the Multi30K dataset containing English, French, Czech, and German languages suggest that the multitask training with the joint objective leads to improvements in bilingual translations. We provide a quantitative analysis of the quality-diversity trade-offs for different variants of the multichannel model for conditional generation, and a measurement of self-consistency during unconditional generation. We provide qualitative examples for parallel greedy decoding across languages and sampling from the joint distribution of the 4 languages.
[Paper summary] This work is an extension of KERMIT (Chan et al., 2019) to multiple languages and the proposed model is called "multichannel generative language models". KERMIT is an extension of the "Insertion Transformer" (Stern et al., 2019), a non-autoregressive model that can jointly determine which word to insert and where to insert it. KERMIT shares the encoder and decoder of the Insertion Transformer, and the source sentence and target sentence are concatenated to train a generative model (also, various loss functions are included). In this work, parallel sentences from more than two languages are concatenated together and fed into KERMIT. Each language is associated with a language embedding. This work demonstrates that a joint distribution p(x1, . . . , xk) over k channels/languages can be properly modeled through a single model. The authors carry out experiments on the Multi30k dataset. [Pros] Some discoveries of this work are interesting, including: (1) It is possible to use a single model to translate a sentence into different languages in a non-autoregressive way. (2) The unconditional multilingual generation in Section 4.5 is interesting, especially since the generation order is determined by the model rather than being left-to-right. [Questions] 1. The authors work on the Multi30k dataset, which is not a typical dataset for machine translation. (A) The dataset and the corresponding information are available at https://github.com/multi30k/dataset. The number of words in a sentence is smaller than 15, which is too short for a machine translation benchmark. Also, the pattern of sentences is relatively simple. (B) For real-world applications, I am not sure whether it is possible to collect a large amount of k-parallel data where $k>2$. Therefore, the application scenario is limited. What if we have a large amount of bilingual data instead of k-parallel data? How should we leverage the large amount of monolingual data? 2. For novelty, this is an extension of KERMIT to a multilingual version, which limits the novelty of this work. 3. The best results on En->De in Table 1 are inconsistent. On tst16, bilingual en<->de is the best; on tst17, en<->{rest} is the best; on mscoco, any<->rest is the best. In Table 2, it seems that using bilingual data only is the best choice. This makes me confused about how to use your proposed method. However,
iclr_2020_Sklgs0NFvr
Despite alarm over the reliance of machine learning systems on so-called spurious patterns in training data, the term lacks coherent meaning in standard statistical frameworks. However, the language of causality offers clarity: spurious associations are those due to a common cause (confounding) vs direct or indirect effects. In this paper, we focus on NLP, introducing methods and resources for training models insensitive to spurious patterns. Given documents and their initial labels, we task humans with revising each document to accord with a counterfactual target label, asking that the revised documents be internally coherent while avoiding any gratuitous changes. Interestingly, on sentiment analysis and natural language inference tasks, classifiers trained on original data fail on their counterfactually-revised counterparts and vice versa. Classifiers trained on combined datasets perform remarkably well, just shy of those specialized to either domain. While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are insensitive to this signal. We will publicly release both datasets.
This paper addresses the problem of building models for NLP tasks that are robust against spurious correlations in the data by introducing a human-in-the-loop method: annotators are asked to modify data-points minimally in order to change the label. They refer to this process as counterfactual augmentation. The authors apply this method to the IMDB sentiment dataset and to SNLI and show (among other things) that many models cannot generalize from the original dataset to the counterfactually-augmented one. This contribution is timely and addresses a very important problem that needs to be addressed in order to build more robust NLP systems. Because, however, of a few limitations, I recommend weak acceptance. My main hesitation comes from a lack of clarity about the main lesson we have learned. In particular, if the goal is to use this method to augment the data we use to train NLP systems in order to make them more robust, it seems that the time cost of the process will be prohibitive. On the other hand, perhaps these methods could be used to identify the kind of spurious correlations that models tend to rely on, which could then be used in a more automated data augmentation process. If that's the goal, however, a more detailed error analysis would need to be included. A few small comments: * There was some analysis of the augmented IMDB dataset, but none of the SNLI dataset. I would love to see a more detailed investigation of what annotators usually did. For instance, a reason that hypothesis-only models do well is that certain words are very predictive of certain labels (e.g. "not" and contradiction). Do people leave the negations in when modifying such examples for entailment or neutrality, thus breaking the simple correspondence? That's a very simple kind of question; more generally, I'd like to see more analysis of the new dataset. * The BiLSTM they use is very small (embedding and hidden dimension 50). Given that BERT is most robust against their manipulation, it would be good to see a more powerful recurrent model for comparison. It would be easy to use ELMo here, if the main question is about Transformers vs recurrent models. Some very minor / typographic comments: * abstract: "with revise" should be "with revising" * first paragraph page 2: some references to causality literature and definition of spuriousness as common cause * page 2, "We show that..." I'd break this into two sentences to make it easier to parse. * Table 3: I would make two columns for each model with accuracy on original versus revised. With the current table, one has to compare cells in the top half of the table to those in the bottom half of the table, which is quite difficult to do.
iclr_2020_SJxpsxrYPS
Learning rich representations from data is an important task for deep generative models such as the variational auto-encoder (VAE). However, by extracting high-level abstractions in the bottom-up inference process, the goal of preserving all factors of variation for top-down generation is compromised. Motivated by the concept of "starting small", we present a strategy to progressively learn independent hierarchical representations from high- to low-levels of abstraction. The model starts with learning the most abstract representation, and then progressively grows the network architecture to introduce new representations at different levels of abstraction. We quantitatively demonstrate the ability of the presented model to improve disentanglement in comparison to existing works on two benchmark data sets using three disentanglement metrics, including a new metric we propose to complement the previously-presented metric of mutual information gap. We further present both qualitative and quantitative evidence on how the progression of learning improves the disentangling of hierarchical representations. By drawing on the respective advantages of hierarchical representation learning and progressive learning, this is to our knowledge the first attempt to improve disentanglement by progressively growing the capacity of a VAE to learn hierarchical representations.
This paper proposes a method for training the Variational Ladder Autoencoder (VLAE) using a progressive learning strategy. In comparison to generative models that use a progressive learning strategy, the proposed method focuses not only on image generation but also on extracting and disentangling hierarchical representations. Overall, I think the purpose of this paper should be stated more clearly. It is not clear whether the purpose is learning disentangled representations or hierarchical representations. In my opinion, the focus of the proposed method lies in hierarchical representations learned through progressive learning, but the experiments are more concerned with disentanglement. Furthermore, I believe the authors need to explain the relationship between hierarchical representations and disentangled representations. In particular, it is not clear why learning hierarchical representations is helpful for learning disentangled representations. The qualitative experiments are not convincing since the proposed model looks worse in both reconstruction and hierarchical disentanglement on the MNIST dataset than the base model VLAE, as shown in Figure 5 in [1]. Regarding the metric used in the experiments, the authors mention that they first developed the proposed disentanglement metric MIG-sup for its one-to-one property, but it seems that it was already proposed in [2]. In addition, the proposed metric requires ground truth for the generative factors, so its usage is limited and not practical. I think this work is similar to [3] in that both learn disentangled representations by progressively increasing the capacity of the model. I think the authors need to discuss this work. Ablation studies should be presented to verify the individual effects of the progressive learning method and the implementation strategies on performance. In Figures 2 and 3, the performance gap in the reconstruction error of the proposed method is greater than that of the base model when beta changes from 20 to 30. Therefore, it is necessary to show whether it is robust to the hyperparameter beta. There is no definition of v_k in Equation (12), so it is difficult to understand the proposed metric clearly. In summary, I do not think the paper is ready for publication. [1] Learning Hierarchical Features from Generative Models, Zhao et al., ICML 2017 [2] A Framework for the Quantitative Evaluation of Disentangled Representations, Eastwood et al., ICLR 2018 [3] Understanding disentangling in beta-VAE, Burgess et al., NIPS 2017 Workshop on Learning Disentangled Representations ------------------------------------- After rebuttal: Thanks for the revision of the paper and the additional experiments. The authors' comments and further experiments address most of my concerns. In particular, new experiments show that pro-VLAE performs quantitatively and qualitatively better than VLAE. Also, Figure 10 and the result of the information flow experiment using MNIST show that the first layer learns the intended representations properly. I appreciate the authors' efforts put into the rebuttal, and the results of the additional experiments are reasonably good. Therefore, I increase my final score to 6: Weak Accept.
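As a side note for other readers, this is roughly how I understand MIG and the proposed MIG-sup to be computed. This is a sketch based on my reading only; the exact mutual-information estimator and the normalization in Equation (12) may differ from what the authors implement:

```python
import numpy as np

def mig(mi, factor_entropy):
    """mi[k, j]: mutual information between ground-truth factor k and latent j.
    MIG averages, over factors, the normalized gap between the two most informative latents."""
    top2 = -np.sort(-mi, axis=1)[:, :2]        # per factor: best and second-best latent
    return np.mean((top2[:, 0] - top2[:, 1]) / factor_entropy)

def mig_sup(mi, factor_entropy):
    """MIG-sup (as I read it): the same gap taken per latent over factors, which rewards
    each latent capturing at most one factor (the one-to-one property). Normalizing by the
    entropy of the best-matching factor is my assumption."""
    order = np.argsort(-mi, axis=0)            # per latent: factors sorted by MI
    best, second = order[0], order[1]
    cols = np.arange(mi.shape[1])
    gaps = mi[best, cols] - mi[second, cols]
    return np.mean(gaps / factor_entropy[best])

# Toy example with 3 factors and 5 latent dimensions.
rng = np.random.default_rng(0)
mi = rng.uniform(0.0, 1.0, size=(3, 5))        # placeholder MI estimates
H = np.ones(3)                                 # placeholder factor entropies
print(mig(mi, H), mig_sup(mi, H))
```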
iclr_2020_SkgWIxSFvr
Latent-variable models represent observed data by mapping a prior distribution over some latent space to an observed space. Often, the prior distribution is specified by the user to be very simple, effectively shifting the burden of a learning algorithm to the estimation of a highly non-linear likelihood function. This poses a problem for the calculation of a popular distance function, the geodesic between data points in the latent space, as this is often solved iteratively via numerical methods. These are less effective if the problem at hand is not well captured by first- or second-order approximations. In this work, we propose less complex likelihood functions by allowing complex distributions and explicitly penalising the curvature of the decoder. This results in geodesics which are approximated well by the Euclidean distance in latent space, decreasing the runtime by a factor of 1,000 with little loss in accuracy. Additionally, we apply our method to a state-of-the-art tracking algorithm using real-world image data, showing that our unsupervised method performs similarly to supervised learning methods.
Summary of paper: The paper is concerned with the geometry of latent spaces in VAEs. In particular, it is argued that since geodesics (shortest paths) in the Riemannian interpretation of latent spaces are expensive to compute, it might be beneficial to regularize the decoder (generator) to be flat, such that geodesics are straight lines. One such regularization is proposed. Review: I have several concerns with the paper: 1) Geodesics are never motivated: The paper provides no motivation for why geodesics are interesting objects in the first place, so it is not clear to me what the authors are even trying to approximate. 2) Under the usual motivation, the work is flawed: The usual motivation for geodesics is that they should follow the trend of the data (e.g. go through regions of high density). Since no other motivation is provided, I will assume this to be the motivation of the paper as well. The paper proposes to use a flexible prior and then approximate geodesics by straight lines. Beyond the most simple linear models, this cannot work. If the prior is flexible, then straight lines will hardly ever constitute paths through regions of high density. The core idea of the work thus seems to be in conflict with itself. 3) A substantial bias is ignored: The paper considers the Riemannian metric associated with the *mean* decoder. Due to regularization, holes in the data manifold will be smoothly interpolated by the mean decoder, such that geodesics under the associated metric will systematically be attracted to holes in the data manifold. Hauberg discusses this issue at great length here: https://arxiv.org/abs/1806.04994 Here it is also demonstrated that geodesics under the mean decoder tend to be straight lines (which is also what the authors observe). Taking the stochasticity of the VAE decoder into account drastically changes the behavior of geodesics to naturally follow the trend of the data. 4) Related work is mischaracterized: Previous work on the geometry of latent spaces largely falls into two categories: those that treat the decoder as deterministic and those that treat it as being stochastic. In the cited papers, Arvanitidis et al and Tosi et al consider stochastic decoders, while the others consider deterministic decoders. Given that geodesics have significantly different behavior in the two cases, it is odd that the difference is never discussed in the paper. 5) It is not clear to me what the experiments actually show: -- I did not understand the sentence (page 5): "The model is more invariant if the condition number is smaller..." What does it mean to be "more invariant"? And how is invariance (to what) related to the condition number of the metric? -- Figure 3 shows example geodesics, but only geodesics going between clusters (I have no idea how such geodesics should look). If I look at the yellow cluster of Fig. 3a, then it seems clear to me that geodesics really should be circular arcs, yet these are being approximated with straight lines. Are the ground truth geodesics circular? In the end, it seems like the shown examples are the least informative ones, and that intra-cluster geodesics would carry much more meaning. -- What am I supposed to learn from the "Smoothness" experiment (page 7)? My only take-away is currently that the proposed regularization does what it is asked to do. It is not clear to me whether what it aims to do is desirable.
Does the experiment shed light on the desirability of the regularizer or is it more of a "unit test" that shows that the regularizer is correctly implemented? -- In the "Geodesic" experiment (page 7) I don't agree with the choice of baseline. If I understand correctly, the baseline approximates geodesics with shortest paths over the neighbor graph (akin to Isomap). However, there is no reason to believe that the resulting paths bear any resemblance to geodesics under the studied Riemannian metric. The above-mentioned paper by Hauberg provides significant evidence that these baseline geodesics are not at all related to the actual geodesics of the studied metric. The only sensible baseline I can think of is the expensive optimization-based geodesics. == rebuttal == I have read the rebuttal and discussed with the authors, and I retain my original score.
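As a concluding aside, to be explicit about the object under discussion: the metric in question is the pullback metric of the (mean) decoder, and "flatness" shows up through its condition number and through the energy of straight-line curves. A rough sketch of how I would compute these quantities (my own code, with a stand-in decoder; the paper may compute them differently):

```python
import torch

decoder = torch.nn.Sequential(            # stand-in for a VAE mean decoder
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 10)
)

def pullback_metric(z):
    """G(z) = J(z)^T J(z), where J is the Jacobian of the mean decoder at z."""
    J = torch.autograd.functional.jacobian(decoder, z)   # (out_dim, latent_dim)
    return J.T @ J

def condition_number(z):
    eig = torch.linalg.eigvalsh(pullback_metric(z))
    return (eig.max() / eig.min()).item()

def curve_energy(zs):
    """Discretized Riemannian energy of a latent curve; geodesics minimize this.
    If G is (close to) constant, straight latent lines are (near) minimizers."""
    deltas = zs[1:] - zs[:-1]
    midpoints = 0.5 * (zs[1:] + zs[:-1])
    return sum(d @ pullback_metric(m) @ d for d, m in zip(deltas, midpoints))

z0, z1 = torch.zeros(2), torch.ones(2)
line = torch.stack([z0 + t * (z1 - z0) for t in torch.linspace(0, 1, 10)])
print(condition_number(z0), curve_energy(line).item())
```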
iclr_2020_HJxJdp4YvS
Generating visualizations and interpretations from high-dimensional data is a common problem in many fields. Two key approaches for tackling this problem [...] approaches. We present a new deep architecture for probabilistic clustering, VarPSOM, and its extension to time series data, VarTPSOM. We show that they achieve [...] way, a much higher degree of confidence in the findings of the exploration is attained (Keim, 2002).
This paper proposes VarPSOM, a method which utilizes variational autoencoders (VAEs) and clustering techniques based on self-organizing maps (SOMs) to learn clustering of image data (MNIST and Fashion MNIST in particular). An LSTM-based extension termed VarTPSOM is also evaluated on medical time series data. For the most part, the experimental results are promising, and the visualizations are particularly nice. One of my main points of confusion is with the exposition of the method. To start, the objective presented in Eq. 3 is simply a sum of a variational lower bound and the PSOM clustering loss. Does this have a probabilistic interpretation, e.g., is it a lower bound for a particular generative model? If so, this would be useful to discuss prominently in the paper. If not, it is not clear to me what the authors are gaining from the variational framework. The paragraph at the bottom of page 4 that discusses the "advantages of a VAE over an AE" is not convincing to me. The authors claim that "points with a higher variance in the latent space could be identified as potential outliers and therefore treated as less precise and trustworthy". This isn't demonstrated in the experiments, and to the best of my knowledge, this has not been shown in prior work. If I am mistaken, a citation would be appreciated and should be included in the paper. Additionally, what is claimed in "the regularization term of the VAE prevents the network from scattering the embedded points discontinuously in the latent space" can also be accomplished with AEs using simple regularization, a standard technique for a wide range of AEs. Similar comments can be made for the VarTPSOM objective in Eq. 6. Prior work in variational inference for time series, e.g., [1, 2], defines a probabilistic time series generative model, from which variational inference naturally prescribes a learning objective. In my opinion, this stands in stark contrast to this work, which takes the VarPSOM objective and simply adds a time series loss on top. This is also a viable approach to building models, but why emphasize the variational aspect so much if the method is hardly motivated by anything variational? I believe that the authors need more thorough experimental comparisons if they wish to demonstrate that their method actually benefits from the variational pieces. Most obviously, I do not believe that any of the comparisons represents the proposed method but with the VAE swapped out for some type of AE; is this correct? It is my understanding that AE+SOM, SOM-VAE, and DESOM do not represent this exact ablation. VarIDEC performing better than IDEC is a data point in support of this hypothesis; however, this is a comparison of a prior method and not the proposed method. The related work section mentions that SOM-VAE and DESOM are "likely limited by the absence of techniques used in state-of-the-art clustering methods". Is it possible to address this limitation of prior work? If so, how would this approach compare to the proposed method in terms of implementation and performance? I am not necessarily interested in an actual empirical evaluation, but including this in the related work section would likely be interesting for the reader. The authors claim in the implementation details that "[s]ince the prior in the VAE enforces the latent embeddings to be compact, it also requires more dimensions to learn a meaningful latent space". Is there a citation for this?
My understanding is that posterior collapse leads to VAEs not using additional dimensions even when they are provided, which seems to contradict this claim. Table 2 seems to have very low NMI numbers across the board; am I reading this incorrectly? Are there prior SOTA numbers that can be included? Finally, it seems that some of the ideas and motivation in the paper are related to learning discrete structures with variational approaches, e.g., [3, 4]. If the authors agree, it may be appropriate to include some discussion in related work. [1] Johnson et al., "Composing graphical models with neural networks for structured representations and fast inference". NIPS 2016. [2] Fraccaro et al., "Sequential neural models with stochastic layers". NIPS 2016. [3] Tomczak and Welling, "VAE with a VampPrior". AISTATS 2018. [4] Vikram et al., "The LORACs prior for VAEs: Letting the trees speak for the data". AISTATS 2019. ------ To elaborate on my "Experience Assessment" of "I have read many papers in this area": "this area" in my case refers to amortized variational inference and VAEs, not clustering techniques and SOMs.
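Returning to the main point about the objective: to be concrete about the ablation I am asking for (same clustering term, VAE replaced by a deterministic AE), my reading of Eq. 3 is a weighted sum of an ELBO-style term and a soft-assignment clustering term. A rough sketch under my own assumptions (DEC/IDEC-style Student-t assignments; the names and weights are placeholders, not the authors' code):

```python
import torch
import torch.nn.functional as F

def soft_assignments(z, centroids, alpha=1.0):
    """Student-t kernel soft assignment of embeddings z (n, d) to centroids (k, d)."""
    d2 = torch.cdist(z, centroids) ** 2
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    p = q ** 2 / q.sum(dim=0)
    return p / p.sum(dim=1, keepdim=True)

def joint_loss(x, x_rec, mu, logvar, z, centroids, gamma=0.1):
    """ELBO-style terms plus a clustering KL term; replacing the first two terms by a
    plain AE reconstruction loss gives the ablation I would like to see."""
    rec = F.mse_loss(x_rec, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    q = soft_assignments(z, centroids)
    p = target_distribution(q).detach()
    cluster = torch.mean(torch.sum(p * (p.log() - q.log()), dim=1))
    return rec + kl + gamma * cluster

# Toy shapes only; in practice x_rec, mu, logvar, z come from the encoder/decoder.
x, z = torch.randn(32, 784), torch.randn(32, 10)
mu, logvar, x_rec = z, torch.zeros_like(z), torch.randn(32, 784)
centroids = torch.randn(16, 10)
print(joint_loss(x, x_rec, mu, logvar, z, centroids).item())
```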
iclr_2020_H1gfFaEYDS
This paper studies the undesired phenomena of over-sensitivity of representations learned by deep networks to semantically-irrelevant changes in data. We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO). We show that the ELBO fails to control the behaviour of the encoder out of the support of the empirical data distribution and this behaviour of the VAE can lead to extreme errors in the learned representation. This is a key hurdle in the effective use of representations for data-efficient learning and transfer. To address this problem, we propose to augment the data with specifications that enforce insensitivity of the representation with respect to families of transformations. To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point. For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations. We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure.
This is a very interesting paper and, I believe, a solid contribution to Variational Autoencoders. The basic argument is that encoders in VAEs are highly susceptible to noise in input data, whereas decoders are not. This argument is supported with a full-fledged Section 2.2, reformulating the ELBO objective of VAEs and introducing a VAE with discrete latent variables and discrete observations, so as to make it easy to understand why and where VAEs fail. To make encoders robust to noise in inputs, it is proposed to generate new fictive data points in the neighborhood of original data points so as to ensure that the latent representations of a data point and its fictive version are similar in "some sense", as part of the proposed regularization term. The implementation of this idea is solid in the paper, relating it to theoretical concepts such as the "entropy regularized entropy transport problem", the "Wasserstein distance", etc. The most important point is that it is easy to extend the encoder of an existing VAE with the proposed algorithm, while leaving the decoder untouched, as the latter is shown to be robust/smooth anyway (in Sec. 2.2). It is also discussed how to generate fictive samples, including but not restricted to approaches like projected gradient descent based adversarial attacks. Section 2.2 can be improved further in terms of presentation. This is the most important section, which can be of interest to the community to understand VAEs' limitations, a good contribution on its own. Though challenging, I encourage the authors to improve the exposition in this section as much as possible. The introduction is written beautifully. Good job, done! For instance, some explanation about the variables m_j, u_i and their distributions would help. How do you relate Eq. 1 to the standard ELBO? (Some reference to a derivation?) Is it not possible to explain the limitations of present VAEs without introducing the particular von Mises-like parameterization (last equation of page 3)? I am not suggesting that you should remove it. The connections between the two could be more explicit, though I understand that it is already mentioned in the paper, "parameterization emulates a high capacity network that can model any functional relationship between latent states and observations...". In this context, I found the explanation after Eq. 2 to be intuitive with regard to the inefficiency of encoders. If I understand correctly, to put it in even simpler terms, the encoding neural network is overfitting the mapping from input data points to the latent representations, not performing any learning for the unseen data points at all; on the other hand, the decoder explores the space of latent variables well because it is modeled as a Gaussian? Some of the new equations should be numbered for easy reference. On page 4, the flow is a bit abrupt. Right after Fig. 3, there are points 1 and 2 added without any note on what these two points (items in latex) are about. I found point 1 very confusing on page 4. On the other hand, point 2 is beautifully written. Though, in the latter it could be made explicit why encoders found in VAEs are not smooth, referring to Figs. 2 and 3. There are minor grammar mistakes in the paper, making some of the sentences incoherent or confusing. Something to do with the style of language. I think, overall, the language can be improved. Though, the technical flow of the paper is great, and the introduction is written very well, pointing out very important, bold insights about the literature on unsupervised representation learning.
I would say it is a very well written paper, which is an enjoyable read, despite some grammar mistakes which can be easily fixed by proofreading. The experimental evaluation is sufficient. Last but not least, one could argue that we are going back to the literature on kernel-function-based methods, or Markov random fields, to improve neural network models. This is a general trend we are observing. It is interesting to see new models such as the proposed one getting the best from both worlds. It may be worthwhile to point out something along these lines in the paper so that other bold works like this can follow, advancing representation learning by drawing on mathematical concepts from diverse domains. If I am mistaken, please feel free to point it out. It is not going to change the review. I am inspired by this work. One practical challenge is to generate fictive data points which are not very near to existing data points. I am not sure if GANs can achieve that, either. Having such points is critical to deal with more structured noise. Any comments on this?
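To summarize my understanding of the practical recipe (and why I find it easy to adopt): one generates a fictive point near each input and penalizes the discrepancy between the two encoder outputs. A minimal sketch under my own assumptions (a PGD-style perturbation for the selection mechanism, and a simple symmetric KL in place of the paper's entropy-regularized Wasserstein term):

```python
import torch

def latent_kl(p, q):
    """Symmetric KL between diagonal Gaussians given as (mu, logvar) tuples;
    a stand-in for the paper's divergence between latent representations."""
    (mu1, lv1), (mu2, lv2) = p, q
    kl = 0.5 * ((lv2 - lv1) + (lv1.exp() + (mu1 - mu2) ** 2) / lv2.exp() - 1).sum(-1)
    kl_rev = 0.5 * ((lv1 - lv2) + (lv2.exp() + (mu2 - mu1) ** 2) / lv1.exp() - 1).sum(-1)
    return (kl + kl_rev).mean()

def fictive_point(encoder, divergence, x, eps=0.1, steps=5, lr=0.02):
    """Perturb x within an L-inf ball of radius eps so as to maximally move the
    encoder output: a PGD-style selection of the fictive data point."""
    z_clean = tuple(t.detach() for t in encoder(x))
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = divergence(z_clean, encoder(x + delta))
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad.sign()
            delta.clamp_(-eps, eps)
    return (x + delta).detach()

# Toy encoder returning (mu, logvar) for a 16-d latent.
enc_net = torch.nn.Linear(784, 2 * 16)
def encoder(x):
    h = enc_net(x)
    return h[:, :16], h[:, 16:]

x = torch.randn(8, 784)
x_f = fictive_point(encoder, latent_kl, x)
reg = latent_kl(encoder(x), encoder(x_f))   # add lambda * reg to the usual ELBO loss
print(reg.item())
```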
iclr_2020_Bkg75aVKDH
Training certifiable neural networks enables one to obtain models with robustness guarantees against adversarial attacks. In this work, we use a linear approximation to bound the model's output given an input adversarial budget. This allows us to bound the adversary-free region in the data neighborhood by a polyhedral envelope and yields finer-grained certified robustness than existing methods. We further exploit this certifier to introduce a framework called polyhedral envelope regularization (PER), which encourages larger polyhedral envelopes and thus improves the provable robustness of the models. We demonstrate the flexibility and effectiveness of our framework on standard benchmarks; it applies to networks with general activation functions and obtains comparable or better robustness guarantees than state-of-the-art methods, with very little cost in clean accuracy, i.e., without over-regularizing the model.
This paper proposes a certifiable NN training method, "polyhedral envelope regularization" (PER), for defending against adversarial examples. The defense is based on the same linear relaxation based outer bounds of neural networks (KW/CROWN) used in many previous works. The paper makes a few new (but small) technical contributions: 1. this paper uses a different loss function (7), which is essentially a hinge loss on the lower bounds of the distance to the decision boundary. Previous works like KW used cross-entropy loss on the lower bound of the prediction margin instead, which was based on minimax robust optimization theory. But I am not fully convinced whether the new loss function is better or not. 2. in (5), the authors solve the bounded input case more carefully than previous works. (5) is trivial to solve in the L infinity case and has been used in previous works like (Wong & Kolter 2018, Gowal et al., 2018 and Zhang et al., 2019); but solving it for other norms requires some effort, and this paper proposes a good solution for it (Algorithm 2); 3. In previous works like KW/CROWN, to find the largest certifiable radius, a binary search is needed. The authors propose a very small improvement to the binary search process by setting the lower bound of the search to the largest epsilon that is certifiable using the current linear relaxations obtained from a larger epsilon. The authors do not improve any bounds proposed in KW/CROWN; they reuse the same bounds. I see the main contribution as the new hinge-like loss function for training, and a more careful procedure to find the largest certifiable radius in the bounded input case. Empirically, the improvement of the proposed algorithm is limited: based on Table 1 it is hard to say whether PER is better than KW or not. PER+at outperforms KW sometimes; however, it is not a completely fair comparison, as we can add a PGD-based adversarial training loss to KW as well, as done in DiffAI (Mirman et al., https://github.com/eth-sri/diffai). Questions: 1. In my personal experience I have usually found hinge loss not as effective as cross-entropy loss in deep learning based tasks, probably due to its non-smoothness. The argument for the claim that (7) is better than cross-entropy loss is that it does not overregularize the network. The authors should provide more evidence to show whether this argument holds, e.g., plotting the norms of the weight matrices during training for the two losses to show that it can reduce overregularization. 2. I think the metrics ACB KW and ACB CRO (average certified radius of KW/CROWN) in Tables 1 and 2 are confusing and not fair. In KW and CROWN's evaluation, given an epsilon, if an example cannot be certified because epsilon is too large (i.e., ||A|| \epsilon + b > 0), the certifiable radius will be counted as 0 (flat line in Figure 1(a)). In this paper, the authors instead in this case use -b / ||A|| as the certifiable radius. This is merely a different way of evaluation, and I don't see this as a contribution, as the "improvement" does not come from a tighter bound. In the same sense, I don't think Figure 1(a) and the discussions on page 3 are an appropriate characterization of KW/CROWN. PEC uses exactly the same linear bounds as in KW/CROWN, and has the same certification power. 3. For L2 based perturbations, in Table 1, the epsilon used for MNIST is too small. It is better to use an epsilon that is aligned with previous works. For example, in Wong et al., 2018 (https://arxiv.org/pdf/1805.12514.pdf), page 22, you will find the epsilon used for MNIST and CIFAR. 4.
As discussed above, it is probably not fair to compare PER+at with KW. A new baseline like KW+at should also be considered. 5. For norms other than the L infinity norm, solving (5) to obtain $d$ can be time-consuming (Algorithm 2). How much additional time does it take compared to KW? Overall, I cannot recommend accepting this paper due to its limited theoretical contribution as well as unconvincing empirical results compared to previous methods. I suggest rephrasing some parts of the paper and providing more experimental results as discussed above.
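To make the evaluation issue in point 2 above concrete, the quantity at stake is just the radius implied by the linear margin bounds. A rough sketch of the two evaluation conventions as I read them (my own code and notation, not the authors'; the dual-norm exponent q is 1 for L-inf attacks and 2 for L2 attacks):

```python
import numpy as np

def certified_radius(A, b, eps, q=1, clip=True):
    """For every competing class j, assume a linear lower bound on the margin,
        margin_j(x + delta) >= A[j] . delta + b[j],
    so the prediction cannot flip for any ||delta||_p <= b[j] / ||A[j]||_q (q dual to p).
    With clip=True, an example that is not certifiable at the evaluation eps is reported
    as 0 (my reading of the KW/CROWN-style evaluation); with clip=False, b / ||A|| is
    reported regardless, which is how I read the paper's "ACB" numbers."""
    norms = np.abs(A).sum(axis=1) if q == 1 else np.linalg.norm(A, ord=q, axis=1)
    r = (b / norms).min()
    return 0.0 if (clip and r < eps) else r

A = np.array([[0.3, -0.2], [0.1, 0.4]])   # toy linear coefficients for two margins
b = np.array([0.5, 0.2])
print(certified_radius(A, b, eps=0.5), certified_radius(A, b, eps=0.5, clip=False))
```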
iclr_2020_HJlXC3EtwB
This paper studies similarity search, which is a crucial enabler of many feature vector-based applications. The problem of similarity search has been extensively studied in the machine learning community. Recent advances in proximity graphs have achieved outstanding performance through exploiting the navigability of the underlying graph structure. In this work, we introduce the annealable proximity graph (APG) method to learn and reshape proximity graphs for efficient and effective similarity search. APG makes proximity graph edges annealable, so that they can be effectively trained with a stochastic optimization algorithm. APG identifies important edges that best preserve graph navigability and prunes inferior edges without drastically changing graph properties. Experimental results show that APG achieves state-of-the-art results not only by producing proximity graphs with fewer edges but also by speeding up the search time by 20-40% across different datasets with almost no loss of accuracy.
This paper suggests an approach for learning how to sparsify similarity search graphs. Graph-based methods currently attain state-of-the-art performance for similarity search, and reducing their number of edges may speed them up even further. The paper suggests a learning framework that uses sample queries in order to determine which edges are more useful for searches, and prunes the less useful edges. This is a sensible and potentially useful approach in line with the recent flurry of work on improving algorithms with tools from machine learning. While I like the overall approach and believe it could work, the experiments seem to have some weaknesses: 1. It is not clear to me why Table 1 contains only UPG and APG with pruning half the edges, without natural pruning baselines like uniformly subsampling the edges by a factor of half, or constructing the graph with half as many edges to begin with. Both of these baselines appear in the plots afterwards, which suggests very similar performance to APG, and it would be interesting to see the numbers side by side. The numbers for UPG and APG alone do not say much: the fact that the number of edges drops by half and the search time drops by somewhat less than half is an inevitable artifact of the construction. The interesting part is the effect on the accuracy, and its quality is hard to assess without comparison to any baselines. 2. The plots leave the impression that the proposed algorithm does not actually perform that well. It is superior on SIFT, but does not improve performance on GloVe, and is outperformed on Deep1M. This seems to render the textual description of the results somewhat overstated, if I read it right (is it referring only to SIFT?). In conclusion, while I am optimistic about the paper and the approach, I am tentatively setting my score below the bar in light of the somewhat unsatisfactory experimental performance. The paper would be significantly helped by showing non-negligible improvement on more than one dataset or in more settings. Other comments: 1. What is NSG? I could not find a spelling out of the abbreviation nor a reference. 2. HNSW-sparse and HNSW-rand are very nearly impossible to tell apart in the plots. I suggest using a clearer visual distinction. 3. "Interestingly, pruning provides the benefits of improved search efficiency" - isn't that the point of pruning? 4. It is curious that using HNSW with R=32 instead of R=64 hurts the performance so much on Deep1M, while it has hardly any effect on SIFT and GloVe; do you perhaps have an explanation for this result?
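To be concrete about the missing baseline in point 1, uniform edge subsampling is only a few lines of code and would make a useful reference point in Table 1. A sketch (my own code, assuming the proximity graph is stored as adjacency lists):

```python
import random

def uniform_prune(adjacency, keep_ratio=0.5, seed=0):
    """Baseline: keep a uniformly random subset of each node's out-edges.
    `adjacency` maps node id -> list of neighbor ids (e.g. the base layer of an
    HNSW-style proximity graph); this is the naive counterpart to learned pruning."""
    rng = random.Random(seed)
    pruned = {}
    for node, neighbors in adjacency.items():
        if not neighbors:
            pruned[node] = []
            continue
        k = min(len(neighbors), max(1, int(len(neighbors) * keep_ratio)))
        pruned[node] = rng.sample(neighbors, k)
    return pruned

graph = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [0]}
print(uniform_prune(graph, keep_ratio=0.5))
```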
iclr_2020_ryxsUySFwr
Neural network out-of-distribution (OOD) detection aims to identify when a model is unable to generalize to new inputs, either due to covariate shift or anomalous data. Most existing OOD methods only apply to classification tasks, as they assume a discrete set of possible predictions. In this paper, we propose a method for neural network OOD detection that can be applied to regression problems. We demonstrate that the hidden features for in-distribution data can be described by a highly concentrated, low dimensional distribution. Therefore, we can model these in-distribution features with an extremely simple generative model, such as a Gaussian mixture model (GMM) with 4 or fewer components. We demonstrate on several real-world benchmark data sets that GMM-based feature detection achieves state-of-the-art OOD detection results on several regression tasks. Moreover, this approach is simple to implement and computationally efficient.
===Summary=== The authors propose to perform out-of-distribution detection for regression models by fitting a generative model in the feature space of the regression model. An input example is deemed to be out-of-distribution if it has low likelihood under this generative model. ===Overall Assessment=== I recommend that the paper be rejected. There are a number of aspects that need to be improved. You should fix these and resubmit to a future conference. The paper focuses on the difference between regression and classification tasks and claims that the paper's method addresses an unmet need for OOD detection for regression. However, both the proposed method and the analysis justifying it are generic enough to be applied to both regression and classification. The paper handles technical claims far too casually in sec 4 and does not provide sufficient justification that the claims are true. There are natural baselines, such as using a generative model on the raw input space, that are ignored. ===Comments=== Remark 1 feels to me like it was added for the sake of having more math in the paper, not because it is crucial to the paper's argument. You remark at various places that existing methods don't naturally generalize from classification to regression. However, you never fully explain why. Also, your proposed method can be applied out-of-the-box to classification problems. Your analysis in sec 4 trivially applies to binary classification tasks, and could be naturally extended to multi-class classification where w is not a vector but a num_classes x num_features matrix. The parallel should be between classification and heteroskedastic regression, since there you have a distribution per example. The logic in "In-distribution features are intrinsically low dimensional" is insufficient. The connection between section 4 and your proposed method is not particularly precise. You also have lots of technical claims in 4 that are unsubstantiated. For example, you write "this new network will likely have less discarded information than the shallower network". What does 'likely' mean? In what sense are you making an actual technical statement? Each of the subsections in sec 4 has similar issues. "The CNNs are pre-trained on ImageNet (Deng et al., 2009) and the last layer is replaced with a linear layer that produces a single output." Why did you do this? Did you fine-tune or just retrain the top layer? "For these two baselines, the variance of the forward passes is used as a metric for detecting OOD inputs" Can you explain why these are reasonable baselines for OOD? Why no baseline that fits a generative model in input space? You should cite Ren et al. "Likelihood Ratios for Out-of-Distribution Detection"
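For context on why the missing baselines matter: the proposed detector itself is only a few lines on top of a trained regressor, and a raw-input-space GMM baseline would look nearly identical. A sketch of my reading of the method (the feature array, component count, and threshold choice below are placeholders of mine, not the authors' setup):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_feature_gmm(train_features, n_components=4):
    """Fit a small GMM to the penultimate-layer features of in-distribution data."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(train_features)
    return gmm

def ood_scores(gmm, features):
    """Lower log-likelihood under the in-distribution GMM means more likely OOD."""
    return -gmm.score_samples(features)

# Toy stand-in for hidden features of a regression network.
rng = np.random.default_rng(0)
in_dist = rng.normal(0, 1, size=(1000, 16))
out_dist = rng.normal(4, 1, size=(100, 16))
gmm = fit_feature_gmm(in_dist)
threshold = np.quantile(ood_scores(gmm, in_dist), 0.95)   # e.g. 5% FPR on training data
print((ood_scores(gmm, out_dist) > threshold).mean())     # detection rate on OOD samples
```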
iclr_2020_ryl3ygHYDB
Magnitude-based pruning is one of the simplest methods for pruning neural networks. Despite its simplicity, magnitude-based pruning and its variants have demonstrated remarkable performance for pruning modern architectures. Based on the observation that magnitude-based pruning indeed minimizes the Frobenius distortion of a linear operator corresponding to a single layer, we develop a simple pruning method, coined lookahead pruning, by extending the single-layer optimization to a multi-layer optimization. Our experimental results demonstrate that the proposed method consistently outperforms magnitude pruning on various networks including VGG and ResNet, particularly in the high-sparsity regime.
*Summary* The paper proposes a multi-layer alternative to magnitude-based pruning. The operations entailed in the previous, current, and subsequent layers are treated as linear operations (by omitting any nonlinearities), and weights are selected for pruning to minimize the "Frobenius distortion": the Frobenius norm of the difference between products of the (i-1, i, i+1)-layer Jacobians with and without the selected weight. This simplifies to a cost-effective pruning criterion. In spite of the simplistic linear setting assumed for the derivation, results show the criterion prunes better than weight-based methods at unstructured pruning of a variety of modern architectures with CIFAR-10, particularly excelling at higher sparsity. *Rating* The paper has some clear positives, particularly: + Clear writing and formatting + Simple method + Good structure for the experimental analysis (with 5x replications!) However, there are a few limitations, noted below; while none is fatal on its own, in total the limitations have led me to recommend "weak reject" currently. Limitations of the method: (1) Residual networks: The lack of an explicit strategy for handling residual connections (and the accompanying worsened relative performance) is a notable limitation since residual/skip connections are nearly universal in state-of-the-art large networks. The performance was shown to still be *slightly* better than with magnitude pruning. (2) Global ranking: Since connections are pruned layerwise, rather than taking the best-k neurons across the entire network at once, I assume that the LAP pruning criterion doesn't scale reasonably across layers. This implies that the method cannot be used to learn network structure. Instead, the user must decide the desired number of neurons at each layer. (3) Structured pruning: There is no mention of pruning entire convolutional kernels or "neurons" at once, so I assume that only individual weights were pruned. Since structured pruning is the simplest way to achieve speedup in network inference (as opposed to merely reduction in model size), how does the LAP criterion perform when adapted for structured pruning, e.g. removing filters/neurons with the best average LAP score? Limitations of the experiments: (4) Baselines: While the paper is explicitly focused on an easy-to-compute replacement for magnitude-based pruning, there is a wide variety of alternative methods available. These vary in complexity, runtime, etc., but they deserve mention and either explicit comparison in the experiments or reasoning to justify the omission of such comparisons. (5) ImageNet: (Insert obligatory statement about the ubiquity of ImageNet experiments, ...) While it is cliche to request ImageNet experiments and CIFAR-10 is a helpful stand-in, they would be really nice to have. (6) Activations after non-linearities: While Fig. 3 and the remaining experiments present a reasonable case that the presence of non-linearities doesn't prevent LAP from improving upon magnitude-based pruning, they don't resolve the issue either. Whether considering negative values clipped by ReLU or large-magnitude values that are squashed by sigmoid and tanh, the linear-only model is a poor approximation for some unknown fraction of neurons for probably most inputs. Does this mean that LAP is underperforming in those cases? Are those cases sufficiently rare or randomly distributed that they are merely noise? Is there another mechanism at play?
In practical terms, how much does the activation rate (positive for ReLU, linear/unsquashed for sigmoid/tanh) vary by neuron? This seems like a reasonably simple thing to compute and incorporate into pruning.

*Notes*
Eq. (4): Is (4) simply the one-step/greedy approximation to the optimization in (3)? If so, it may be helpful to state this explicitly. Also, is $w = W_i[j,k]$? If so, this is useful to explicitly state.
Sec 2.1: Consider noting that the linear-model setup is used to construct the method, but non-linearities are addressed subsequently.
Sec 2.2: Is the activation probability p_j used in practice, or is it merely an explanatory device?
pg5: "gradually prune (p/5)%" and marked with a suffix '-seq'
pg5: note that residual connections are discussed in the experiments?
Tables 3-6: note that these all use CIFAR-10
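To make the criterion summarized above concrete, here is a minimal sketch of a lookahead-style score for a stack of three fully connected layers. This is my own reading of the method, not the authors' code: the exact normalization, the treatment of convolutional layers, and the function names are assumptions, and the paper's Eq. (4) should be consulted for the precise form.

```python
# A minimal sketch (not the authors' code) of a lookahead-style pruning score for
# three consecutive fully connected layers. Shapes: W_prev is (d_in, d_0),
# W is (d_out, d_in), W_next is (d_2, d_out), so the linearized three-layer
# operator is W_next @ W @ W_prev.
import numpy as np

def lookahead_scores(W_prev, W, W_next):
    """Score W[j, k] by |W[j, k]| times the norms of the adjacent-layer slices it
    multiplies in the linearized operator (larger score = more important)."""
    in_norms = np.linalg.norm(W_prev, axis=1)    # one norm per input unit k of W
    out_norms = np.linalg.norm(W_next, axis=0)   # one norm per output unit j of W
    return np.abs(W) * out_norms[:, None] * in_norms[None, :]

def prune_by_lookahead(W_prev, W, W_next, sparsity):
    """Zero out the fraction `sparsity` of entries of W with the smallest scores."""
    scores = lookahead_scores(W_prev, W, W_next)
    k = int(sparsity * W.size)
    thresh = np.partition(scores.ravel(), k)[k]
    return W * (scores >= thresh)
```

Under the linearized view, this is what makes the criterion cheap: each weight's score is just its magnitude times the norms of the adjacent row and column it interacts with.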
iclr_2020_SkxMjxHYPS
Automatic neural network discovery methods face an enormous challenge caused by the size of the search space. A common practice is to split this space at different levels and to explore only a fraction of it. On one hand, neural architecture search methods look at how to combine a subset of layers to create an architecture while keeping a predefined number of filters in each layer. On the other hand, pruning techniques take a well-known architecture and look for the appropriate number of filters per layer. In both cases, the exploration is made iteratively, training models several times during the search. Inspired by the constraints and advantages of these two approaches, we propose a straightforward and fast option to find models with improved characteristics. We apply a small set of templates, which have been heuristically and experimentally evaluated, to make a one-shot redistribution of the number of filters in an already existing neural network. When compared to the initial base models, we find that the resulting architectures, when trained from scratch, surpass the original accuracy even after being reduced to fit the original amount of resources. Specifically, we show accuracy improvements for some network-task pairs of up to 5.5%, a reduction of up to 45% in parameters, and a 60% reduction in memory footprint.
This paper presents a simple methodological study on the effect of the distribution of convolutional filters on the accuracy of deep convolutional networks on the CIFAR-10 and CIFAR-100 data sets. There are five different kinds of distributions studied: a constant number of filters, monotonically increasing and decreasing numbers of filters, and convex/concave distributions with a local extremum at the layer in the middle. For these distributions, the total number of filters is varied to study the trade-offs between running time and accuracy, memory and accuracy, and parameter count and accuracy. Although the paper is purely experimental without any particular theoretical considerations, it presents a few surprising observations defying conventional wisdom:
- The standard method of increasing the number of filters in deeper layers of the network is not the optimal strategy in most cases.
- The optimal distribution of channels is highly dependent on the network architecture.
- Some network architectures are highly stable with respect to the distribution of channels, while others are very sensitive.

Given that this paper is easy to read, presents interesting insights for the design of convolutional network architectures, and challenges mainstream views, I would consider it to be a generally valuable contribution; at least I enjoyed reading it. Despite the intriguing nature of this paper, there are several weaknesses which make me less enthusiastic about the quality of the paper:
- The experiments are done only on CIFAR-10 and CIFAR-100. These benchmarks are somewhat special. It would be useful to see whether the results also hold for more realistic vision benchmarks. Even if running all the experiments would be costly, I think that at least a small selection should be reproduced on OpenImages or MS-COCO or other more realistic benchmarks to validate the findings of this paper.
- It would be interesting to see whether, starting from the best channel distributions, applying MorphNet would end up with different distributions. In general: whether MorphNet would end up with similar distributions automatically.
- The paper does not clarify how the channel sizes for Inception were distributed, since proper balancing of the 1x1 and more spread-out convolutions is a key part of that architecture.
- The grammar of the paper is poor; even the abstract is hard to read and interpret.
- The paper presents itself as a methodology for automatically generating the optimal number of channels, while it is more of a one-off experiment and observation than a general-purpose method.

Another small technical detail regarding the choice of colors in the diagrams: the baseline distribution and constant distribution are very hard to distinguish. This is especially critical because these are the two best distributions on average. Also, the diagrams could benefit from more detailed captions.

The paper presents interesting, valuable experimental findings, but it is not extremely exciting theoretically. Also, its practical execution is somewhat lacking. If it contained at least partial results on more realistic data sets, I would vote for strong accept, but in its current form, I find it borderline acceptance-worthy.
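For concreteness, the five template shapes discussed above could be generated along the following lines. This is my own illustration, not the paper's code: the exact functional forms and the choice to normalize to a fixed total filter budget are assumptions (the paper normalizes to resource budgets such as parameters or memory instead).

```python
# A small sketch (my own construction, not the paper's code) of the five filter
# distribution templates: constant, increasing, decreasing, convex and concave,
# each rescaled so the total filter count matches a fixed budget.
import numpy as np

def filter_templates(num_layers, total_filters):
    x = np.linspace(0.0, 1.0, num_layers)
    shapes = {
        "constant":   np.ones(num_layers),
        "increasing": 1.0 + x,                  # linearly growing with depth
        "decreasing": 2.0 - x,                  # linearly shrinking with depth
        "convex":     1.0 + np.abs(x - 0.5),    # minimum at the middle layer
        "concave":    1.5 - np.abs(x - 0.5),    # maximum at the middle layer
    }
    templates = {}
    for name, s in shapes.items():
        f = s / s.sum() * total_filters         # keep the same total budget
        templates[name] = np.maximum(1, np.round(f)).astype(int)
    return templates

print(filter_templates(num_layers=8, total_filters=512))
```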
iclr_2020_SJleNCNtDH
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks, which are tasks where multiple agents must work together to achieve a goal they could not individually. Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own. Thus, we propose to incentivize agents to take (joint) actions whose effects cannot be predicted via a composition of the predicted effect for each individual agent. We study two instantiations of this idea, one based on the true states encountered, and another based on a dynamics model trained concurrently with the policy. While the former is simpler, the latter has the benefit of being analytically differentiable with respect to the action taken. We validate our approach in robotic bimanual manipulation and multi-agent locomotion tasks with sparse rewards; we find that our approach yields more efficient learning than both 1) training with only the sparse reward and 2) using the typical surprise-based formulation of intrinsic motivation, which does not bias toward synergistic behavior. Videos are available on the project webpage: https://sites.google.com/view/iclr2020-synergistic.
The paper proposes a novel algorithm for encouraging synergistic behavior in multi-agent setups with an intrinsic reward that encourages the agents to work together to achieve states that they cannot achieve individually without cooperation. The paper focuses on a two-agent environment where an approximate forward dynamics model is learnt for each agent; these models can be composed sequentially to predict the next environment state given each agent's action. However, this prediction will be inaccurate if the agents affected the environment state in a way that the individual dynamics models cannot predict, i.e. synergistic behavior was produced. This prediction error is used as the intrinsic reward by the proposed approach, with a variant where the true next state is replaced by another approximation from a joint forward model, which makes the intrinsic reward analytically differentiable with respect to the actions. Empirical analysis shows that this intrinsic reward promotes synergistic behavior on two-agent robotic manipulation tasks and achieves better performance than baselines and ablations.

I vote for weak accept, as the paper proposes a novel intrinsic reward for promoting synergistic behavior in multi-agent systems, while also demonstrating that such an intrinsic reward can be made differentiable if a joint forward dynamics model is approximated in addition to the individual forward dynamics models given each agent's actions.

The paper does not show experiments beyond 2 agents, and the four robotic manipulation tasks have been shown to work when provided with generic skills as an action space, which requires hand-defining the skills or learning them from demonstration. From the description of the random policy, it is stated that a random policy over skills serves as a sanity check to ensure that the skills do not trivialize the task. This seems to suggest that the extrinsic-reward-only baseline did not use these skills and was disadvantaged. Some clarification is required here - did the extrinsic-reward-only baseline use the same skills as the proposed method? If it did, it would obviate the need to have a random policy sanity check.

The paper suggests a baseline for separate arm surprise. In a similar vein, why wasn't a joint-arm surprise baseline employed, which can basically treat both arms as a single agent?

Synergistic behavior seems hard to achieve for a large number of agents, but the paper does not give insights into whether such an algorithm will work for more than 2 agents. Typically, multi-agent systems in prior work have worked with a large number of agents in environments other than robotic manipulation - such experiments may help in strengthening the proposed method.
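For concreteness, the reward described above might look roughly like the sketch below. This is my interpretation rather than the authors' code: the composition order of the two per-agent models, the squared-error form, and the function signatures are all assumptions.

```python
# A minimal sketch (my interpretation, not the authors' code) of the synergy-based
# intrinsic reward: compose two per-agent forward models f1, f2 and reward the
# discrepancy between the composed prediction and the true next state.
import torch

def synergy_reward(f1, f2, state, a1, a2, true_next_state):
    """f1, f2: per-agent dynamics models mapping (state, action) -> next state."""
    s_after_1 = f1(state, a1)        # predicted effect of agent 1 acting alone
    composed = f2(s_after_1, a2)     # then agent 2, applied to the predicted state
    # High error => the joint effect is not explained by the individual effects.
    return torch.mean((composed - true_next_state) ** 2, dim=-1)
```

The differentiable variant discussed in the review would, as I understand it, replace `true_next_state` with the output of a concurrently trained joint forward model so that the reward can be backpropagated into the actions.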
iclr_2020_Syx7WyBtwB
For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of proposed explainable deep learning methods stops at the first step, providing practitioners with insight into a model, but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods in order to increase the predictive accuracy of deep learning models. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by directly regularizing the provided explanations. Using explanations provided by contextual decomposition (CD) (Murdoch et al., 2018), we demonstrate the ability of our method to increase performance on an array of toy and real datasets.
This paper presents a method intended to allow practitioners to *use* explanations provided by various methods. Concretely, the authors propose contextual decomposition explanation penalization (CDEP), which aims to use explanation methods to allow users to dissuade the model from learning unwanted correlations. The proposed method is somewhat similar to prior work by Ross et al., in that the idea is to include an explicit term in the objective that encourages the model to align with prior knowledge. In particular, the authors assume supervision --- effectively labeled features, from what I gather --- provided by users and define an objective that penalizes divergence from this. The object that is penalized is $\beta(x_i, s)$, which is the importance score for feature s in instance $i$; for this they use a decontextualized representation of the feature (this is the contextual decomposition aspect). Although the authors highlight that any differentiable scoring function could be used, I think the use of this decontextualized variant as is done here is nice because it avoids issues with feature interactions in the hidden space that might result in misleading 'attribution' w.r.t. the original inputs.

The main advantage of this effort compared to work that directly penalizes the gradients (as in Ross et al.) is that the method does not rely on second gradients (gradients of gradients), which is computationally problematic. Overall, this is a nice contribution that offers a new mechanism for exploiting human-provided annotations. I do have some specific comments below.

- I am not sure I agree with the premise as stated here. Namely, the authors write "For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective" -- I would argue that an explanation may be useful in and of itself by highlighting how a model came to a prediction. I am not convinced that it need necessarily lead to, e.g., improving model performance. I think the authors are perhaps arguing that explanations might be used to interactively improve the underlying model, which is an interesting and sensible direction.
- This work, which aims to harness user supervision on explanations to improve model performance, seems closely related to work on "annotator rationales" (Zaidan 2007 being the first work on this), but no mention is made of this. "Do Human Rationales Improve Machine Explanations?" by Strout et al. (2019) also seems relevant as a more recent instance in this line of work. I do not think such approaches are necessarily directly comparable, but some discussion of how this effort is situated with respect to this line of work would be appreciated.
- The experiment with MNIST colors was neat.
- The authors compare their approach to Ross and colleagues in Table 1 but see quite poor results for the latter approach. Is this a result of the smaller batch size / learning rate adjustment? It seems that some tuning of this approach is warranted.
- Figure 3 is nice but not terribly surprising: the image shows that the objective indeed works as expected; but if this were not the case, then it would suggest basically a failure of optimization (i.e., the objective dictates that the image should look like this *by construction*). Still, it's a good sanity check.
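To illustrate the kind of objective being described (a sketch of the general idea, not the CDEP implementation), a training loss of this flavor could look as follows. The names `importance_fn` and `flagged_mask`, and the choice to drive the flagged importances toward zero, are assumptions about the interface rather than details taken from the paper.

```python
# A hedged sketch of an explanation-penalization objective: a standard task loss
# plus a penalty pushing the importance scores beta of user-flagged features
# toward zero. `importance_fn` stands in for the contextual decomposition score
# and `flagged_mask` for the user-provided feature labels.
import torch
import torch.nn.functional as F

def explanation_penalized_loss(model, importance_fn, x, y, flagged_mask,
                               lambda_expl=1.0):
    task_loss = F.cross_entropy(model(x), y)
    beta = importance_fn(model, x, flagged_mask)   # importance of flagged features
    # Penalize any importance assigned to features the user marked as irrelevant.
    expl_loss = beta.abs().sum()
    return task_loss + lambda_expl * expl_loss
```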
iclr_2020_rkxuWaVYDB
Control policies trained using Deep Reinforcement Learning have recently been shown to be vulnerable to adversarial attacks introducing even very small perturbations to the policy input. The attacks proposed so far have been designed using heuristics and build on existing adversarial example crafting techniques used to dupe classifiers in supervised learning. In contrast, this paper investigates the problem of devising optimal attacks, depending on a well-defined attacker's objective, e.g., to minimize the main agent's average reward. When the policy and the system dynamics, as well as rewards, are known to the attacker, a scenario referred to as a white-box attack, designing optimal attacks amounts to solving a Markov Decision Process. For what we call black-box attacks, where neither the policy nor the system is known, optimal attacks can be trained using Reinforcement Learning techniques. Through numerical experiments, we demonstrate the efficiency of our attacks compared to existing attacks (usually based on gradient methods). We further quantify the potential impact of attacks and establish its connection to the smoothness of the policy under attack. Smooth policies are naturally less prone to attacks (this explains why Lipschitz policies, with respect to the state, are more resilient). Finally, we show that from the main agent's perspective, the system uncertainties and the attacker can be modelled as a Partially Observable Markov Decision Process. We actually demonstrate that using Reinforcement Learning techniques tailored to POMDPs (e.g. using Recurrent Neural Networks) leads to more resilient policies.
This paper investigates the design of adversarial policies (where the action of the adversarial agent corresponds to a perturbation in the state perceived by the primary agent). In particular it focuses on the problem of learning so-called optimal adversarial policies, using reinforcement learning.

I am perplexed by this paper for a few reasons:

1) What is the real motivation for this work? The intro argues "casting optimal attacks is crucial when assessing the robustness of RL policies, since ideally, the agent should learn and apply policies that resist *any* possible attack". If the goal is to have agents that are robust to *any* attacks, then they cannot be robust just to so-called optimal attacks. And so what is really the use of learning so-called optimal attacks?

2) The notion itself of "optimal" attack is not clear. The paper does not properly discuss this. It quickly proposes one possible definition (p.4): "the adversary wishes to minimize the agent's average cumulative reward". This is indeed an interesting setting, and happens to have been studied extensively in game-theoretic multi-agent systems, but the paper does not make much connection with that literature (apart from a brief mention at bottom of p.2 / top of p.3), so it's not clear what is new here compared to this. It's also not discussed whether it would ever be worthwhile considering other notions of optimality for the adversary, and what would be the properties of those.

So overall, while I find the general area of this work to be potentially interesting, the current framing is not well motivated enough, and not sufficiently differentiated from other work in robust MDPs and multi-agent RL to make a strong contribution yet.

More minor comments:
- P.3: "very different setting where the adversary has a direct impact on the system" => Clarify what are the implications of this in terms of framework, theory, algorithm.
- P.4: You assume a valid Euclidean distance for the perturbed state. Is this valid in most MDP benchmarks? How is this implemented for the domains in the experiments? What is the action space considered? Do you always assume a continuous action space for the attacker?
- P.5: "we can simply not maintain distributions over actions" -> Why not? Given the definition of perturbation, this seems feasible.
- P.5: Eqn 4 is defined for a very specific adversarial reward function. Did you consider others? Is the gradient always easy to derive?
- P.6: Eqn (5) & (6): What is "R" here?
- P.7: Figure 1, top right plot. Seems here that the loss is above 0 for small \epsilon. Is this surprising? Actually improving the policy?
- P.7: What happens if you consider even greater \epsilon? I assume the loss is greater. But then the perturbation would be more detectable? How do you think about balancing those 2 requirements of adversarial attacks? How should we formalize detectability in this setting?
- Fig.2: Bottom plots are too small to read.
- Sec.6: Can you compare to multi-agent baselines, e.g. Morimoto & Doya 2005.
- P.8: "We also show that Lipschitz policies have desirable robustness properties." Can you be more specific about where this is shown formally? Or are you extrapolating from the fact that discrete mountain car suffers more loss than continuous mountain car? I would suggest making that claim more carefully.
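As a concrete picture of the setting discussed above (my own sketch of the framing, not code or notation from the paper), the attacker's problem can be written as an ordinary RL environment whose action is a norm-bounded perturbation of the state fed to a fixed victim policy. The L2 projection, the classic gym-style 4-tuple step interface, and the sign-flipped reward are all assumptions.

```python
# A rough sketch of the attacker's MDP: the attacker picks a bounded perturbation
# of the state shown to a *fixed* victim policy, and receives the negative of the
# victim's reward, so minimizing the victim's return is an ordinary RL problem.
import numpy as np

class StateAttackEnv:
    def __init__(self, env, victim_policy, epsilon):
        self.env, self.victim, self.eps = env, victim_policy, epsilon

    def reset(self):
        self.state = self.env.reset()
        return self.state

    def step(self, perturbation):
        # Project the attacker's action onto the L2 ball of radius epsilon.
        norm = np.linalg.norm(perturbation)
        if norm > self.eps:
            perturbation = perturbation * (self.eps / norm)
        victim_action = self.victim(self.state + perturbation)  # victim sees perturbed state
        next_state, reward, done, info = self.env.step(victim_action)
        self.state = next_state
        return next_state, -reward, done, info   # attacker minimizes victim reward
```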
iclr_2020_HJxcP2EFDS
User-generated content contains opinionated texts not only in dominant languages (like English) but also in less dominant languages (like Amharic). However, negation handling techniques that support sentiment detection are not developed for such less dominant languages (i.e. Amharic). Negation handling is one of the challenging tasks for sentiment classification. Thus, this work builds negation handling schemes which enhance Amharic sentiment classification. The proposed negation handling framework combines a lexicon-based approach and a character n-gram based machine learning model. The performance of the framework is evaluated using the annotated Amharic News Comments. The system outperforms the best of all models and the baselines, with an accuracy of 98.0. The result is compared with the baselines (without negation handling and a word-level n-gram model).
This paper is quite difficult to read. The figures are pixelated. There is almost no organization to the text of the work. Descriptions are imprecise and lax: for example, "We apply basic preprocessing on Amharic News Comments. These include normalization of Amharic script symbols, tokenization, stop word removal, punctuation mark removal and so on. Amharic writing system is expressed using only consonants. To handle the features of the language is very challenging. We require conversion of Amharic scripts to consonant-vowel form. Particularly, before performing negation handling and stemming, the algorithm converts each Amharic word to its consonant vowel form."

What kind of normalization? What kind of tokenization? Did you use outside tools? If so, cite them or refer to the code. If not, then please explain your method more thoroughly. Don't assume all your readers know what consonant-vowel form is. "And so on" is not appropriate; you should explain what the "so on" is in a paper submitted to a conference. Don't bother telling us the features of the language are challenging. Just tell us how you did it.

The text switches between words like "cue" and "clue".
Even the font of the text changes part way through the paper.
Figure 2 is almost entirely uninformative.
The algorithms are poorly displayed and nearly unreadable.
The experiments are too limited.
iclr_2020_BkeyOxrYwH
In this paper we investigate an artificial agent's ability to perform task-focused tool synthesis via imagination. Our motivation is to explore the richness of information captured by the latent space of an object-centric generative model - and how to exploit it. In particular, our approach employs activation maximisation of a task-based performance predictor to optimise the latent variable of a structured latent-space model in order to generate tool geometries appropriate for the task at hand. We evaluate our model using a novel dataset of synthetic reaching tasks inspired by the cognitive sciences and behavioural ecology. In doing so we examine the model's ability to imagine tools for increasingly complex scenario types, beyond those seen during training. Our experiments demonstrate that the synthesis process modifies emergent, task-relevant object affordances in a targeted and deliberate way: the agents often specifically modify aspects of the tools which relate to meaningful (yet implicitly learned) concepts such as a tool's length, width and configuration. Our results therefore suggest that task-relevant object affordances are implicitly encoded as directions in a structured latent space shaped by experience.
This paper proposes an architecture for synthesizing tools to be used in a reaching task. Specifically, during training the agent jointly learns to segment an image of a set of three tools (via the MONet architecture) and to classify whether one of the tools will solve the given scene. At test time, one of the three tools is selected based on which seems most feasible, and then gradient descent is used to modify the latent representation of the tool in order to synthesize a new tool to (hopefully) solve the scene. The paper demonstrates that this approach can achieve ok performance on familiar scenes with familiar tools, but that it fails to generalize when exposed to unfamiliar scenes or unfamiliar tools. The paper reports a combination of quantitative results showing that optimizing the latent space can lead to successful synthesis in some cases, and qualitative results showing that the synthesized tools change along interpretable dimensions such as length, width, etc. The combination of these results suggests that the model has learned something about which tool dimensions are important for being able to solve the types of reaching tasks given in the paper.

While I think this paper tackles a very interesting, important, and challenging problem, I unfortunately feel it is not ready for publication at ICLR and thus recommend rejection. Specifically, (1) the particular task, results, and model are not very compelling, (2) there are no comparisons to meaningful alternatives, and (3) overall I am not quite sure what conclusions I should draw from the paper. However, given the coolness of the problem of tool synthesis, I definitely encourage the authors to continue working on this line of work!

1. The task, results, and model are not very compelling. Any of these three things alone would not necessarily be a problem, but given that all three are true the paper comes across as a bit underwhelming.
- First, while the task can be construed as a tool synthesis task, it doesn't come across to me as very ecologically valid. In fact, the task seems to be more like a navigation task than a tool synthesis task: what's required is simply to draw an unbroken line from one part of the scene to another, rather than actually generate a tool that has to be manipulated in an interesting way. Navigation has been studied extensively, while synthesis of tools that can be manipulated has not, which makes this task both not very novel and disappointing in comparison to what more ecologically-valid tool synthesis would look like. For example, consider a variation of the task where you would have to start the tool at the red region and move it to the green region. Many of the tools used here would become invalid since you wouldn't actually be able to fit them through the gaps (e.g. Figure 2E).
- Second, given that the "synthesis" task is more like a navigation task, the results are somewhat disappointing. When provided with a feasible solution, the model actually gets *worse* even in some of the in-sample scenes that it has seen during training (e.g. scene types C and D), which suggests that it hasn't actually learned a good generative model of tools. Generalization performance is pretty bad across the board and is only slightly better than random, which undermines the claim in the abstract that "Our experiments demonstrate that the synthesis process modifies emergent, task-relevant object affordances in a targeted and deliberate way".
While it's clear there is successful synthesis in some cases, I am not sure that the results support the claim that the synthesis is "targeted" or "deliberate" given how poor the overall performance is.
- Third, the model/architecture is a relatively straightforward combination of existing components and is highly specialized to the particular task. As mentioned above, this wouldn't necessarily be a problem if the task were more interesting (i.e. not just a navigation task) and if the results were better. I do think it is cool to see this use of MONet but I'm skeptical that the particular method of optimizing in the latent space is doing anything meaningful. While there is prior work that has optimized the latent space to achieve certain tasks (as is cited in the paper), there is also a large body of work on adversarial examples which demonstrates that optimizing in the latent space is also fraught with difficulty. I also suspect this is the reason why the results are not particularly good.

2. While I do appreciate the comparisons that are in the paper (to a "Random" version of TasMON that moves in a random direction in the latent space, and to a "FroMON" agent which is not allowed to backpropagate gradients from the classification loss into MONet), these comparisons are not particularly meaningful. The difference between FroMON performance and TasMON tool imagination performance (I didn't test tool utility) across tasks is not statistically significant (z(520, 544)=-0.8588, p=.38978), so I don't think it is valid to claim that "a task-aware latent space can still provide benefits." The Random baseline is a pretty weak baseline and it would be more interesting to compare to an alternative plausible architecture (for example, one which doesn't use a structured latent space, or which doesn't have a perceptual frontend and operates directly on a symbolic representation of the tools/scene).

3. Overall, I am not quite sure what I am supposed to get out of the paper. Is it that "task relevant object affordances are implicitly encoded as directions in a structured latent space shaped by experience"? If so, then the results do not support this claim and so I am not sure what to take away. Is it that the latent space encodes information about what makes a tool feasible? If so, then this is a bit of a weak argument---of *course* it must encode this information if it is able to do the classification task at all. Is it that tool synthesis is a challenging problem? If so, then the lack of strong or canonical baselines makes it hard to evaluate whether this is true (and the navigation-only synthesis task also undermines this a bit).

Some additional suggestions:
It would be good to include a discussion of other recent work on tool use such as Allen et al. (2019) and Baker et al. (2019), as well as on other related synthesis tasks such as Ha (2018) or Ganin et al. (2018).
The introduction states that "tool selection and manufacture – especially once demonstrated – is a significantly easier task than tool innovation". While this may be true, it is a bit misleading in the context of the paper as the agent is doing something more like tool selection and modification rather than tool innovation (and actually the in-sample scenes are more like "manufacture", which the agent doesn't always even do well on).
It would be helpful to more clearly explain scene types.
Here are some suggested phrasings: in-sample = familiar scenes with familiar tools, interpolation = novel scenes with familiar tools, extrapolation = novel scenes with novel tools.

I was originally confused about how psi' knew where to actually place the tool and at what orientation, and whether this was handled by the background part of the rendering process shown in Figure 1. I realized after reading the supplemental that this is not done by the agent itself but by separate code that tries to find the orientation and position of the tool. This should be explained more clearly in the main text.

In Table 1 it would be helpful to indicate which scene types are which (in-sample, interpolation, extrapolation).

Allen, K. R., Smith, K. A., & Tenenbaum, J. B. (2019). The Tools Challenge: Rapid Trial-and-Error Learning in Physical Problem Solving. arXiv preprint arXiv:1907.09620.
Baker, B., Kanitscheider, I., Markov, T., Wu, Y., Powell, G., McGrew, B., & Mordatch, I. (2019). Emergent tool use from multi-agent autocurricula. arXiv preprint arXiv:1909.07528.
Ganin, Y., Kulkarni, T., Babuschkin, I., Eslami, S. M., & Vinyals, O. (2018). Synthesizing programs for images using reinforced adversarial learning. arXiv preprint arXiv:1804.01118.
Ha, D. (2018). Reinforcement learning for improving agent design. arXiv preprint arXiv:1810.03779.
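For readers unfamiliar with the test-time procedure described earlier in this review (gradient descent on the selected tool's latent code against a task-success classifier), here is a small sketch of what that step could look like. It is my own illustration with assumed interfaces, not the authors' code: `decoder` and `success_classifier` stand in for the MONet decoder and the task-based performance predictor, and the step count and learning rate are arbitrary.

```python
# A minimal sketch of latent-space activation maximisation for tool imagination:
# start from the encoding z of a selected tool, take gradient steps on z to
# increase a predicted task-success score, then decode z back into a tool image.
import torch

def imagine_tool(z_init, scene_embedding, decoder, success_classifier,
                 steps=100, lr=0.05):
    z = z_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = success_classifier(z, scene_embedding)   # predicted task feasibility
        (-score).mean().backward()                       # ascend the predicted score
        opt.step()
    return decoder(z.detach())                           # render the imagined tool
```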
iclr_2020_rygvFyrKwH
An important goal in deep learning is to learn versatile, high-level feature representations of input data. However, standard networks' representations seem to possess shortcomings that, as we illustrate, prevent them from fully realizing this goal. In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks. It turns out that representations learned by robust models address the aforementioned shortcomings and make significant progress towards learning a high-level encoding of inputs. In particular, these representations are approximately invertible, while allowing for direct visualization and manipulation of salient input features. More broadly, our results indicate adversarial robustness as a promising avenue for improving learned representations.
### Summary
1. The paper proposes robustness to small adversarial perturbations as a prior when learning representations.
2. It demonstrates that representations that satisfy such a prior have non-trivial properties -- they are easier to visualize, are invertible (i.e. optimizing an input that produces the desired activation leads to reasonable images), and allow for direct manipulation of input features (by changing a feature in the representation space and then optimizing the image to satisfy this new representation).

### Non-blind review
THIS IS NOT A BLIND REVIEW. Reviewing this paper reminded me of a recent NeurIPS paper I read. I went back to that NeurIPS paper (to better compare the similarities and differences) only to find out:
1- The NeurIPS19 paper cites an earlier arxiv version of this paper as an inspiration for its approach.
2- It is from the exact same authors.
This, unfortunately, means I know who the authors are (however, there is no conflict of interest). More importantly, this paper is too similar to the NeurIPS paper and it's hard to review without taking into account the NeurIPS paper. In this review, I will treat the said NeurIPS19 paper as published work, and evaluate if this work adds more to the discourse. (I've refrained from naming the NeurIPS paper so the anonymity is maintained for other reviewers; the authors, I presume, would immediately know which paper I'm referring to.)

### Decisions with reasons
Even though I think the idea introduced in this paper is interesting, I would argue for rejecting this paper for the simple reason: it doesn't add much to the existing discourse. Using the proposed framework (i.e. learning robust representations), it demonstrates two phenomena. First, it shows that robust models allow feature inversion. Second, it shows that it's easily possible to directly visualize and manipulate features for such a model. (Both of these are achieved using the same idea: treating the input to the model as parameterized, and optimizing for a target activation.) These are interesting observations and show that robust models learn features that rely on salient parts of the input image. However, the NeurIPS19 paper shows this even more clearly. As a result, I'm not convinced that demonstrating the same phenomena with different examples is sufficient for this to be a standalone paper. (Perhaps the two papers could have been one single paper.)

### Questions
What is the rationale behind dividing examples showing robust models rely on salient parts of the input into two papers? Is there a semantic meaning to the grouping, i.e. showing feature inversion, feature manipulation, and visualization in one paper and generation, inpainting, translation, etc. in another? If I understand correctly, all of these examples exist because the robustly learned representation relies on the salient parts of the input and not on the non-robust features. If that is the case, it makes more sense to show all of these examples in a single paper.

### Update after Author's response
Since the authors have added and discussed the pertinent NeurIPS paper in this submission, I'm updating my score. I still think that the two papers are more similar than they might seem (see Re: Response 3 for more details).

### Update 2
I pointed out the similarities between the three contributions in this paper and the NeurIPS paper in "Re: Response #3" below. The authors replied to my concerns. I'm summarizing the author's position to my concerns followed by my response.
#### Author's Position
The authors agreed that feature manipulation and feature visualization are similar, but pointed out that the chain of dependency is this paper -> NeurIPS paper and not the other way around. They mentioned that the NeurIPS paper cites this paper and acknowledges this. Moreover, they argued that even if we consider the NeurIPS paper to be prior work, feature visualization is explored in much more detail in this paper.

#### Response
I think the direction of the chain of dependency is not that important since neither paper clearly builds on top of the other. The NeurIPS paper is published work now, and it makes sense to consider it prior work (especially since it is from the same authors). Moreover, during the NeurIPS review period, the authors did not cite this paper; they only added the citation in the camera-ready version. This means that during the NeurIPS review period, they did, in fact, take credit for the ideas used in feature painting. (The authors mention that they somehow did not, and just stated the method and showed the pictorial result in the NeurIPS paper. However, I don't see how it is possible to present a method and a pictorial result without citing other work and not take credit for the method and result.)

I would agree with the authors that this paper does go into more detail for feature visualization. More specifically, this paper also looks at visualizing individual features in the representation (the NeurIPS feature painting restricts the visualization using a mask) and demonstrates that the same feature can be used to visualize similar semantic concepts (such as red limbs) on multiple images. This is definitely interesting, but still very related to the feature-painting result. It would have made more sense to include these feature visualization results in the NeurIPS paper instead of adding them in a separate paper.

#### Author's Position 2
They disagreed that feature inversion (this paper) is similar to image generation (NeurIPS paper). I did acknowledge in my initial response that feature inversion is slightly more general than image generation; however, the authors suggest that they are completely different.

#### Response
I think representation inversion is more similar to generation than it might seem. The representation for an in-distribution image would correspond to a class with high probability. Maximizing a class probability would indirectly optimize for a representation (say R_0) that maximizes that class probability. Image generation, as presented in the NeurIPS paper, can be seen as inverting R_0. Moreover, the qualitative results for feature inversion, as presented in this paper, are not extraordinary. In the majority of the inverted images, I cannot classify the inverted image correctly. That shows the model is still not paying attention to the correct aspects of the input to do classification. As a result, this paper certainly does not solve the feature inversion problem. (Ideally, inverted features would highlight parts of the input necessary for making predictions and ignore other parts. Robust models, on the other hand, seem to be uniformly retaining all information of the image, including the background, and not highlighting the parts important for making predictions. As a result, many inverted images cannot be classified by humans.)

#### My current position
At the end of the day, both this and the NeurIPS paper are demonstration papers (they are empirically demonstrating an unintuitive phenomenon).
Both papers are demonstrating that robust models learn features that correspond to salient parts of the input. Even though both papers are nice, either one is sufficient to demonstrate the phenomenon. For this to be a stand-alone paper, the authors would have to do more in my opinion. One option would be to explore and compare different forms of adversarial robustness as priors (The paper is called "Adversarial Robustness as a Prior for Learned Representations" and not "L2 Adversarial Robustness as a Prior for Learned Representations," after all). Another option would be to see if such representations are 'quantitatively' better in some settings (Such as for transfer learning). In its current form, I feel that the two papers are too similar to recommend acceptance.
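For readers unfamiliar with the inversion procedure both papers rely on, here is a rough sketch of what "treating the input to the model as parameterized, and optimizing for a target activation" amounts to. It is my own illustration, not code from either paper; `feature_extractor` (the network up to the representation layer), the optimizer settings, the step count, and the pixel clamping are all assumptions.

```python
# A minimal sketch of feature inversion: treat the input image as the parameter
# and run gradient descent so that the network's representation of the image
# matches a target representation R_0.
import torch

def invert_representation(feature_extractor, target_rep, image_shape,
                          steps=500, lr=0.1):
    x = torch.randn(image_shape, requires_grad=True)     # start from noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.norm(feature_extractor(x) - target_rep)
        loss.backward()
        opt.step()
        x.data.clamp_(0, 1)                              # keep a valid image range
    return x.detach()
```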