paper_id,title,keywords,abstract,meta_review
1,"""Information Theoretic Model Predictive Q-Learning""","['entropy regularized reinforcement learning', 'information theoretic MPC', 'robotics']","""Model-free Reinforcement Learning (RL) algorithms work well in sequential decision-making problems when experience can be collected cheaply and model-based RL is effective when system dynamics can be modeled accurately. However, both of these assumptions can be violated in real world problems such as robotics, where querying the system can be prohibitively expensive and real-world dynamics can be difficult to model accurately. Although sim-to-real approaches such as domain randomization attempt to mitigate the effects of biased simulation, they can still suffer from optimization challenges such as local minima and hand-designed distributions for randomization, making it difficult to learn an accurate global value function or policy that directly transfers to the real world. In contrast to RL, Model Predictive Control (MPC) algorithms use a simulator to optimize a simple policy class online, constructing a closed-loop controller that can effectively contend with real-world dynamics. MPC performance is usually limited by factors such as model bias and the limited horizon of optimization. In this work, we present a novel theoretical connection between information theoretic MPC and entropy regularized RL and develop a Q-learning algorithm that can leverage biased models. We validate the proposed algorithm on sim-to-sim control tasks to demonstrate the improvements over optimal control and reinforcement learning from scratch. Our approach paves the way for deploying reinforcement learning algorithms on real-robots in a systematic manner.""","""The authors develop a novel connection between information theoretic MPC and entropy regularized RL. Using this connection, they develop Q learning algorithm that can work with biased models. They evaluate their proposed algorithm on several control tasks and demonstrate performance over the baseline methods. Unfortunately, reviewers were not convinced that the technical contribution of this work was sufficient. They felt that this was a fairly straightforward extension of MPPI. Furthermore, I would have expected a comparison to POLO. As the authors note, their approach is more theoretically principled, so it would be nice to see them outperforming POLO as a validation of their framework. Given the large number of high-quality submissions this year, I recommend rejection at this time."""
2,"""Cross-Lingual Ability of Multilingual BERT: An Empirical Study""","['Cross-Lingual Learning', 'Multilingual BERT']","""Recent work has exhibited the surprising cross-lingual abilities of multilingual BERT (M-BERT) -- surprising since it is trained without any cross-lingual objective and with no aligned data. In this work, we provide a comprehensive study of the contribution of different components in M-BERT to its cross-lingual ability. We study the impact of linguistic properties of the languages, the architecture of the model, and the learning objectives. The experimental study is done in the context of three typologically different languages -- Spanish, Hindi, and Russian -- and using two conceptually different NLP tasks, textual entailment and named entity recognition. Among our key conclusions is the fact that the lexical overlap between languages plays a negligible role in the cross-lingual success, while the depth of the network is an integral part of it. All our models and implementations can be found on our project page: pseudo-url.""","""This paper introduces a set of new analysis methods to try to better understand the reasons that multilingual BERT succeeds. The findings substantially bolster the hypothesis behind the original multilingual BERT work: that this kind of model discovers and uses substantial structural and semantic correspondences between languages in a fully unsupervised setting. This is a remarkable result with serious implications for representation learning work more broadly. All three reviewers saw ways in which the paper could be expanded or improved, and one reviewer argued that the novelty and scope of the paper are below the standard for ICLR. However, I am inclined to side with the two more confident reviewers and argue for acceptance. I don't see any substantive reasons to reject the paper, the methods are novel and appropriate (even in light of the prior work that exists on this question), and the results are surprising and relevant a high-profile ongoing discussion in the literature on representation learning for language."""
3,"""Omnibus Dropout for Improving The Probabilistic Classification Outputs of ConvNets""","['Uncertainty Estimation', 'Calibration', 'Deep Learning']","""While neural network models achieve impressive classification accuracy across different tasks, they can suffer from poor calibration of their probabilistic predictions. A Bayesian perspective has recently suggested that dropout, a regularization strategy popularly used during training, can be employed to obtain better probabilistic predictions at test time (Gal & Ghahramani, 2016a). However, empirical results so far have not been encouraging, particularly with convolutional networks. In this paper, through the lens of ensemble learning, we associate this unsatisfactory performance with the correlation between the models sampled with dropout. Motivated by this, we explore the use of various structured dropout techniques to promote model diversity and improve the quality of probabilistic predictions. We also propose an omnibus dropout strategy that combines various structured dropout methods. Using the SVHN, CIFAR-10 and CIFAR-100 datasets, we empirically demonstrate the superior performance of omnibus dropout relative to several widely used strong baselines in addition to regular dropout. Lastly, we show the merit of omnibus dropout in a Bayesian active learning application. ""","""The paper investigates how to improve the performance of dropout and proposes an omnibus dropout strategy to reduce the correlation between the individual models. All the reviewers felt that the paper requires more work before it can be accepted. In particular, the reviewers raised several concerns about novelty of the method relative to existing methods, significance of performance improvements and clarity of the presentation. I encourage the authors to revise the draft based on the reviewers feedback and resubmit to a different venue. """
4,"""Reflection-based Word Attribute Transfer""","['embedding', 'representation learning', 'analogy', 'geometry']","""We propose a word attribute transfer framework based on reflection to obtain a word vector with an inverted target attribute for a given word in a word embedding space. Word embeddings based on Pointwise Mutual Information (PMI) represent such analogic relations as king - man + woman \approx queen. These relations can be used for changing a words attribute from king to queen by changing its gender. This attribute transfer can be performed by subtracting a difference vector man - woman from king when we have explicit knowledge of the gender of given word king. However, this knowledge cannot be developed for various words and attributes in practice. For transferring queen into king in this analogy-based manner, we need to know that queen denotes a female and add the difference vector to it. In this work, we transfer such binary attributes based on an assumption that such transfer mapping will become identity mapping when we apply it twice. We introduce a framework based on reflection mapping that satisfies this property; queen should be transferred back to king with the same mapping as the transfer from king to queen. Experimental results show that the proposed method can transfer the word attributes of the given words, and does not change the words that do not have the target attributes.""","""This paper proposes a way to transform word vectors based on a binary attribute (e.g. male/female) based on reflection, with the property that applying the reflection operator twice, the vector for a word is left unchanged. By identifying parameterized mirror planes for each word, the proposed method can leave neutral words left unchanged. The paper received 3 weak accepts. There was initially one reject, but the revisions convinced the reviewer to update their score to a weak accept. Overall, the reviewers appreciated the idea of reflection-based binary word attribute transfer. suggestions, the authors made small improvements to the writing, added missing citations, as well as additional results for another word embedding (GloVE) and another dataset (antonyms). One of the main remaining weakness of the work, is still the small dataset. Although somewhat alleviated by the inclusion of the antonym dataset, this is still a weakness of the paper. The AC agrees that the paper has an nice idea and is well presented. However, the work is limited in scope and is likely to be of limited interest to the ICLR community and would be more appreciated in the NLP community. The authors are encouraged to improve upon the work, and resubmit to an appropriate venue. """
5,"""Why ADAM Beats SGD for Attention Models ""","['Optimization', 'ADAM', 'Deep learning']","""While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning, adaptive methods like Adam have been observed to outperform SGD across important tasks, such as attention models. The settings under which SGD performs poorly in comparison to Adam are not well understood yet. In this paper, we provide empirical and theoretical evidence that a heavy-tailed distribution of the noise in stochastic gradients is a root cause of SGD's poor performance. Based on this observation, we study clipped variants of SGD that circumvent this issue; we then analyze their convergence under heavy-tailed noise. Furthermore, we develop a new adaptive coordinate-wise clipping algorithm (ACClip) tailored to such settings. Subsequently, we show how adaptive methods like Adam can be viewed through the lens of clipping, which helps us explain Adam's strong performance under heavy-tail noise settings. Finally, we show that the proposed ACClip outperforms Adam for both BERT pretraining and finetuning tasks.""","""This paper tries to explain why Adam is better than sgd for training attention model. In specific, it first provides some empirical and theoretical evidence that a heavy-tailed distribution of the noise in stochastic gradients is the cause of SGD's worse performance. Then the authors studied a clipped variant of SGD that circumvents this issue, and revisited Adam through the lens of clipping. Overall, this paper conveys some interesting ideas. On the other hand, the theorems proved in this paper do not provide additional insight besides the intuition and the experiments are weak (hyperparameters are not carefully tuned). So even after author response, it still does not gather sufficient support from the reviewers. This is a borderline paper, and due to a rather limited number of papers the conference can accept, I encourage the authors to improve this paper and resubmit it to future conference. """
6,"""Distilled embedding: non-linear embedding factorization using knowledge distillation""","['Model Compression', 'Embedding Compression', 'Low Rank Approximation', 'Machine Translation', 'Natural Language Processing', 'Deep Learning']","""Word-embeddings are a vital component of Natural Language Processing (NLP) systems and have been extensively researched. Better representations of words have come at the cost of huge memory footprints, which has made deploying NLP models on edge-devices challenging due to memory limitations. Compressing embedding matrices without sacrificing model performance is essential for successful commercial edge deployment. In this paper, we propose Distilled Embedding, an (input/output) embedding compression method based on low-rank matrix decomposition with an added non-linearity. First, we initialize the weights of our decomposition by learning to reconstruct the full word-embedding and then fine-tune on the downstream task employing knowledge distillation on the factorized embedding. We conduct extensive experimentation with various compression rates on machine translation, using different data-sets with a shared word-embedding matrix for both embedding and vocabulary projection matrices. We show that the proposed technique outperforms conventional low-rank matrix factorization, and other recently proposed word-embedding matrix compression methods. ""","""This paper proposes to further distill token embeddings via what is effectively a simple autoencoder with a ReLU activation. All reviewers expressed concerns with the degree of technical contribution of this paper. As Reviewer 3 identifies, there are simple variants (e.g. end-to-end training with the factorized model) and there is no clear intuition for why the proposed method should outperform its variants as well as the other baselines (as noted by Reviewer 1). Reviewer 2 further expresses concerns about the merits of the propose approach over existing approaches, given the apparently small effect size of the improvement (let alone the possibility that the improvement may not in fact be statistically significant). """
7,"""GraphNVP: an Invertible Flow-based Model for Generating Molecular Graphs""","['Graph Neural Networks', 'graph generative model', 'invertible flow', 'graphNVP']","""We propose GraphNVP, an invertible flow-based molecular graph generation model. Existing flow-based models only handle node attributes of a graph with invertible maps. In contrast, our model is the first invertible model for the whole graph components: both of dequantized node attributes and adjacency tensor are converted into latent vectors through two novel invertible flows. This decomposition yields the exact likelihood maximization on graph-structured data. We decompose the generation of a graph into two steps: generation of (i) an adjacency tensor and(ii) node attributes. We empirically demonstrate that our model and the two-step generation efficiently generates valid molecular graphs with almost no duplicated molecules, although there are no domain-specific heuristics ingrained in the model. We also confirm that the sampling (generation) of graphs is faster in magnitude than other models in our implementation. In addition, we observe that the learned latent space can be used to generate molecules with desired chemical properties""","""The authors propose an invertible flow-based model for molecular graph generation. The reviewers like the idea but have several concerns: in particular, overfitting in the model, need for more experiments and missing related work. It is important for authors to address them in a future submission"""
8,"""Actor-Critic Approach for Temporal Predictive Clustering""","['Temporal Clustering', 'Predictive Clustering', 'Actor-Critic']","""Due to the wider availability of modern electronic health records (EHR), patient care data is often being stored in the form of time-series. Clustering such time-series data is crucial for patient phenotyping, anticipating patients prognoses by identifying similar patients, and designing treatment guidelines that are tailored to homogeneous patient subgroups. In this paper, we develop a deep learning approach for clustering time-series data, where each cluster comprises patients who share similar future outcomes of interest (e.g., adverse events, the onset of comorbidities, etc.). The clustering is carried out by using our novel loss functions that encourage each cluster to have homogeneous future outcomes. We adopt actor-critic models to allow back-propagation through the sampling process that is required for assigning clusters to time-series inputs. Experiments on two real-world datasets show that our model achieves superior clustering performance over state-of-the-art benchmarks and identifies meaningful clusters that can be translated into actionable information for clinical decision-making.""","""This paper proposes a reinforcement learning approach to clustering time-series data. The reviewers had several questions related to clarity and concerns related to the novelty of the method, the connection to RL, and experimental results. While the authors were able to address some of these questions and concerns in the rebuttal, the reviewers believe that the paper is not quite ready for publication."""
9,"""Efficient Wrapper Feature Selection using Autoencoder and Model Based Elimination""","['Wrapper Feature Selection', 'AMBER', 'Ranker Model', 'Generative Training', 'Wireless Subsampling']","""We propose a computationally efficient wrapper feature selection method - called Autoencoder and Model Based Elimination of features using Relevance and Redundancy scores (AMBER) - that uses a single ranker model along with autoencoders to perform greedy backward elimination of features. The ranker model is used to prioritize the removal of features that are not critical to the classification task, while the autoencoders are used to prioritize the elimination of correlated features. We demonstrate the superior feature selection ability of AMBER on 4 well known datasets corresponding to different domain applications via comparing the accuracies with other computationally efficient state-of-the-art feature selection techniques. Interestingly, we find that the ranker model that is used for feature selection does not necessarily have to be the same as the final classifier that is trained on the selected features. Finally, we hypothesize that overfitting the ranker model on the training set facilitates the selection of more salient features.""","""In this paper the authors propose a wrapper feature selection method that selects features based on 1) redundancy, i.e. the sensitivity of the downstream model to feature elimination, and 2) relevance, i.e. how the individual features impact the accuracy of the target task. The authors use a combination of the redundancy and relevance scores to eliminate the features. While acknowledging that the proposed model is potentially useful, the reviewers raised several important concerns that were viewed by AC as critical issues: (1) all reviewers agreed that the proposed approach lacks theoretical justification or convincing empirical evaluations in order to show its effectiveness and general applicability -- see R1s and R2s requests for evaluation with more datasets/diverse tasks to assess the applicability and generality of the proposed model; see R1s, R4s concerns regarding theoretical analysis; (2) all reviewers expressed concerns regarding the technical issue of combining the redundancy and relevance scores -- see R4s and R2s concerns regarding the individual/disjoint calibration of scores; see R1s suggestion to learn to reweigh the scores; (3) experimental setup requires improvement both in terms of clarity of presentation and implementation -- see R1s comment regarding the ranker model, see R4s concern regarding comparison with a standard deep learning model that does feature learning for a downstream task; both reviewers also suggested to analyse how autoencoders with different capacity could impact the results. Additionally R1 raised a concern regarding relevant recent works that were overlooked. The authors have tried to address some of these concerns during rebuttal, but an insufficient empirical evidence still remains a critical issue of this work. To conclude, the reviewers and AC suggest that in its current state the manuscript is not ready for a publication. We hope the reviews are useful for improving and revising the paper. """
10,"""Action Semantics Network: Considering the Effects of Actions in Multiagent Systems""","['multiagent coordination', 'multiagent learning']","""In multiagent systems (MASs), each agent makes individual decisions but all of them contribute globally to the system evolution. Learning in MASs is difficult since each agent's selection of actions must take place in the presence of other co-learning agents. Moreover, the environmental stochasticity and uncertainties increase exponentially with the increase in the number of agents. Previous works borrow various multiagent coordination mechanisms into deep learning architecture to facilitate multiagent coordination. However, none of them explicitly consider action semantics between agents that different actions have different influences on other agents. In this paper, we propose a novel network architecture, named Action Semantics Network (ASN), that explicitly represents such action semantics between agents. ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them. ASN can be easily combined with existing deep reinforcement learning (DRL) algorithms to boost their performance. Experimental results on StarCraft II micromanagement and Neural MMO show ASN significantly improves the performance of state-of-the-art DRL approaches compared with several network architectures.""","""The authors address the challenge of sample-efficient learning in multi-agent systems. They propose a model that distinguishes actions in terms of their semantics, specifically in terms of whether they influence the acting agent and environment or whether they influence other agents. This additional structure is shown to substantially benefit learning speed when composed with a range of state of the art multi-agent RL algorithms. During the rebuttal, technical questions were well addressed and the overall quality of the paper improved. The paper provides interesting novel insights on how the proposed structure improves learning."""
11,"""Generative Latent Flow""","['Generative Model', 'Auto-encoder', 'Normalizing Flow']","""In this work, we propose the Generative Latent Flow (GLF), an algorithm for generative modeling of the data distribution. GLF uses an Auto-encoder (AE) to learn latent representations of the data, and a normalizing flow to map the distribution of the latent variables to that of simple i.i.d noise. In contrast to some other Auto-encoder based generative models, which use various regularizers that encourage the encoded latent distribution to match the prior distribution, our model explicitly constructs a mapping between these two distributions, leading to better density matching while avoiding over regularizing the latent variables. We compare our model with several related techniques, and show that it has many relative advantages including fast convergence, single stage training and minimal reconstruction trade-off. We also study the relationship between our model and its stochastic counterpart, and show that our model can be viewed as a vanishing noise limit of VAEs with flow prior. Quantitatively, under standardized evaluations, our method achieves state-of-the-art sample quality and diversity among AE based models on commonly used datasets, and is competitive with GANs' benchmarks. ""","""The authors propose generative latent flow which uses autoencoder to learn latent representations and normalizing flows to map that distribution. The reviewers feel that there is limited novelty since it is a straightforward combination of existing ideas. """
12,"""Reinforcement Learning with Probabilistically Complete Exploration""","['Reinforcement Learning', 'Exploration', 'sparse rewards', 'learning from demonstration']","""Balancing exploration and exploitation remains a key challenge in reinforcement learning (RL). State-of-the-art RL algorithms suffer from high sample complexity, particularly in the sparse reward case, where they can do no better than to explore in all directions until the first positive rewards are found. To mitigate this, we propose Rapidly Randomly-exploring Reinforcement Learning (R3L). We formulate exploration as a search problem and leverage widely-used planning algorithms such as Rapidly-exploring Random Tree (RRT) to find initial solutions. These solutions are used as demonstrations to initialize a policy, then refined by a generic RL algorithm, leading to faster and more stable convergence. We provide theoretical guarantees of R3L exploration finding successful solutions, as well as bounds for its sampling complexity. We experimentally demonstrate the method outperforms classic and intrinsic exploration techniques, requiring only a fraction of exploration samples and achieving better asymptotic performance.""","""This was a borderline paper, with both pros and cons. In the end, it was not considered sufficiently mature to accept in its current form. The reviewers all criticized the assumptions needed, and lamented the lack of clarity around the distinction between reinforcement learning and planning. The paper requires a clearer contribution, based on a stronger justification of the approach and weakening of the assumptions. The submitted comments should be able to help the authors strengthen this work."""
13,"""End-to-end named entity recognition and relation extraction using pre-trained language models""","['named entity recognition', 'relation extraction', 'information extraction', 'information retrival', 'transfer learning', 'multi-task learning', 'BERT', 'transformers', 'language models']","""Named entity recognition (NER) and relation extraction (RE) are two important tasks in information extraction and retrieval (IE & IR). Recent work has demonstrated that it is beneficial to learn these tasks jointly, which avoids the propagation of error inherent in pipeline-based systems and improves performance. However, state-of-the-art joint models typically rely on external natural language processing (NLP) tools, such as dependency parsers, limiting their usefulness to domains (e.g. news) where those tools perform well. The few neural, end-to-end models that have been proposed are trained almost completely from scratch. In this paper, we propose a neural, end-to-end model for jointly extracting entities and their relations which does not rely on external NLP tools and which integrates a large, pre-trained language model. Because the bulk of our model's parameters are pre-trained and we eschew recurrence for self-attention, our model is fast to train. On 5 datasets across 3 domains, our model matches or exceeds state-of-the-art performance, sometimes by a large margin.""","""This paper presents an end-to-end technique for named entity recognition, that uses pre-trained models so as to avoid long training times, and evaluates it against several baselines. The paper was reviewed by three experts working in this area. R1 recommends Reject, giving the opinion that although the paper is well-written and results are good, they feel the technique itself has little novelty and that the main reason the technique works well is using BERT. R2 recommends Weak Reject based on similar reasoning, that the approach consists of existing components (albeit combined in a novel way) and suggest some ablation experiments to isolate the source of the good performance. R3 recommends Weak Accept but feels it is ""unsurprising"" that BERT allows for faster training and higher accuracy. In their response, authors emphasize that the application of pretraining to named entity recognition is new, and that theirs is a methodological advance, not purely a practical one (as R1 suggests and other reviews imply). They also argue it is not possible to do a fair ablation study that removes BERT, but make an attempt. The reviewers chose to keep their scores after the response. Given the split decision, the AC also read the paper. It is clear the paper has significant merit and significant practical value, as the reviews indicate. However, given that three expert reviewers -- all of whom are NLP researchers at top institutions -- feel that the contribution of the paper is weak (in the context of the expectations of ICLR) makes it not possible for us to recommend acceptance at this time. """
14,"""Disentangling neural mechanisms for perceptual grouping""","['Perceptual grouping', 'visual cortex', 'recurrent feedback', 'horizontal connections', 'top-down connections']","""Forming perceptual groups and individuating objects in visual scenes is an essential step towards visual intelligence. This ability is thought to arise in the brain from computations implemented by bottom-up, horizontal, and top-down connections between neurons. However, the relative contributions of these connections to perceptual grouping are poorly understood. We address this question by systematically evaluating neural network architectures featuring combinations bottom-up, horizontal, and top-down connections on two synthetic visual tasks, which stress low-level ""Gestalt"" vs. high-level object cues for perceptual grouping. We show that increasing the difficulty of either task strains learning for networks that rely solely on bottom-up connections. Horizontal connections resolve straining on tasks with Gestalt cues by supporting incremental grouping, whereas top-down connections rescue learning on tasks with high-level object cues by modifying coarse predictions about the position of the target object. Our findings dissociate the computational roles of bottom-up, horizontal and top-down connectivity, and demonstrate how a model featuring all of these interactions can more flexibly learn to form perceptual groups.""","""All the reviewers recommend acceptance. The reviews found the paper to be interesting with substantial insights. """
15,"""Unsupervised Domain Adaptation through Self-Supervision""",['unsupervised domain adaptation'],"""This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data. Like much of previous work, we seek to align the learned representations of the source and target domains while preserving discriminability. The way we accomplish alignment is by learning to perform auxiliary self-supervised task(s) on both domains simultaneously. Each self-supervised task brings the two domains closer together along the direction relevant to that task. Training this jointly with the main task classifier on the source domain is shown to successfully generalize to the unlabeled target domain. The presented objective is straightforward to implement and easy to optimize. We achieve state-of-the-art results on four out of seven standard benchmarks, and competitive results on segmentation adaptation. We also demonstrate that our method composes well with another popular pixel-level adaptation method.""","""Thanks for your detailed replies to the reviewers, which helped us a lot to clarify several issues. Although the paper discusses an interesting topic and contains potentially interesting idea, its novelty is limited. Given the high competition of ICLR2020, this paper is still below the bar unfortunately."""
16,"""Reinforcement Learning with Chromatic Networks""","['reinforcement', 'learning', 'chromatic', 'networks', 'partitioning', 'efficient', 'neural', 'architecture', 'search', 'weight', 'sharing', 'compactification']","""We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining ENAS and ES in a highly scalable and intuitive way. By defining the combinatorial search space of NAS to be the set of different edge-partitionings (colorings) into same-weight classes, we represent compact architectures via efficient learned edge-partitionings. For several RL tasks, we manage to learn colorings translating to effective policies parameterized by as few as 17 weight parameters, providing >90 % compression over vanilla policies and 6x compression over state-of-the-art compact policies based on Toeplitz matrices, while still maintaining good reward. We believe that our work is one of the first attempts to propose a rigorous approach to training structured neural network architectures for RL problems that are of interest especially in mobile robotics with limited storage and computational resources.""","""This paper describes a method for learning compact RL policies suitable for mobile robotic applications with limited storage. The proposed pipeline is a scalable combination of efficient neural architecture search (ENAS) and evolution strategies (ES). Empirical evaluations are conducted on various OpenAI Gym and quadruped locomotion tasks, producing policies with as little as 10s of weight parameters, and significantly increased compression-reward trade-offs are obtained relative to some existing compact policies. Although reviewers appreciated certain aspects of this paper, after the rebuttal period there was no strong support for acceptance and several unsettled points were expressed. For example, multiple reviewers felt that additional baseline comparisons were warranted to better calibrate performance, e.g., random coloring, wider range of generic compression methods, classic architecture search methods, etc. Moreover, one reviewer remained concerned that the scope of this work was limited to very tiny model sizes whereby, at least in many cases, running the uncompressed model might be adequate."""
17,"""Learning transport cost from subset correspondence""",[],"""Learning to align multiple datasets is an important problem with many applications, and it is especially useful when we need to integrate multiple experiments or correct for confounding. Optimal transport (OT) is a principled approach to align datasets, but a key challenge in applying OT is that we need to specify a cost function that accurately captures how the two datasets are related. Reliable cost functions are typically not available and practitioners often resort to using hand-crafted or Euclidean cost even if it may not be appropriate. In this work, we investigate how to learn the cost function using a small amount of side information which is often available. The side information we consider captures subset correspondence---i.e. certain subsets of points in the two data sets are known to be related. For example, we may have some images labeled as cars in both datasets; or we may have a common annotated cell type in single-cell data from two batches. We develop an end-to-end optimizer (OT-SI) that differentiates through the Sinkhorn algorithm and effectively learns the suitable cost function from side information. On systematic experiments in images, marriage-matching and single-cell RNA-seq, our method substantially outperform state-of-the-art benchmarks. ""","""The paper proposes an algorithm for learning a transport cost function that accurately captures how two datasets are related by leveraging side information such as a subset of correctly labeled points. The reviewers believe that this is an interesting and novel idea. There were several questions and comments, which the authors adequately addressed. I recommend that the paper be accepted."""
18,"""Higher-Order Function Networks for Learning Composable 3D Object Representations""","['computer vision', '3d reconstruction', 'deep learning', 'representation learning']","""We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second 'mapping' network. This mapping network can be used to reconstruct an object by applying its encoded transformation to points randomly sampled from a simple geometric space, such as the unit sphere. We study the effectiveness of our method through various experiments on subsets of the ShapeNet dataset. We find that the proposed approach can reconstruct encoded objects with accuracy equal to or exceeding state-of-the-art methods with orders of magnitude fewer parameters. Our smallest mapping network has only about 7000 parameters and shows reconstruction quality on par with state-of-the-art object decoder architectures with millions of parameters. Further experiments on feature mixing through the composition of learned functions show that the encoding captures a meaningful subspace of objects.""","""The submission presents an approach to single-view 3D reconstruction. The approach is quite creative and involves predicting the weights of a network that is then applied to a point set. The presentation is good. The experimental protocol is well-informed and the results are convincing. The reviewers' concerns have largely been addressed by the authors' responses and the revision. In particular, R2, who gave a ""3"", posted ""I would now advise to raise my score (3 previously) to a be in line with the 6: Weak Accept given by the other reviewers."" This means that all three reviewers recommend accepting the paper. The AC agrees."""
19,"""Augmenting Genetic Algorithms with Deep Neural Networks for Exploring the Chemical Space""","['Generative model', 'Chemical Space', 'Inverse Molecular Design']","""Challenges in natural sciences can often be phrased as optimization problems. Machine learning techniques have recently been applied to solve such problems. One example in chemistry is the design of tailor-made organic materials and molecules, which requires efficient methods to explore the chemical space. We present a genetic algorithm (GA) that is enhanced with a neural network (DNN) based discriminator model to improve the diversity of generated molecules and at the same time steer the GA. We show that our algorithm outperforms other generative models in optimization tasks. We furthermore present a way to increase interpretability of genetic algorithms, which helped us to derive design principles""","""Paper received reviews of A, WA, WR. AC has carefully read all reviews/responses. R1 is less experienced in this area. AC sides with R2,R3 and feels paper should be accepted. Interesting topic and interesting problem. Authors are encouraged to strengthen experiments in final version. """
20,"""Deep 3D Pan via local adaptive ""t-shaped"" convolutions with global and local adaptive dilations""","['Deep learning', 'Stereoscopic view synthesis', 'Monocular depth', 'Deep 3D Pan']","""Recent advances in deep learning have shown promising results in many low-level vision tasks. However, solving the single-image-based view synthesis is still an open problem. In particular, the generation of new images at parallel camera views given a single input image is of great interest, as it enables 3D visualization of the 2D input scenery. We propose a novel network architecture to perform stereoscopic view synthesis at arbitrary camera positions along the X-axis, or Deep 3D Pan, with t-shaped adaptive kernels equipped with globally and locally adaptive dilations. Our proposed network architecture, the monster-net, is devised with a novel t-shaped adaptive kernel with globally and locally adaptive dilation, which can efficiently incorporate global camera shift into and handle local 3D geometries of the target images pixels for the synthesis of naturally looking 3D panned views when a 2-D input image is given. Extensive experiments were performed on the KITTI, CityScapes, and our VICLAB_STEREO indoors dataset to prove the efficacy of our method. Our monster-net significantly outperforms the state-of-the-art method (SOTA) by a large margin in all metrics of RMSE, PSNR, and SSIM. Our proposed monster-net is capable of reconstructing more reliable image structures in synthesized images with coherent geometry. Moreover, the disparity information that can be extracted from the t-shaped kernel is much more reliable than that of the SOTA for the unsupervised monocular depth estimation task, confirming the effectiveness of our method.""","""Two reviewers recommend acceptance while one is negative. The authors propose t-shaped kernels for view synthesis, focusing on stereo images. AC finds the problem and method interesting and the results to be sufficiently convincing to warrant acceptance."""
21,"""Distance-Based Learning from Errors for Confidence Calibration""","['Confidence Calibration', 'Uncertainty Estimation', 'Prototypical Learning']","""Deep neural networks (DNNs) are poorly calibrated when trained in conventional ways. To improve confidence calibration of DNNs, we propose a novel training method, distance-based learning from errors (DBLE). DBLE bases its confidence estimation on distances in the representation space. In DBLE, we first adapt prototypical learning to train classification models. It yields a representation space where the distance between a test sample and its ground truth class center can calibrate the model's classification performance. At inference, however, these distances are not available due to the lack of ground truth labels. To circumvent this by inferring the distance for every test sample, we propose to train a confidence model jointly with the classification model. We integrate this into training by merely learning from mis-classified training samples, which we show to be highly beneficial for effective learning. On multiple datasets and DNN architectures, we demonstrate that DBLE outperforms alternative single-model confidence calibration approaches. DBLE also achieves comparable performance with computationally-expensive ensemble approaches with lower computational cost and lower number of parameters.""","""All reviewers voted to accept this paper. The AC recommends acceptance."""
22,"""Partial Simulation for Imitation Learning""","['Reinforcement Learning', 'Imitation Learning', 'Behavior Cloning', 'Partial Simulation']","""Model-based imitation learning methods require full knowledge of the transition kernel for policy evaluation. In this work, we introduce the Expert Induced Markov Decision Process (eMDP) model as a formulation of solving imitation problems using Reinforcement Learning (RL), when only partial knowledge about the transition kernel is available. The idea of eMDP is to replace the unknown transition kernel with a synthetic kernel that: a) simulate the transition of state components for which the transition kernel is known (s_r), and b) extract from demonstrations the state components for which the kernel is unknown (s_u). The next state is then stitched from the two components: s={s_r,s_u}. We describe in detail the recipe for building an eMDP and analyze the errors caused by its synthetic kernel. Our experiments include imitation tasks in multiplayer games, where the agent has to imitate one expert in the presence of other experts for whom we cannot provide a transition model. We show that combining a policy gradient algorithm with our model achieves superior performance compared to the simulation-free alternative.""","""The paper introduces the concept of an Expert Induced MDP (eMDP) to address imitation learning settings where environment dynamics are part known / part unknown. Based on the formulation a model-based imitation learning approach is derived and the authors obtain theoretical guarantees. Empirical validation focuses on comparison to behavior cloning. Reviewers raised concerns about the size of the contribution. For example, it is unclear to what degree the assumptions made here would hold in practical settings."""
23,"""On the Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks""","['variational Bayes', 'Bayesian neural networks', 'mean field']","""Variational Bayesian Inference is a popular methodology for approximating posterior distributions in Bayesian neural networks. Recent work developing this class of methods has explored ever richer parameterizations of the approximate posterior in the hope of improving performance. In contrast, here we share a curious experimental finding that suggests instead restricting the variational distribution to a more compact parameterization. For a variety of deep Bayesian neural networks trained using Gaussian mean-field variational inference, we find that the posterior standard deviations consistently exhibits strong low-rank structure after convergence. This means that by decomposing these variational parameters into a low-rank factorization, we can make our variational approximation more compact without decreasing the models' performance. What's more, we find that such factorized parameterizations are easier to train since they improve the signal-to-noise ratio of stochastic gradient estimates of the variational lower bound, resulting in faster convergence.""","""This paper proposes to reduce the number of variational parameters for mean-field VI. A low-rank approximation is used for this purpose. Results on a few small problems are reported. As R3 has pointed out, the main reason to reject this paper is the lack of comparison of uncertainty estimates. I also agree that, recent Adam-like optimizers do use preconditioning that can be interpreted as variances, so it is not clear why reducing this will give better results. I agree with R2's comments about missing the ""point estimate"" baseline. Also the reason for rank 1,2,3 giving better accuracies is unclear and I think the reasons provided by the authors is speculative. I do believe that reducing the parameterization is a reasonable idea and could be useful. But it is not clear if the proposal of this paper is the right one. Due to this reason, I recommend to reject this paper. However, I highly encourage the authors to improve their paper taking these points into account."""
24,"""GResNet: Graph Residual Network for Reviving Deep GNNs from Suspended Animation""","['Graph Neural Networks', 'Node Classification', 'Representation Learning']","""The existing graph neural networks (GNNs) based on the spectral graph convolutional operator have been criticized for its performance degradation, which is especially common for the models with deep architectures. In this paper, we further identify the suspended animation problem with the existing GNNs. Such a problem happens when the model depth reaches the suspended animation limit, and the model will not respond to the training data any more and become not learnable. Analysis about the causes of the suspended animation problem with existing GNNs will be provided in this paper, whereas several other peripheral factors that will impact the problem will be reported as well. To resolve the problem, we introduce the GRESNET (Graph Residual Network) framework in this paper, which creates extensively connected highways to involve nodes raw features or intermediate representations throughout the graph for all the model layers. Different from the other learning settings, the extensive connections in the graph data will render the existing simple residual learning methods fail to work. We prove the effectiveness of the introduced new graph residual terms from the norm preservation perspective, which will help avoid dramatic changes to the nodes representations between sequential layers. Detailed studies about the GRESNET framework for many existing GNNs, including GCN, GAT and LOOPYNET, will be reported in the paper with extensive empirical experiments on real-world benchmark datasets.""","""This paper studies the suspended animation limit of various graph neural networks (GNNs) and provides some theoretical analysis to explain its cause. To overcome the limitation, the authors propose Graph Residual Network (GRESNET) framework to involve nodes raw features or intermediate representations throughout the graph for all the model layers. The main concern of the reviewers is: the assumption made for theoretical analysis that the fully connected layer is identical mapping is too stringent. The paper does not gather sufficient support from the reviewers to merit acceptance, even after author response and reviewer discussion. I thus recommend reject."""
25,"""DeepPCM: Predicting Protein-Ligand Binding using Unsupervised Learned Representations""","['Unsupervised Representation Learning', 'Computational biology', 'computational chemistry', 'protein-ligand binding']","""In-silico protein-ligand binding prediction is an ongoing area of research in computational chemistry and machine learning based drug discovery, as an accurate predictive model could greatly reduce the time and resources necessary for the detection and prioritization of possible drug candidates. Proteochemometric modeling (PCM) attempts to make an accurate model of the protein-ligand interaction space by combining explicit protein and ligand descriptors. This requires the creation of information-rich, uniform and computer interpretable representations of proteins and ligands. Previous work in PCM modeling relies on pre-defined, handcrafted feature extraction methods, and many methods use protein descriptors that require alignment or are otherwise specific to a particular group of related proteins. However, recent advances in representation learning have shown that unsupervised machine learning can be used to generate embeddings which outperform complex, human-engineered representations. We apply this reasoning to propose a novel proteochemometric modeling methodology which, for the first time, uses embeddings generated via unsupervised representation learning for both the protein and ligand descriptors. We evaluate performance on various splits of a benchmark dataset, including a challenging split that tests the models ability to generalize to proteins for which bioactivity data is greatly limited, and we find that our method consistently outperforms state-of-the-art methods.""","""This paper uses unsupervised learning to create useful representations to improve the performance of models in predicting protein-ligand binding. After reviewers had time to consider each other's comments, there was consensus that the current work is too lacking in novelty on the modeling side to warrant publication in ICLR. Additionally, current experiments are lacking comparisons with important baselines. The work in its current form may be better suited for a domain journal. """
26,"""A General Upper Bound for Unsupervised Domain Adaptation""","['unsupervised domain adaptation', 'upper bound', 'joint error', 'hypothesis space constraint', 'cross margin discrepancy']","""In this work, we present a novel upper bound of target error to address the problem for unsupervised domain adaptation. Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks. Furthermore, Ben-David et al. (2010) provide an upper bound for target error when transferring the knowledge, which can be summarized as minimizing the source error and distance between marginal distributions simultaneously. However, common methods based on the theory usually ignore the joint error such that samples from different classes might be mixed together when matching marginal distribution. And in such case, no matter how we minimize the marginal discrepancy, the target error is not bounded due to an increasing joint error. To address this problem, we propose a general upper bound taking joint error into account, such that the undesirable case can be properly penalized. In addition, we utilize constrained hypothesis space to further formalize a tighter bound as well as a novel cross margin discrepancy to measure the dissimilarity between hypotheses which alleviates instability during adversarial learning. Extensive empirical evidence shows that our proposal outperforms related approaches in image classification error rates on standard domain adaptation benchmarks.""","""Given two distributions, source and target, the paper presents an upper bound on the target risk of a classifier in terms of its source risk and other terms comparing the risk under the source/target input distribution and target/source labeling function. In the end, the bound is shown to be minimized by the true labeling function for the source, and at this minimum, the value of the bound is shown to also control the ""joint error"", i.e., the best achievable risk on both target and source by a single classifier. The point of the analysis is to go beyond the target risk bound presented by Ben-David et al. 2010 that is in terms of the discrepancy between the source and target and the performance of the source labeling function on the target or vice versa, whichever is smaller. Apparently, concrete domain adaptation methods ""based on"" the Ben-David et al. bound do not end up controlling the joint error. After various heuristic arguments, the authors develop an algorithm for unsupervised domain adaptation based on their bound in terms of a two-player game. Only one reviewer ended up engaging with the authors in a nontrivial way. This review also argued for (weak) acceptance. Another reviewer mostly raised minor issues about grammar/style and got confused by the derivation of the ""general"" bound, which I've checked is ok. The third reviewer raised some issues around the realizability assumption and also asked for better understanding as to what aspects of the new proposal are responsible for the improved performance, e.g., via an ablation study. I'm sympathetic to reviewer 1, even though I wish they had engaged with the rebuttal. I don't believe the revision included any ablation study. I think this would improve the paper. I don't think the issues raised by reviewer 3 rise to the level of rejection, especially since their main technical concern is due to their own confusion. Reviewer 2 argues for weak acceptance. 
However, if there was support for this paper, it wasn't enough for reviewers to engage with each other, despite my encouragement, which was disappointing."""
27,"""Mixture Distributions for Scalable Bayesian Inference""","['uncertainty estimation', 'Deep Ensembles', 'Adverserial Robustness']","""Bayesian Neural Networks (BNNs) provides a mathematically grounded framework to quantify uncertainty. However BNNs are computationally inefficient, thus are generally not employed on complicated machine learning tasks. Deep Ensembles were introduced as a Bootstrap inspired frequentist approach to the community, as an alternative to BNNs. Ensembles of deterministic and stochastic networks are a good uncertainty estimator in various applications (Although, they are criticized for not being Bayesian). We show Ensembles of deterministic and stochastic Neural Networks can indeed be cast as an approximate Bayesian inference. Deep Ensembles have another weakness of having high space complexity, we provide an alternative to it by modifying the original Bayes by Backprop (BBB) algorithm to learn more general concrete mixture distributions over weights. We show our methods and its variants can give better uncertainty estimates at a significantly lower parametric overhead than Deep Ensembles. We validate our hypothesis through experiments like non-linear regression, predictive uncertainty estimation, detecting adversarial images and exploration-exploitation trade-off in reinforcement learning.""","""This paper proposes to use mixture distributions to improve uncertainty estimates in BNNs. Ensemble methods are interpreted as a Bayesian mixture posterior approximation. To reduce the computation, a modification to BBB is provided based on a concrete mixture distribution. Both R1 and R3 have given useful feedback. It is clear that interpretation of ensemble as a Bayesian posterior is well known, and some of them also have theoretical issues. The experiment to clearly comparing proposed mixture posterior to more commonly used mixture distribution is also necessary. Due to these reasons, I recommend to reject this paper. I encourage the authors to use reviewers feedback to improve the paper."""
28,"""Inferring Dynamical Systems with Long-Range Dependencies through Line Attractor Regularization""","['Recurrent Neural Networks', 'Nonlinear State Space Models', 'Generative Models', 'Long short-term memory', 'vanishing/exploding gradient problem', 'Nonlinear dynamics', 'Interpretable machine learning', 'Time series analysis']","""Vanilla RNN with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. vanishing gradients problem. Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNNs recurrency matrix, which, however, comes with limitations to its expressive power with regards to dynamical systems phenomena like chaos or multi-stability. Here, we instead suggest a regularization scheme that pushes part of the RNNs latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales. We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales.""","""The paper proposes an interesting idea to leave a very simple form for piecewise-linear RNN, but separate units in to two types, one of which acts as memory. The ""memory"" units are penalized towards the linear attractor parameters, i.e. making elements of pseudo-formula close to 1 and off-diagonal of pseudo-formula close to pseudo-formula . The benchmarks are presented that confirm the efficiency of the model. The reviewer opinion were mixed; one ""1"", one ""3"" and one ""6""; the Reviewer1 is far too negative and some of his claims are not very constructive, the ""positive"" reviewer is very short. Finally, the last reviewer raised a question about the actual quality on the results. This is not addressed. Although there is a motivation for such partial regularization, the main practical question is how many ""memory neurons"" are needed. I looked through the paper - this addressed only in the supplementary, where the value of pseudo-formula is mentioned (=0.5 M). For = M$ it is the L2 penalty; what happens if the fraction is 0.1, 0.2, ... and more? A very crucial hyperparameter (and of course, smart selection of it can not be worse than L2RNN). This study is lacking. In my opinion, one can also introduce weights and sparsity constraints on them (in order to detect the number of ""memory"" neurons more-or less automatically). Although I feel this paper has a potential, it is not still ready for publication and could be significantly improved."""
29,"""Quaternion Equivariant Capsule Networks for 3D Point Clouds""","['3d', 'capsule networks', 'pointnet', 'quaternion', 'equivariant networks', 'rotations', 'local reference frame']","""We present a 3D capsule architecture for processing of point clouds that is equivariant with respect to the SO(3) rotation group, translation and permutation of the unordered input sets. The network operates on a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end equivariance through a novel 3D quaternion group capsule layer, including an equivariant dynamic routing procedure. The capsule layer enables us to disentangle geometry from pose, paving the way for more informative descriptions and a structured latent space. In the process, we theoretically connect the process of dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving iterative re-weighted least squares (IRLS) problems with provable convergence properties, enabling robust pose estimation between capsule layers. Due to the sparse equivariant quaternion capsules, our architecture allows joint object classification and orientation estimation, which we validate empirically on common benchmark datasets. ""","""This paper presents a capsule network to handle 3d point clouds which is equivariant to SO(3) rotations. It also provides the theoretical analysis to connect the dynamic routing approach to the Generalized Weiszfeld Iterations. The equivariant property of the method is demonstrated on classification and orientation estimation tasks of 3D shapes. While the technical contribution of the method is sound, the main concern raised by the reviewers was the lack of details in the presentation of methodology and results. Although the authors have made substantial efforts to update the paper, some reviewers were still not convinced and thus the scores remained the same. The paper was on the very borderline, but because of the limited capacity, I regret that I have to recommend rejection. Invariances and equivariances are indeed important topics in representation learning, for which the capsule network is known as one of the promising approaches but still not well investigated compared to other standard architectures. I encourage authors to resubmit the paper taking in the reviewers' comments."""
30,"""Structured consistency loss for semi-supervised semantic segmentation""","['semi-supervised learning', 'semantic segmentation', 'structured prediction', 'structured consistency loss']","""The consistency loss has played a key role in solving problems in recent studies on semi-supervised learning. Yet extant studies with the consistency loss are limited to its application to classification tasks; extant studies on semi-supervised semantic segmentation rely on pixel-wise classification, which does not reflect the structured nature of characteristics in prediction. We propose a structured consistency loss to address this limitation of extant studies. Structured consistency loss promotes consistency in inter-pixel similarity between teacher and student networks. Specifically, collaboration with CutMix optimizes the efficient performance of semi-supervised semantic segmentation with structured consistency loss by reducing computational burden dramatically. The superiority of proposed method is verified with the Cityscapes; The Cityscapes benchmark results with validation and with test data are 81.9 mIoU and 83.84 mIoU respectively. This ranks the first place on the pixel-level semantic labeling task of Cityscapes benchmark suite. To the best of our knowledge, we are the first to present the superiority of state-of-the-art semi-supervised learning in semantic segmentation.""","""This submission proposes to combine the CutMix data augmentation of Yun et al 2019 with the standard consistency loss of and the structured consistency loss of Liu et al 2019 and applies the resulting approach to the Cityscapes dataset. The reviewers were unanimous that the paper is not suitable for publication at ICLR due to a lack of novelty in the method. No rebuttal was provided."""
31,"""Phase Transitions for the Information Bottleneck in Representation Learning""","['Information Theory', 'Representation Learning', 'Phase Transition']","""In the Information Bottleneck (IB), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what's their relationship with the dataset and the learned representation? In this paper, we set out to answer these questions by studying multiple phase transitions in the IB objective: IB_[p(z|x)] = I(X; Z) I(Y; Z) defined on the encoding distribution p(z|x) for input X, target Y and representation Z, where sudden jumps of dI(Y; Z)/d and prediction accuracy are observed with increasing . We introduce a definition for IB phase transitions as a qualitative change of the IB loss landscape, and show that the transitions correspond to the onset of learning new classes. Using second-order calculus of variations, we derive a formula that provides a practical condition for IB phase transitions, and draw its connection with the Fisher information matrix for parameterized models. We provide two perspectives to understand the formula, revealing that each IB phase transition is finding a component of maximum (nonlinear) correlation between X and Y orthogonal to the learned representation, in close analogy with canonical-correlation analysis (CCA) in linear settings. Based on the theory, we present an algorithm for discovering phase transition points. Finally, we verify that our theory and algorithm accurately predict phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent phase transitions in CIFAR10. ""","""This submission presents a theoretical study of phase transitions in IB: adjusting the IB parameter leads to step-wise behaviour of the prediction. Quoting R3: The core result is given by theorem 1: the phase transition betas necessarily satisfy an equation, where the LHS is expressed in terms of an optimal perturbation of the encoding function X->Z. This paper received a borderline review and two votes for weak accept. The main comment for the borderline review was about the rigor of a proof and the use of << symbols. The authors have updated the proof using limits as requested, addressing this primary concern. On the balance, the paper makes a strong contribution to understanding an important learning setting and a contribution to theoretical understanding of the behavior of information bottleneck predictors. """
32,"""Masked Based Unsupervised Content Transfer""",[],"""We consider the problem of translating, in an unsupervised manner, between two domains where one contains some additional information compared to the other. The proposed method disentangles the common and separate parts of these domains and, through the generation of a mask, focuses the attention of the underlying network to the desired augmentation alone, without wastefully reconstructing the entire target. This enables state-of-the-art quality and variety of content translation, as demonstrated through extensive quantitative and qualitative evaluation. Our method is also capable of adding the separate content of different guide images and domains as well as remove existing separate content. Furthermore, our method enables weakly-supervised semantic segmentation of the separate part of each domain, where only class labels are provided. Our code is available at pseudo-url. ""","""This paper extends the prior work on disentanglement and attention guided translation to instance-based unsupervised content transfer. The method is somewhat complicated, with five different networks and a multi-component loss function, however the importance of each component appears to be well justified in the ablation study. Overall the reviewers agree that the experimental section is solid and supports the proposed method well. It demonstrates good performance across a number of transfer tasks, including transfer to out-of-domain images, and that the method outperforms the baselines. For these reasons, I recommend the acceptance of this paper."""
33,"""Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks""","['imitation learning', 'hard-exploration tasks', 'exploration and exploitation']","""Imitation learning from human-expert demonstrations has been shown to be greatly helpful for challenging reinforcement learning problems with sparse environment rewards. However, it is very difficult to achieve similar success without relying on expert demonstrations. Recent works on self-imitation learning showed that imitating the agent's own past good experience could indirectly drive exploration in some environments, but these methods often lead to sub-optimal and myopic behavior. To address this issue, we argue that exploration in diverse directions by imitating diverse trajectories, instead of focusing on limited good trajectories, is more desirable for the hard-exploration tasks. We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards. Our method significantly outperforms existing self-imitation learning and count-based exploration methods on various hard-exploration tasks with local optima. In particular, we report a state-of-the-art score of more than 20,000 points on Montezumas Revenge without using expert demonstrations or resetting to arbitrary states.""","""This paper addresses the problem of exploration in challenging RL environments using self-imitation learning. The idea behind the proposed approach is for the agent to imitate a diverse set of its own past trajectories. To achieve this, the authors introduce a policy conditioned on trajectories. The proposed approach is evaluated on various domains including Atari Montezuma's Revenge and MuJoCo. Given that the evaluation is purely empirical, the major concern is in the design of experiments. The amount of stochasticity induced by the random initial state alone does not lead to convincing results regarding the performance of the proposed approach compared with baselines (e.g. Go-Explore). With such simple stochasticity, it is not clear why one could not use a model to recover from it and then rely on an existing technique like Go-Explore. Although this paper tackles an important problem (hard-exploration RL tasks), all reviewers agreed that this limitation is crucial and I therefore recommend to reject this paper."""
34,"""Mirror-Generative Neural Machine Translation""","['neural machine translation', 'generative model', 'mirror']",""" Training neural machine translation models (NMT) requires a large amount of parallel corpus, which is scarce for many language pairs. However, raw non-parallel corpora are often easy to obtain. Existing approaches have not exploited the full potential of non-parallel bilingual data either in training or decoding. In this paper, we propose the mirror-generative NMT (MGNMT), a single unified architecture that simultaneously integrates the source to target translation model, the target to source translation model, and two language models. Both translation models and language models share the same latent semantic space, therefore both translation directions can learn from non-parallel data more effectively. Besides, the translation models and language models can collaborate together during decoding. Our experiments show that the proposed MGNMT consistently outperforms existing approaches in all a variety of scenarios and language pairs, including resource-rich and low-resource languages. ""","""This paper proposes a novel method for considering translations in both directions within the framework of generative neural machine translation, significantly improving accuracy. All three reviewers appreciated the paper, although they noted that the gains were somewhat small for the increased complexity of the model. Nonetheless, the baselines presented are already quite competitive, so improvements on these datasets are likely to never be extremely large. Overall, I found this to be a quite nice paper, and strongly recommend acceptance, perhaps as an oral presentation."""
35,"""Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking""","['Adversarial examples', 'object detection', 'object tracking', 'security', 'autonomous vehicle', 'deep learning']","""Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models. However, in such visual perception pipeline the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories of surrounding obstacles. Since MOT is designed to be robust against errors in object detection, it poses a general challenge to existing attack techniques that blindly target objection detection: we find that a success rate of over 98% is needed for them to actually affect the tracking results, a requirement that no existing attack technique can satisfy. In this paper, we are the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, and discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection. Using our technique, successful AEs on as few as one single frame can move an existing object in to or out of the headway of an autonomous vehicle to cause potential safety hazards. We perform evaluation using the Berkeley Deep Drive dataset and find that on average when 3 frames are attacked, our attack can have a nearly 100% success rate while attacks that blindly target object detection only have up to 25%.""","""The authors agree after reading the rebuttal that attacks on MOT are novel. While the datasets used are small, and the attacks are generated in digital simulation rather than the physical world, this paper still demonstrates an interesting attack on a realistic system."""
36,"""StructPool: Structured Graph Pooling via Conditional Random Fields""","['Graph Pooling', 'Representation Learning', 'Graph Analysis']","""Learning high-level representations for graphs is of great importance for graph analysis tasks. In addition to graph convolution, graph pooling is an important but less explored research area. In particular, most of existing graph pooling techniques do not consider the graph structural information explicitly. We argue that such information is important and develop a novel graph pooling technique, know as the StructPool, in this work. We consider the graph pooling as a node clustering problem, which requires the learning of a cluster assignment matrix. We propose to formulate it as a structured prediction problem and employ conditional random fields to capture the relationships among assignments of different nodes. We also generalize our method to incorporate graph topological information in designing the Gibbs energy function. Experimental results on multiple datasets demonstrate the effectiveness of our proposed StructPool.""","""The paper proposed an operation called StructPool for graph-pooling by treating it as node clustering problem (assigning a label from 1..k to each node) and then use a pairwise CRF structure to jointly infer these labels. The reviewers all think that this is a well-written paper, and the experimental results are adequate to back up the claim that StructPool offers advantage over other graph-pooling operations. Even though the idea of the presented method is simple and it does add more (albeit by a constant factor) to the computational burden of graph neural network, I think this would make a valuable addition to the literature."""
37,"""Understanding the Limitations of Variational Mutual Information Estimators""",[],"""Variational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables. However, they can be difficult to use in practice due to poorly understood bias/variance tradeoffs. We theoretically show that, under some conditions, estimators such as MINE exhibit variance that could grow exponentially with the true amount of underlying MI. We also empirically demonstrate that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction. Empirical results on standard benchmark tasks demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.""","""This paper presents a critical appraisal of variational mutual information estimators, and suggests a slight variance-reducing improvement based on clipping density ratio estimates, and prove that this reduces variance (at the cost of bias). They also propose a set of criteria they term ""self-consistency"" for evaluation of MI estimators and, and show convincingly that variational MI estimators fall short with respect to these. Reviewers were generally positive about the contribution, and were happy with improvements made. While somewhat limited in scope, I believe this is nonetheless a valuable contribution to the conversation surrounding mutual information objectives that have become popular recently. I therefore recommend acceptance."""
38,"""Efficient Content-Based Sparse Attention with Routing Transformers""","['Sparse attention', 'autoregressive', 'generative models']","""Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers quadratic compute and memory requirements with respect to sequence length. Successful approaches to reduce this complexity focused on attention to local sliding windows or a small set of locations independent of content. Our work proposes to learn dynamic sparse attention patterns that avoid allocating computation and memory to attend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse attention with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention to O(n^{1.5}d) from O(n^2d) for sequence length n and hidden dimension d. We show that our model outperforms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs 18.3 perplexity) as well as on image generation on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers. Code will be open-sourced on acceptance.""","""This paper proposes a new model, the Routing Transformer, which endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention from O(n^2) to O(n^1.5). The model attained very good performance on WikiText-103 (in terms of perplexity) and similar performance to baselines (published numbers) in two other tasks. Even though the problem addressed (reducing the quadratic complexity of self-attention) is extremely relevant and the proposed approach is very intuitive and interesting, the reviewers raised some concerns, notably: - How efficient is the proposed approach in practice. Even though the theoretical complexity is reduced, more modules were introduced (e.g., forced clustering, mix of local heads and clustering heads, sorting, etc.) - Why is W_R fixed random? Since W_R is orthogonal, it's just a random (generalized) ""rotation"" (performed on the word embedding space). Does this really provide sensible ""routing""? - The experimental section can be improved to better understand the impact of the proposed method. Adding ablations, as suggested by the reviewers, would be an important part of this work. - Not clear why the work needs to be motivated through NMF, since the proposed method uses k-means. Unfortunately several points raised by the reviewers (except R2) were not addressed in the author rebuttal, and therefore it is not clear if some of the raised issues are fixable in camera ready time, which prevents me from recommend this paper to be accepted. However, I *do* think the proposed approach is very interesting and has great potential, once these points are clarified. The gains obtained in WikiText-103 are promising. Therefore, I strongly encourage the authors to resubmit this paper taking into account the suggestions made by the reviewers. """
39,"""Discourse-Based Evaluation of Language Understanding""","['Natural Language Understanding', 'Pragmatics', 'Discourse', 'Semantics', 'Evaluation', 'BERT', 'Natural Language Processing']","""New models for natural language understanding have made unusual progress recently, leading to claims of universal text representations. However, current benchmarks are predominantly targeting semantic phenomena; we make the case that discourse and pragmatics need to take center stage in the evaluation of natural language understanding. We introduce DiscEval, a new benchmark for the evaluation of natural language understanding, that unites 11 discourse-focused evaluation datasets. DiscEval can be used as supplementary training data in a multi-task learning setup, and is publicly available, alongside the code for gathering and preprocessing the datasets. Using our evaluation suite, we show that natural language inference, a widely used pretraining task, does not result in genuinely universal representations, which opens a new challenge for multi-task learning.""","""This paper proposes a new benchmark to evaluate natural language processing models on discourse-related tasks based on existing datasets that are not available in other benchmarks (SentEval/GLUE/SuperGLUE). The authors also provide a set of baselines based on BERT, ELMo, and others; and estimates of human performance for some tasks. I think this has the potential to be a valuable resource to the research community, but I am not sure that it is the best fit for a conference such as ICLR. R3 also raises a valid concern regarding the performance of fine-tuned BERT that are comparable to human estimates on half of the tasks (3 out of 5), which slightly weakens the main motivation of having this new benchmark. My main suggestion to the authors is to have a very solid motivation for the new benchmark, including the reason of inclusion for each of the tasks. I believe that this is important to encourage the community to adopt it. For something like this, it would be nice (although not necessary) to have a clean website for submission as well. I believe that someone who proposes a new benchmark needs to do as best as they can to make it easy for other people to use it. Due to the above issues and space constraint, I recommend to reject the paper."""
40,"""Learning World Graph Decompositions To Accelerate Reinforcement Learning""","['environment decomposition', 'subgoal discovery', 'generative modeling', 'reinforcement learning', 'unsupervised learning']","""Efficiently learning to solve tasks in complex environments is a key challenge for reinforcement learning (RL) agents. We propose to decompose a complex environment using a task-agnostic world graphs, an abstraction that accelerates learning by enabling agents to focus exploration on a subspace of the environment.The nodes of a world graph are important waypoint states and edges represent feasible traversals between them. Our framework has two learning phases: 1) identifying world graph nodes and edges by training a binary recurrent variational auto-encoder (VAE) on trajectory data and 2) a hierarchical RL framework that leverages structural and connectivity knowledge from the learned world graph to bias exploration towards task-relevant waypoints and regions. We show that our approach significantly accelerates RL on a suite of challenging 2D grid world tasks: compared to baselines, world graph integration doubles achieved rewards on simpler tasks, e.g. MultiGoal, and manages to solve more challenging tasks, e.g. Door-Key, where baselines fail.""","""This paper introduces an approach for structured exploration based on graph-based representations. While a number of the ideas in the paper are quite interesting and relevant to the ICLR community, the reviewers were generally in agreement about several concerns, which were discussed after the author response. These concerns include the ad-hoc nature of the approach, the limited technical novelty, and the difficulty of the experimental domains (and whether the approach could be applied to a more general class of challenging long-horizon problems such as those in prior works). Overall, the paper is not quite ready for publication at ICLR."""
41,"""Principled Weight Initialization for Hypernetworks""","['hypernetworks', 'initialization', 'optimization', 'meta-learning']","""Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner. Despite extensive applications ranging from multi-task learning to Bayesian deep learning, the problem of optimizing hypernetworks has not been studied to date. We observe that classical weight initialization methods like Glorot & Bengio (2010) and He et al. (2015), when applied directly on a hypernet, fail to produce weights for the mainnet in the correct scale. We develop principled techniques for weight initialization in hypernets, and show that they lead to more stable mainnet weights, lower training loss, and faster convergence.""","""All the reviewers agreed that this was a sensible application of mostly existing ideas from standard neural net initialization to the setting of hypernetworks. The main criticism was that this method was used to improve existing applications of hypernets, instead of extending their limits of applicability."""
42,"""Zero-Shot Out-of-Distribution Detection with Feature Correlations""","['out-of-distribution', 'gram matrices', 'classification', 'out-of-distribution detection']","""When presented with Out-of-Distribution (OOD) examples, deep neural networks yield confident, incorrect predictions. Detecting OOD examples is challenging, and the potential risks are high. In this paper, we propose to detect OOD examples by identifying inconsistencies between activity patterns and class predicted. We find that characterizing activity patterns by feature correlations and identifying anomalies in pairwise feature correlation values can yield high OOD detection rates. We identify anomalies in the pairwise feature correlations by simply comparing each pairwise correlation value with its respective range observed over the training data. Unlike many approaches, this can be used with any pre-trained softmax classifier and does not require access to OOD data for fine-tuning hyperparameters, nor does it require OOD access for inferring parameters. The method is applicable across a variety of architectures and vision datasets and generally performs better than or equal to state-of-the-art OOD detection methods, including those that do assume access to OOD examples.""","""The paper proposes a new scoring function for OOD detection based on calculating the total deviation of the pairwise feature correlations. This is an important problem that is of general interest in our community. Reviewer 2 found the paper to be clear, provided a set of weaknesses relating to lack of explanations of performance and more careful ablations, along with a set of strategies to address them. Reviewer 1 recognized the importance of being useful for pretrained networks but also raised questions of explanation and theoretical motivations. Reviewer 3 was extremely supportive, used the authors' code to highlight the difference between far-from-distribution behaviour versus near-distribution OOD examples. The authors provided detailed responses to all points raised and provided additional eidence. There was no convergence of the review recommendations. The review added much more clarity to the paper and it is no a better paper. The paper demonstrates all the features of a good paper, but unfortunately didn't yet reach the level for acceptance for the next conference. """
43,"""Global Momentum Compression for Sparse Communication in Distributed SGD""","['Distributed momentum SGD', 'Communication compression']","""With the rapid growth of data, distributed stochastic gradient descent~(DSGD) has been widely used for solving large-scale machine learning problems. Due to the latency and limited bandwidth of network, communication has become the bottleneck of DSGD when we need to train large scale models, like deep neural networks. Communication compression with sparsified gradient, abbreviated as \emph{sparse communication}, has been widely used for reducing communication cost in DSGD. Recently, there has appeared one method, called deep gradient compression~(DGC), to combine memory gradient and momentum SGD for sparse communication. DGC has achieved promising performance in practice. However, the theory about the convergence of DGC is lack. In this paper, we propose a novel method, called \emph{\underline{g}}lobal \emph{\underline{m}}omentum \emph{\underline{c}}ompression~(GMC), for sparse communication in DSGD. GMC also combines memory gradient and momentum SGD. But different from DGC which adopts local momentum, GMC adopts global momentum. We theoretically prove the convergence rate of GMC for both convex and non-convex problems. To the best of our knowledge, this is the first work that proves the convergence of distributed momentum SGD~(DMSGD) with sparse communication and memory gradient. Empirical results show that, compared with the DMSGD counterpart without sparse communication, GMC can reduce the communication cost by approximately 100 fold without loss of generalization accuracy. GMC can also achieve comparable~(sometimes better) performance compared with DGC, with an extra theoretical guarantee.""","""The author propose a method called global momentum compression for sparse communication setting, and provided some theoretical results on the convergence rate. The convergence result is interesting, but the underlying assumptions used in the analysis appear very strong. Moreover, the proposed algorithm has limited novelty as it is only a minor modification. Another main concern is that the proposed algorithm shows little performance improvement in the experiments. Moreover, more related algorithms should be included in the experimental comparison."""
44,"""Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control""","['Deep Model Learning', 'Physics-based Priors', 'Control of Mechanical Systems']","""In this paper, we introduce Symplectic ODE-Net (SymODEN), a deep learning framework which can infer the dynamics of a physical system, given by an ordinary differential equation (ODE), from observed state trajectories. To achieve better generalization with fewer training samples, SymODEN incorporates appropriate inductive bias by designing the associated computation graph in a physics-informed manner. In particular, we enforce Hamiltonian dynamics with control to learn the underlying dynamics in a transparent way, which can then be leveraged to draw insight about relevant physical aspects of the system, such as mass and potential energy. In addition, we propose a parametrization which can enforce this Hamiltonian formalism even when the generalized coordinate data is embedded in a high-dimensional space or we can only access velocity data instead of generalized momentum. This framework, by offering interpretable, physically-consistent models for physical systems, opens up new possibilities for synthesizing model-based control strategies.""","""This paper proposes a novel method for learning Hamiltonian dynamics from data. The data is obtained from systems subjected to an external control signal. The authors show the utility of their method for subsequent improved control in a reinforcement learning setting. The paper is well written, the method is derived from first principles, and the experimental validation is solid. The authors were also able to take into account the reviewers feedback and further improve their paper during the discussion period. Overall all of the reviewers agree that this is a great contribution to the field and hence I am happy to recommend acceptance."""
45,"""Learning from Explanations with Neural Execution Tree""",[],"""While deep neural networks have achieved impressive performance on a range of NLP tasks, these data-hungry models heavily rely on labeled data, which restricts their applications in scenarios where data annotation is expensive. Natural language (NL) explanations have been demonstrated very useful additional supervision, which can provide sufficient domain knowledge for generating more labeled data over new instances, while the annotation time only doubles. However, directly applying them for augmenting model learning encounters two challenges: (1) NL explanations are unstructured and inherently compositional, which asks for a modularized model to represent their semantics, (2) NL explanations often have large numbers of linguistic variants, resulting in low recall and limited generalization ability. In this paper, we propose a novel Neural Execution Tree (NExT) framework to augment training data for text classification using NL explanations. After transforming NL explanations into executable logical forms by semantic parsing, NExT generalizes different types of actions specified by the logical forms for labeling data instances, which substantially increases the coverage of each NL explanation. Experiments on two NLP tasks (relation extraction and sentiment analysis) demonstrate its superiority over baseline methods. Its extension to multi-hop question answering achieves performance gain with light annotation effort.""","""This paper proposing a framework for augmenting classification systems with explanations was very well received by two reviewers, and on reviewer labeling themselves as ""perfectly neutral"". I see no reason not to recommend acceptance."""
46,"""Effective Use of Variational Embedding Capacity in Expressive End-to-End Speech Synthesis""","['Speech Synthesis', 'Deep Generative Models', 'Latent Variable Models', 'Unsupervised Representation Learning']","""Recent work has explored sequence-to-sequence latent variable models for expressive speech synthesis (supporting control and transfer of prosody and style), but has not presented a coherent framework for understanding the trade-offs between the competing methods. In this paper, we propose embedding capacity (the amount of information the embedding contains about the data) as a unified method of analyzing the behavior of latent variable models of speech, comparing existing heuristic (non-variational) methods to variational methods that are able to explicitly constrain capacity using an upper bound on representational mutual information. In our proposed model (Capacitron), we show that by adding conditional dependencies to the variational posterior such that it matches the form of the true posterior, the same model can be used for high-precision prosody transfer, text-agnostic style transfer, and generation of natural-sounding prior samples. For multi-speaker models, Capacitron is able to preserve target speaker identity during inter-speaker prosody transfer and when drawing samples from the latent prior. Lastly, we introduce a method for decomposing embedding capacity hierarchically across two sets of latents, allowing a portion of the latent variability to be specified and the remaining variability sampled from a learned prior. Audio examples are available on the web.""","""This paper investigates variational models of speech for synthesis, and in particular ways of making them more controllable for a variety of synthesis tasks (e.g. prosody transfer, style transfer). They propose to do this via a modified VAE objective that imposes a learnable weight on the KL term, as well as using a hierarchical decomposition of latent variables. The paper shows promising results and includes a good amount of analysis, and should be very interesting for speech synthesis researchers. However, there is not much novelty from a machine learning perspective. Therefore, I think the paper is not a great fit for ICLR and is better suited for a speech conference/journal."""
47,"""Angular Visual Hardness""","['angular similarity', 'self-training', 'hard samples mining']","""The mechanisms behind human visual systems and convolutional neural networks (CNNs) are vastly different. Hence, it is expected that they have different notions of ambiguity or hardness. In this paper, we make a surprising discovery: there exists a (nearly) universal score function for CNNs whose correlation with human visual hardness is statistically significant. We term this function as angular visual hardness (AVH) and in a CNN, it is given by the normalized angular distance between a feature embedding and the classifier weights of the corresponding target category. We conduct an in-depth scientific study. We observe that CNN models with the highest accuracy also have the best AVH scores. This agrees with an earlier finding that state-of-art models tend to improve on classification of harder training examples. We find that AVH displays interesting dynamics during training: it quickly reaches a plateau even though the training loss keeps improving. This suggests the need for designing better loss functions that can target harder examples more effectively. Finally, we empirically show significant improvement in performance by using AVH as a measure of hardness in self-training tasks. ""","""This paper proposes a new measure for CNN and show its correlation to human visual hardness. The topic of this paper is interesting, and it sparked many interesting discussions among reviews. After reviewing each others comments, reviewers decided to recommend reject due to a few severe concerns that are yet to be address. In particular, reviewer 1 and 2 both raised concerns about potentially misleading and perhaps confusing statements around the correlation between HSF and accuracy. A concrete step was suggested by a reviewer - reporting correlation between accuracy and HSF. A few other points were raised around its conflict/agreement with prior work [RRSS19], or self-contradictory statements as pointed out by Reviewer 1 and 2 (see reviewer 2s comment). We hope authors would use this helpful feedback to improve the paper for the future submission. """
48,"""On importance-weighted autoencoders""","['variational inference', 'autoencoders', 'importance sampling']","""The importance weighted autoencoder (IWAE) (Burda et al., 2016) is a popular variational-inference method which achieves a tighter evidence bound (and hence a lower bias) than standard variational autoencoders by optimising a multi-sample objective, i.e. an objective that is expressible as an integral over > 1 Monte Carlo samples. Unfortunately, IWAE crucially relies on the availability of reparametrisations and even if these exist, the multi-sample objective leads to inference-network gradients which break down as pseudo-formula is increased (Rainforth et al., 2018). This breakdown can only be circumvented by removing high-variance score-function terms, either by heuristically ignoring them (which yields the 'sticking-the-landing' IWAE (IWAE-STL) gradient from Roeder et al. (2017)) or through an identity from Tucker et al. (2019) (which yields the 'doubly-reparametrised' IWAE (IWAE-DREG) gradient). In this work, we argue that directly optimising the proposal distribution in importance sampling as in the reweighted wake-sleep (RWS) algorithm from Bornschein & Bengio (2015) is preferable to optimising IWAE-type multi-sample objectives. To formalise this argument, we introduce an adaptive-importance sampling framework termed adaptive importance sampling for learning (AISLE) which slightly generalises the RWS algorithm. We then show that AISLE admits IWAE-STL and IWAE-DREG (i.e. the IWAE-gradients which avoid breakdown) as special cases.""","""The authors argue that directly optimizing the IS proposal distribution as in RWS is preferable to optimizing the IWAE multi-sample objective. They formalize this with an adaptive IS framework, AISLE, that generalizes RWS, IWAE-STL and IWAE-DREG. Generally reviewers found the paper to be well-written and the connections drawn in this paper interesting. However, all reviewers raised concerns about the lack of experiments (Reviewer 3 suggested several experiments that could be done to clarify remaining questions) and practical takeaways. The authors responded by explaining that ""the main ""practical"" takeaway from our work is the following: If one is interested in the bias-reduction potential offered by IWAEs over plain VAEs then the adaptive importance-sampling framework appears to be a better starting point for designing new algorithms than the specific multi-sample objective used by IWAE. This is because the former retains all of the benefits of the latter without inheriting its drawbacks."" I did not find this argument convincing as a primary advantage of variational approaches over WS is that the variational approach optimizes a unified objective. At least in principle, this is a serious drawback of the WS approaches. Experiments and/or a discussion of this is warranted. This paper is borderline, and unfortunately, due to the high number of quality submissions this year, I have to recommend rejection at this point. """
49,"""Symplectic Recurrent Neural Networks""","['Hamiltonian systems', 'learning physical laws', 'symplectic integrators', 'recurrent neural networks', 'inverse problems']","""We propose Symplectic Recurrent Neural Networks (SRNNs) as learning algorithms that capture the dynamics of physical systems from observed trajectories. SRNNs model the Hamiltonian function of the system by a neural networks, and leverage symplectic integration, multiple-step training and initial state optimization to address the challenging numerical issues associated with Hamiltonian systems. We show SRNNs succeed reliably on complex and noisy Hamiltonian systems. Finally, we show how to augment the SRNN integration scheme in order to handle stiff dynamical systems such as bouncing billiards.""","""This paper proposes a novel architecture for learning Hamiltonian dynamics from data. The model outperforms the existing state of the art Hamiltonian Neural Networks on challenging physical datasets. It also goes further by proposing a way to deal with observation noise and a way to model stiff dynamical systems, like bouncing balls. The paper is well written, the model works well and the experimental evaluation is solid. All reviewers agree that this is an excellent contribution to the field, hence I am happy to recommend acceptance as an oral."""
50,"""SpikeGrad: An ANN-equivalent Computation Model for Implementing Backpropagation with Spikes""","['spiking neural network', 'neuromorphic engineering', 'backpropagation']","""Event-based neuromorphic systems promise to reduce the energy consumption of deep neural networks by replacing expensive floating point operations on dense matrices by low energy, sparse operations on spike events. While these systems can be trained increasingly well using approximations of the backpropagation algorithm, this usually requires high precision errors and is therefore incompatible with the typical communication infrastructure of neuromorphic circuits. In this work, we analyze how the gradient can be discretized into spike events when training a spiking neural network. To accelerate our simulation, we show that using a special implementation of the integrate-and-fire neuron allows us to describe the accumulated activations and errors of the spiking neural network in terms of an equivalent artificial neural network, allowing us to largely speed up training compared to an explicit simulation of all spike events. This way we are able to demonstrate that even for deep networks, the gradients can be discretized sufficiently well with spikes if the gradient is properly rescaled. This form of spike-based backpropagation enables us to achieve equivalent or better accuracies on the MNIST and CIFAR10 datasets than comparable state-of-the-art spiking neural networks trained with full precision gradients. The algorithm, which we call SpikeGrad, is based on only accumulation and comparison operations and can naturally exploit sparsity in the gradient computation, which makes it an interesting choice for a spiking neuromorphic systems with on-chip learning capacities.""","""This paper proposes a learning framework for spiking neural networks that exploits the sparsity of the gradient during backpropagation to reduce the computational cost of training. The method is evaluated against prior works that use full precision gradients and shown comparable performance. Overall, the contribution of the paper is solid, and after a constructive rebuttal cycle, all reviewers reached a consensus of weak accept. Therefore, I recommend accepting this submission."""
51,"""Generalized Clustering by Learning to Optimize Expected Normalized Cuts""","['Clustering', 'Normalized cuts', 'Generalizability']","""We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity as well as inter-cluster dissimilarity. We define a differentiable loss function equivalent to the expected normalized cuts. Unlike much of the work in unsupervised deep learning, our trained model directly outputs final cluster assignments, rather than embeddings that need further processing to be usable. Our approach generalizes to unseen datasets across a wide variety of domains, including text, and image. Specifically, we achieve state-of-the-art results on popular unsupervised clustering benchmarks (e.g., MNIST, Reuters, CIFAR-10, and CIFAR-100), outperforming the strongest baselines by up to 10.9%. Our generalization results are superior (by up to 21.9%) to the recent top-performing clustering approach with the ability to generalize.""","""This paper proposes a deep clustering method based on normalized cuts. As the general idea of deep clustering has been investigated a fair bit, the reviewers suggest a more thorough empirical validation. Myself, I would also like further justification of many of the choices within the algorithm, the effect of changing the architecture."""
52,"""The Ingredients of Real World Robotic Reinforcement Learning""","['Reinforcement Learning', 'Robotics']","""The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning. In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system. Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges. We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.""","""This is a very interesting paper which discusses practical issues and solutions around deploying RL on real physical robotic systems, specifically involving questions on the use of raw sensory data, crafting reward functions, and not having resets at the end of episodes. Many of the issues raised in the reviews and discussion were concerned with experimental details and settings, as well as relation to different areas of related work. These were all sufficiently handled in the rebuttal, and all reviewers were in favour of acceptance."""
53,"""Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks""","['neural tangent kernel', 'polylogarithmic width', 'test error', 'gradient descent', 'classification']","""Recent theoretical work has guaranteed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size pseudo-formula , the (inverse) target error pseudo-formula , and the (inverse) failure probability pseudo-formula . This work shows that pseudo-formula iterations of gradient descent with pseudo-formula training examples on two-layer ReLU networks of any width exceeding pseudo-formula suffice to achieve a test misclassification error of pseudo-formula . We also prove that stochastic gradient descent can achieve pseudo-formula test error with polylogarithmic width and pseudo-formula samples. The analysis relies upon the separation margin of the limiting kernel, which is guaranteed positive, can distinguish between true labels and random labels, and can give a tight sample-complexity analysis in the infinite-width setting.""","""This paper studies how much overparameterization is required to achieve zero training error via gradient descent in one hidden layer neural nets. In particular the paper studies the effect of margin in data on the required amount of overparameterization. While the paper does not improve in the worse case in the presence of margin the paper shows that sometimes even logarithmic width is sufficient. The reviewers all seem to agree that this is a nice paper but had a few mostly technical concerns. These concerns were sufficiently addressed in the response. Based on my own reading I also find the paper to be interesting, well written with clever proofs. So I recommend acceptance. I would like to make a suggestion that the authors do clarify in the abstract intro that this improvement can not be achieved in the worst case as a shallow reading of the manuscript may cause some confusion (that logarithmic width suffices in general)."""
54,"""Collaborative Inter-agent Knowledge Distillation for Reinforcement Learning""","['Reinforcement learning', 'distillation']","""Reinforcement Learning (RL) has demonstrated promising results across several sequential decision-making tasks. However, reinforcement learning struggles to learn efficiently, thus limiting its pervasive application to several challenging problems. A typical RL agent learns solely from its own trial-and-error experiences, requiring many experiences to learn a successful policy. To alleviate this problem, we propose collaborative inter-agent knowledge distillation (CIKD). CIKD is a learning framework that uses an ensemble of RL agents to execute different policies in the environment while sharing knowledge amongst agents in the ensemble. Our experiments demonstrate that CIKD improves upon state-of-the-art RL methods in sample efficiency and performance on several challenging MuJoCo benchmark tasks. Additionally, we present an in-depth investigation on how CIKD leads to performance improvements. ""","""The paper introduces an ensemble of RL agents that share knowledge amongst themselves. Because there are no theoretical results, the experiments have to carry the paper. The reviewers had rather different views on the significance of these experiments and whether they are sufficient to convincingly validate the learning framework introduced. Overall, because of the high bar for ICLR acceptance, this paper falls just below the threshold. """
55,"""Learning Underlying Physical Properties From Observations For Trajectory Prediction""","['Physical Games', 'Deep Learning', 'Physical Reasoning', 'Transfer of Knowledge']","""In this work we present an approach that combines deep learning together with laws of Newtons physics for accurate trajectory predictions in physical games. Our model learns to estimate physical properties and forces that generated given observations, learns the relationships between available players actions and estimated physical properties and uses these extracted forces for predictions. We show the advantages of using physical laws together with deep learning by evaluating it against two baseline models that automatically discover features from the data without such a knowledge. We evaluate our model abilities to extract physical properties and to generalize to unseen trajectories in two games with a shooting mechanism. We also evaluate our model capabilities to transfer learned knowledge from a 2D game for predictions in a 3D game with a similar physics. We show that by using physical laws together with deep learning we achieve a better human-interpretability of learned physical properties, transfer of knowledge to a game with similar physics and very accurate predictions for previously unseen data.""","""This paper aims to estimate the parameters of a projectile physical equation from a small number of trajectory observations in two computer games. The authors demonstrate that their method works, and that the learnt model generalises from one game to another. However, the reviewers had concerns about the simplicity of the tasks, the longer term value of the proposed method to the research community, and the writing of the paper. During the discussion period, the authors were able to address some of these questions, however many other points were left unanswered, and the authors did not modify the paper to reflect the reviewers feedback. Hence, in the current state this paper appears more suitable for a workshop rather than a conference, and I recommend rejection."""
56,"""Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games""",[],"""We study discrete-time mean-field Markov games with infinite numbers of agents where each agent aims to minimize its ergodic cost. We consider the setting where the agents have identical linear state transitions and quadratic cost func- tions, while the aggregated effect of the agents is captured by the population mean of their states, namely, the mean-field state. For such a game, based on the Nash certainty equivalence principle, we provide sufficient conditions for the existence and uniqueness of its Nash equilibrium. Moreover, to find the Nash equilibrium, we propose a mean-field actor-critic algorithm with linear function approxima- tion, which does not require knowing the model of dynamics. Specifically, at each iteration of our algorithm, we use the single-agent actor-critic algorithm to approximately obtain the optimal policy of the each agent given the current mean- field state, and then update the mean-field state. In particular, we prove that our algorithm converges to the Nash equilibrium at a linear rate. To the best of our knowledge, this is the first success of applying model-free reinforcement learn- ing with function approximation to discrete-time mean-field Markov games with provable non-asymptotic global convergence guarantees.""","""The authors propose an actor-critic method for finding Nash equilibrium in linear-quadratic mean field games and establish linear convergence under some assumptions. There were some minor concerns about motivation and clarity, especially with regards to the simulator. In an extensive and interactive rebuttal, the authors were able to argue that their results/methods, which appear to be rather specialized to the LQ setting, offer insight/methods beyond the LQ setting."""
57,"""DSReg: Using Distant Supervision as a Regularizer""",[],"""In this paper, we aim at tackling a general issue in NLP tasks where some of the negative examples are highly similar to the positive examples, i.e., hard-negative examples). We propose the distant supervision as a regularizer (DSReg) approach to tackle this issue. We convert the original task to a multi-task learning problem, in which we first utilize the idea of distant supervision to retrieve hard-negative examples. The obtained hard-negative examples are then used as a regularizer, and we jointly optimize the original target objective of distinguishing positive examples from negative examples along with the auxiliary task objective of distinguishing soften positive examples (comprised of positive examples and hard-negative examples) from easy-negative examples. In the neural context, this can be done by feeding the final token representations to different output layers. Using this unbelievably simple strategy, we improve the performance of a range of different NLP tasks, including text classification, sequence labeling and reading comprehension. ""","""This paper proposes a way to handle the hard-negative examples (those very close to positive ones) in NLP, using a distant supervision approach that serves as a regularization. The paper addresses an important issue and is well written; however, reviewers pointed put several concerns, including testing the approach on the state-of-art neural nets, and making experiments more convincing by testing on larger problems. """
58,"""Address2vec: Generating vector embeddings for blockchain analytics""","['crypto-currency', 'bitcoin', 'blockchain', '2vec']","""Bitcoin is a virtual coinage system that enables users to trade virtually free of a central trusted authority. All transactions on the Bitcoin blockchain are publicly available for viewing, yet as Bitcoin is built mainly for security its original structure does not allow for direct analysis of address transactions. Existing analysis methods of the Bitcoin blockchain can be complicated, computationally expensive or inaccurate. We propose a computationally efficient model to analyze bitcoin blockchain addresses and allow for their use with existing machine learning algorithms. We compare our approach against Multi Level Sequence Learners (MLSLs), one of the best performing models on bitcoin address data.""","""The paper propose to analyze bitcoin addresses using graph embeddings. The reviewers found that the paper was too incomplete for publication. Important information such as a description of datasets and metrics was omitted."""
59,"""SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference""","['machine learning', 'reinforcement learning', 'scalability', 'distributed', 'DeepMind Lab', 'ALE', 'Atari-57', 'Google Research Football']","""We present a modern scalable reinforcement learning agent called SEED (Scalable, Efficient Deep-RL). By effectively utilizing modern accelerators, we show that it is not only possible to train on millions of frames per second but also to lower the cost. of experiments compared to current methods. We achieve this with a simple architecture that features centralized inference and an optimized communication layer. SEED adopts two state-of-the-art distributed algorithms, IMPALA/V-trace (policy gradients) and R2D2 (Q-learning), and is evaluated on Atari-57, DeepMind Lab and Google Research Football. We improve the state of the art on Football and are able to reach state of the art on Atari-57 twice as fast in wall-time. For the scenarios we consider, a 40% to 80% cost reduction for running experiments is achieved. The implementation along with experiments is open-sourced so results can be reproduced and novel ideas tried out.""","""The paper presents a framework for scalable Deep-RL on really large-scale architecture, which addresses several problems on multi-machine training of such systems with many actors and learners running. Large-scale experiments and impovements over IMPALA are presented, leading to new SOTA results. The reviewers are very positive over this work, and I think this is an important contribution to the overall learning / RL community."""
60,"""FSPool: Learning Set Representations with Featurewise Sort Pooling""","['set auto-encoder', 'set encoder', 'pooling']","""Traditional set prediction models can struggle with simple datasets due to an issue we call the responsibility problem. We introduce a pooling method for sets of feature vectors based on sorting features across elements of the set. This can be used to construct a permutation-equivariant auto-encoder that avoids this responsibility problem. On a toy dataset of polygons and a set version of MNIST, we show that such an auto-encoder produces considerably better reconstructions and representations. Replacing the pooling function in existing set encoders with FSPool improves accuracy and convergence speed on a variety of datasets.""","""Overall, this paper got strong scores from the reviewers (2 accepts and 1 weak accept). The paper proposes to address the responsibility problem, enabling encoding and decoding sets without worrying about permutations. This is achieved using permutation-equivariant set autoencoders and an 'inverse' operation that undoes the sorting in the decoder. The reviewers all agreed that the paper makes a meaningful contribution and should be accepted. Some concerns regarding clarity of exposition were initially raised but were addressed during the rebuttal period. I recommend that the paper be accepted."""
61,"""Enhanced Convolutional Neural Tangent Kernels""","['neural tangent kernel', 'data augmentation', 'global average pooling', 'kernel regression', 'deep learning theory', 'kernel design']","""Recent research shows that for training with l2 loss, convolutional neural networks (CNNs) whose width (number of channels in convolutional layers) goes to infinity, correspond to regression with respect to the CNN Gaussian Process kernel (CNN-GP) if only the last layer is trained, and correspond to regression with respect to the Convolutional Neural Tangent Kernel (CNTK) if all layers are trained. An exact algorithm to compute CNTK (Arora et al., 2019) yielded the finding that classification accuracy of CNTK on CIFAR-10 is within 6-7% of that of the corresponding CNN architecture (best figure being around 78%) which is interesting performance for a fixed kernel. Here we show how to significantly enhance the performance of these kernels using two ideas. (1) Modifying the kernel using a new operation called Local Average Pooling (LAP) which preserves efficient computability of the kernel and inherits the spirit of standard data augmentation using pixel shifts. Earlier papers were unable to incorporate naive data augmentation because of the quadratic training cost of kernel regression. This idea is inspired by Global Average Pooling (GAP), which we show for CNN-GP and CNTK, GAP is equivalent to full translation data augmentation. (2) Representing the input image using a pre-processing technique proposed by Coates et al. (2011), which uses a single convolutional layer composed of random image patches. On CIFAR-10 the resulting kernel, CNN-GP with LAP and horizontal flip data augmentation achieves 89% accuracy, matching the performance of AlexNet (Krizhevsky et al., 2012). Note that this is the best such result we know of for a classifier that is not a trained neural network. Similar improvements are obtained for Fashion-MNIST.""","""This paper was assessed by three reviewers who scored it as 6/3/6. The reviewers liked some aspects of this paper e.g., a good performance, but they also criticized some aspects of work such as inventing new names for existing pooling operators, observation that large parts of improvements come from the pre-processing step rather than the proposed method, suspected overfitting. Taking into account all positives and negatives, AC feels that while the proposed idea has some positives, it also falls short of the quality required by ICLR2020, thus it cannot be accepted at this time. AC strongly encourages authors to go through all comments (especially these negative ones), address them and resubmit an improved version to another venue. """
62,"""Incorporating BERT into Neural Machine Translation""","['BERT', 'Neural Machine Translation']","""The recently proposed BERT (Devlin et al., 2019) has shown great power on a variety of natural language understanding tasks, such as text classification, reading comprehension, etc. However, how to effectively apply BERT to neural machine translation (NMT) lacks enough exploration. While BERT is more commonly used as fine-tuning instead of contextual embedding for downstream language understanding tasks, in NMT, our preliminary exploration of using BERT as contextual embedding is better than using for fine-tuning. This motivates us to think how to better leverage BERT for NMT along this direction. We propose a new algorithm named BERT-fused model, in which we first use BERT to extract representations for an input sequence, and then the representations are fused with each layer of the encoder and decoder of the NMT model through attention mechanisms. We conduct experiments on supervised (including sentence-level and document-level translations), semi-supervised and unsupervised machine translation, and achieve state-of-the-art results on seven benchmark datasets. Our code is available at pseudo-url""","""The authors propose a novel way of incorporating a large pretrained language model (BERT) into neural machine translation using an extra attention model for both the NMT encoder and decoder. The paper presents thorough experimental design, with strong baselines and consistent positive results for supervised, semi-supervised and unsupervised experiments. The reviewers all mentioned lack of clarity in the writing and there was significant discussion with the authors. After improvements and clarifications, all reviewers agree that this paper would make a good contribution to ICLR and be of general use to the field. """
63,"""Efficient meta reinforcement learning via meta goal generation""",[],"""Meta reinforcement learning (meta-RL) is able to accelerate the acquisition of new tasks by learning from past experience. Current meta-RL methods usually learn to adapt to new tasks by directly optimizing the parameters of policies over primitive actions. However, for complex tasks which requires sophisticated control strategies, it would be quite inefficient to to directly learn such a meta-policy. Moreover, this problem can become more severe and even fail in spare reward settings, which is quite common in practice. To this end, we propose a new meta-RL algorithm called meta goal-generation for hierarchical RL (MGHRL) by leveraging hierarchical actor-critic framework. Instead of directly generate policies over primitive actions for new tasks, MGHRL learns to generate high-level meta strategies over subgoals given past experience and leaves the rest of how to achieve subgoals as independent RL subtasks. Our empirical results on several challenging simulated robotics environments show that our method enables more efficient and effective meta-learning from past experience and outperforms state-of-the-art meta-RL and Hierarchical-RL methods in sparse reward settings.""","""This paper combines PEARL with HAC to create a hierarchical meta-RL algorithm that operates on goals at the high level and learns low-level policies to reach those goals. Reviewers remarked that its well-presented and well-organized, with enough details to be mostly reproducible. In the experiments conducted, it appears to show strong results. However there was strong consensus on two major weaknesses that render this paper unpublishable in its current form: 1) the continuous control tasks used dont seem to require hierarchy, and 2) the baselines dont appear to be appropriate. Reviewers remarked that a vital missing baseline is HER, and that its unfair to compare to PEARL, which is a more general meta-RL algorithm. The authors dont appear to have made revisions in response to these concerns. All reviewers made useful and constructive comments, and I urge the authors to take them into consideration when revising for a future submission."""
64,"""SINGLE PATH ONE-SHOT NEURAL ARCHITECTURE SEARCH WITH UNIFORM SAMPLING""","['Neural Architecture Search', 'Single Path']","""We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its advantages over existing NAS approaches. Existing one-shot method (Benderet al., 2018), however, is hard to train and not yet effective on large scale datasets like ImageNet. This work propose a Single Path One-Shot model to address the challenge in the training. Our central idea is to construct a simplified supernet, where all architectures are single paths so that weight co-adaption problem is alleviated. Training is performed by uniform path sampling. All architectures (and their weights) are trained fully and equally. Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search. It effortlessly supports complex search spaces(e.g., building blocks, channel, mixed-precision quantization) and different search constraints (e.g., FLOPs, latency). It is thus convenient to use for various needs. It achieves start-of-the-art performance on the large dataset ImageNet.""","""This paper introduces a simple NAS method based on sampling single paths of the one-shot model based on a uniform distribution. Next to the private discussion with reviewers, I read the paper in detail. During the discussion, first, the reviewer who gave a weak reject upgraded his/her score to a weak accept since all reviewers appreciated the importance of neural architecture search and that the authors' approach is plausible. Then, however, it surfaced that the main claim of novelty in the paper, namely the uniform sampling of paths with weight-sharing, is not novel: Li & Talwalkar already introduced a uniform random sampling of paths with weight-sharing in the one-shot model in their paper ""Random Search and Reproducibility in NAS"" (pseudo-url), which was on arXiv since February 2019 and has been published at UAI 2019. This was their method ""RandomNAS with weight sharing"". The authors actually cite that paper but do not mention RandomNAS with weight sharing. This may be because their paper also has been on arXiv since March 2019 (6 weeks after the one above), and was therefore likely parallel work. Nevertheless, now, 9 months later, the situation has changed, and the authors should at least point out in their paper that they were not the first to introduce RandomNAS with weight sharing during the search, but that they rather study the benefits of that previously-introduced method. The only real novelty in terms of NAS methods that the authors provide is to use a genetic algorithm to select the architecture with the best one-shot model performance, rather than random search. This is a relatively minor contribution, discussed literally in a single paragraph in the paper (with missing details about the crossover operator used; please fill these in). Also, this step is very cheap, so one could potentially just run random search longer. Finally, the comparison presented may be unfair: evolution uses a population size of 50, and Figure 2 plots iterations. It is unclear whether each iteration for random search also evaluated 50 samples; if not, then evolution got 50x more samples than random search. The authors should fix this in a new version of the paper. The paper also appears to make some wrong claims in Section 2. 
For example, the authors write that gradient-based NAS methods like DARTS inherit the one-shot weights and fine-tune the discretized architectures, but all methods I know of actually retrain from scratch rather than fine-tune. Also, equation (3) is not what DARTS does; DARTS performs a bi-level optimization. In Section 3, the authors say that their single-path strategy corresponds to a dropout rate of 1. I do not think that this is correct, since a dropout rate of 1 drops every connection (and does not leave one remaining). All of these issues should be rectified. The paper reports good results on ImageNet. Unfortunately, these may well be due to using a better training pipeline than other works, rather than due to a better NAS method (no code is available, so there is no way to verify this). On the other hand, the application to mixed-precision quantization is novel and interesting. AnonReviewer2 asked about the correlation of the one-shot performance and the final evaluation performance, and this question was not answered properly by the authors. This question is relevant, because this correlation has been shown to be very low in several works (e.g., Sciuto et al.: ""Evaluating the search phase of Neural Architecture Search"" (pseudo-url), on arXiv since February 2019 and a parallel ICLR submission). In those cases, the proposed approach would definitely not work. The high scores the reviewers gave were based on the understanding that uniform sampling in the one-shot model was a novel contribution of this paper. Adjusting for that, the real score is much lower and right at the acceptance threshold. After a discussion with the PCs, due to limited capacity, the recommendation is to reject the current version. I encourage the authors to address the issues identified by the reviewers and in this meta-review and to submit to a future venue. """
65,"""Robust And Interpretable Blind Image Denoising Via Bias-Free Convolutional Neural Networks""","['denoising', 'overfitting', 'generalization', 'robustness', 'interpretability', 'analysis of neural networks']","""We study the generalization properties of deep convolutional neural networks for image denoising in the presence of varying noise levels. We provide extensive empirical evidence that current state-of-the-art architectures systematically overfit to the noise levels in the training set, performing very poorly at new noise levels. We show that strong generalization can be achieved through a simple architectural modification: removing all additive constants. The resulting ""bias-free"" networks attain state-of-the-art performance over a broad range of noise levels, even when trained over a limited range. They are also locally linear, which enables direct analysis with linear-algebraic tools. We show that the denoising map can be visualized locally as a filter that adapts to both image structure and noise level. In addition, our analysis reveals that deep networks implicitly perform a projection onto an adaptively-selected low-dimensional subspace, with dimensionality inversely proportional to noise level, that captures features of natural images. ""","""This paper focuses on studying neural network-based denoising methods. The paper makes the interesting observation that most existing denoising approaches have a tendency to overfit to knowledge of the noise level. The authors claim that simply removing the bias on the network parameters enables a variety of improvements in this regard and provide some theoretical justification for their results. The reviewers were mostly postive but raised some concerns about generalization beyond Gaussian noise and not ""being very well theoretically motivated"". These concerns seem to have at least partially been alleviated during the discussion period. I agree with the reviewers. I think the paper looks at an important phenomena for denoising (role of variance parameter) and is well suited to ICLR. I recommend acceptance. I suggest that the authors continue to further improve the paper based on the reviewers' comments."""
66,"""Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets""","['Adversarial Example', 'Transferability', 'Skip Connection', 'Neural Network']","""Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs) such as ResNet, WideResNet, DenseNet, and ResNeXt. Despite their huge success in building deeper and more powerful DNNs, we identify a surprising \emph{security weakness} of skip connections in this paper. Use of skip connections \textit{allows easier generation of highly transferable adversarial examples}. Specifically, in ResNet-like (with skip connections) neural networks, gradients can backpropagate through either skip connections or residual modules. We find that using more gradients from the skip connections rather than the residual modules according to a decay factor, allows one to craft adversarial examples with high transferability. Our method is termed \emph{Skip Gradient Method} (SGM). We conduct comprehensive transfer attacks against state-of-the-art DNNs including ResNets, DenseNets, Inceptions, Inception-ResNet, Squeeze-and-Excitation Network (SENet) and robustly trained DNNs. We show that employing SGM on the gradient flow can greatly improve the transferability of crafted attacks in almost all cases. Furthermore, SGM can be easily combined with existing black-box attack techniques, and obtain high improvements over state-of-the-art transferability methods. Our findings not only motivate new research into the architectural vulnerability of DNNs, but also open up further challenges for the design of secure DNN architectures.""","""This paper makes the observation that, by adjusting the ratio of gradients from skip connections and residual connections in ResNet-family networks in a projected gradient descent attack (that is, upweighting the contribution of the skip connection gradient), one can obtain more transferable adversarial examples. This is evaluated empirically in the single-model black box transfer setting, against a wide range of models, both with and without countermeasures. Reviewers praised the novelty and simplicity of the method, the breadth of empirical results, and the review of related work. Concerns were raised regarding a lack of variance reporting, strength of the baselines vs. numbers reported in the literature, and the lack of consideration paid to the threat model under which an adversary employs an ensemble of source models, as well as the framing given by the original title and abstract. All of these appear to have been satisfactorily addressed, in a fine example of what ICLR's review & revision process can yield. It is therefore my pleasure to recommend acceptance."""
67,"""SoftLoc: Robust Temporal Localization under Label Misalignment""","['deep learning', 'temporal localization', 'robustness', 'label misalignment', 'music', 'time series']","""This work addresses the long-standing problem of robust event localization in the presence of temporally of misaligned labels in the training data. We propose a novel versatile loss function that generalizes a number of training regimes from standard fully-supervised cross-entropy to count-based weakly-supervised learning. Unlike classical models which are constrained to strictly fit the annotations during training, our soft localization learning approach relaxes the reliance on the exact position of labels instead. Training with this new loss function exhibits strong robustness to temporal misalignment of labels, thus alleviating the burden of precise annotation of temporal sequences. We demonstrate state-of-the-art performance against standard benchmarks in a number of challenging experiments and further show that robustness to label noise is not achieved at the expense of raw performance. ""","""Main content: Blind review #3 summarizes it well: This paper proposes a new loss for training models that predict where events occur in a sequence when the training sequence has noisy labels. The central idea is to smooth the label sequence and prediction sequence and compare these rather than to force the model to treat all errors as equally serious. The proposed problem seems sensible, and the method is a reasonable approach. The evaluations are carried out on a variety of different tasks (piano onset detection, drum detection, smoking detection, video action segmentation). -- Discussion: The reviewers were concerned about the relatively low level of novelty, simplicity of the proposed approach (which the authors argue could be seen as a feature rather than a flaw, given its good performance), and inadequate motivation. -- Recommendation and justification: After the authors' revision in response to the reviews, this paper could be a weak accept if not for the large number of stronger submissions."""
68,"""Single Episode Policy Transfer in Reinforcement Learning""","['transfer learning', 'reinforcement learning']","""Transfer and adaptation to new unknown environmental dynamics is a key challenge for reinforcement learning (RL). An even greater challenge is performing near-optimally in a single attempt at test time, possibly without access to dense rewards, which is not addressed by current methods that require multiple experience rollouts for adaptation. To achieve single episode transfer in a family of environments with related dynamics, we propose a general algorithm that optimizes a probe and an inference model to rapidly estimate underlying latent variables of test dynamics, which are then immediately used as input to a universal control policy. This modular approach enables integration of state-of-the-art algorithms for variational inference or RL. Moreover, our approach does not require access to rewards at test time, allowing it to perform in settings where existing adaptive approaches cannot. In diverse experimental domains with a single episode test constraint, our method significantly outperforms existing adaptive approaches and shows favorable performance against baselines for robust transfer.""","""This is an interesting paper that is concerned with single episode transfer to reinforcement learning problems with different dynamics models, assuming they are parameterised by a latent variable. Given some initial training tasks to learn about this parameter, and a new test task, they present an algorithm to probe and estimate the latent variable on the test task, whereafter the inferred latent variable is used as input to a control policy. There were several issues raised by the reviewers. Firstly, there were questions with the number of runs and the baseline implementations, which were all addressed in the rebuttals. Then, there were questions around the novelty and the main contribution being wall-clock time. These issues were also adequately addressed. In light of this, I recommend acceptance of this paper."""
69,"""Learning Compositional Koopman Operators for Model-Based Control""","['Koopman operators', 'graph neural networks', 'compositionality']","""Finding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis. The Koopman operator theory lays the foundation for identifying the nonlinear-to-linear coordinate transformations with data-driven methods. Recently, researchers have proposed to use deep neural networks as a more expressive class of basis functions for calculating the Koopman operators. These approaches, however, assume a fixed dimensional state space; they are therefore not applicable to scenarios with a variable number of objects. In this paper, we propose to learn compositional Koopman operators, using graph neural networks to encode the state into object-centric embeddings and using a block-wise linear transition matrix to regularize the shared structure across objects. The learned dynamics can quickly adapt to new environments of unknown physical parameters and produce control signals to achieve a specified goal. Our experiments on manipulating ropes and controlling soft robots show that the proposed method has better efficiency and generalization ability than existing baselines.""","""This paper proposes using object-centered graph neural network embeddings of a dynamical system as approximate Koopman embeddings, and then learning the linear transition matrix to model the dynamics of the system according to the Koopman operator theory. The authors propose adding an inductive bias (a block diagonal structure of the transition matrix with shared components) to limit the number of parameters necessary to learn, which improves the computational efficiency and generalisation of the proposed approach. The authors also propose adding an additional input component that allows for external control of the dynamics of the system. The reviewers initially had concerns about the experimental section, since the approach was only tested on toy domains. The reviewers also asked for more baselines. The authors were able to answer some of the questions raised during the discussion period, and by the end of it all reviewers agreed that this is a solid and novel piece of work that deserves to be accepted. For this reason I recommend acceptance."""
70,"""Universal Learning Approach for Adversarial Defense""","['Adversarial examples', 'Adversarial training', 'Universal learning', 'pNML for DNN']","""Adversarial attacks were shown to be very effective in degrading the performance of neural networks. By slightly modifying the input, an almost identical input is misclassified by the network. To address this problem, we adopt the universal learning framework. In particular, we follow the recently suggested Predictive Normalized Maximum Likelihood (pNML) scheme for universal learning, whose goal is to optimally compete with a reference learner that knows the true label of the test sample but is restricted to use a learner from a given hypothesis class. In our case, the reference learner is using his knowledge on the true test label to perform minor refinements to the adversarial input. This reference learner achieves perfect results on any adversarial input. The proposed strategy is designed to be as close as possible to the reference learner in the worst-case scenario. Specifically, the defense essentially refines the test data according to the different hypotheses, where each hypothesis assumes a different label for the sample. Then by comparing the resulting hypotheses probabilities, we predict the label and detect whether the sample is adversarial or natural. Combining our method with adversarial training we create a robust scheme which can handle adversarial input along with detection of the attack. The resulting scheme is demonstrated empirically.""","""The reviewers attempted to give this paper a fair assessment, but were unanimous in recommending rejection. The technical quality of motivation was questioned, while the experimental evaluation was not found to be clear or convincing. Hopefully the feedback provided can help the authors improve their paper."""
71,"""NeuroFabric: Identifying Ideal Topologies for Training A Priori Sparse Networks""","['Sparsity', 'model compression', 'training', 'topology']","""Long training times of deep neural networks are a bottleneck in machine learning research. The major impediment to fast training is the quadratic growth of both memory and compute requirements of dense and convolutional layers with respect to their information bandwidth. Recently, training `a priori' sparse networks has been proposed as a method for allowing layers to retain high information bandwidth, while keeping memory and compute low. However, the choice of which sparse topology should be used in these networks is unclear. In this work, we provide a theoretical foundation for the choice of intra-layer topology. First, we derive a new sparse neural network initialization scheme that allows us to explore the space of very deep sparse networks. Next, we evaluate several topologies and show that seemingly similar topologies can often have a large difference in attainable accuracy. To explain these differences, we develop a data-free heuristic that can evaluate a topology independently from the dataset the network will be trained on. We then derive a set of requirements that make a good topology, and arrive at a single topology that satisfies all of them. ""","""This work proposes new initialization and layer topologies for training a priori sparse networks. Reviewers agreed that the direction is interesting and that the paper is well written. Additionally the theory presented on the toy matrix reconstruction task helped motivate the proposed approach. However, it is also necessary to validate the new approach by comparing with existing sparsity literature on standard benchmarks. I recommend resubmitting with the additional experiments suggested by the reviewers."""
72,"""Distributionally Robust Neural Networks""","['distributionally robust optimization', 'deep learning', 'robustness', 'generalization', 'regularization']","""Overparameterized neural networks can be highly accurate on average on an i.i.d. test set, yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that hold on average but not in such groups). Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups. However, we find that naively applying group DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss also already has vanishing worst-case training loss. Instead, the poor worst-case performance arises from poor generalization on some groups. By coupling group DRO models with increased regularization---stronger-than-typical L2 regularization or early stopping---we achieve substantially higher worst-group accuracies, with 10-40 percentage point improvements on a natural language inference task and two image tasks, while maintaining high average accuracies. Our results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization. Finally, we introduce a stochastic optimization algorithm for the group DRO setting and provide convergence guarantees for the new algorithm. ""","""This paper proposes distributionally robust optimization (DRO) to learn robust models that minimize worst-case training loss over a set of pre-defined groups. They find that increased regularization is necessary for worst-group performance in the overparametrized regime (something that is not needed for non-robust average performance). This is an interesting paper and I recommend acceptance. The discussion phase suggested a change in the title which slightly overstated the paper's contributions (a comment which I agree with). The authors agreed to change the title in the final version. """
73,"""UW-NET: AN INCEPTION-ATTENTION NETWORK FOR UNDERWATER IMAGE CLASSIFICATION""","['Underwater image', 'Convolutional neural network', 'Image classification', 'Inception module', 'Attention module']","""The classification of images taken in special imaging environments except air is the first challenge in extending the applications of deep learning. We report on an UW-Net (Underwater Network), a new convolutional neural network (CNN) based network for underwater image classification. In this model, we simulate the visual correlation of background attention with image understanding for special environments, such as fog and underwater by constructing an inception-attention (I-A) module. The experimental results demonstrate that the proposed UW-Net achieves an accuracy of 99.3% on underwater image classification, which is significantly better than other image classification networks, such as AlexNet, InceptionV3, ResNet and Se-ResNet. Moreover, we demonstrate the proposed IA module can be used to boost the performance of the existing object recognition networks. By substituting the inception module with the I-A module, the Inception-ResnetV2 network achieves a 10.7% top1 error rate and a 0% top5 error rate on the subset of ILSVRC-2012, which further illustrates the function of the background attention in the image classifications.""","""The reviewers have issues with the lack of enough experimental results as well as with novelty of the solution proposed. I recommend rejection."""
74,"""Unifying Graph Convolutional Networks as Matrix Factorization""","['graph convolutional networks', 'matrix factorization', 'unification']","""In recent years, substantial progress has been made on graph convolutional networks (GCN). In this paper, for the first time, we theoretically analyze the connections between GCN and matrix factorization (MF), and unify GCN as matrix factorization with co-training and unitization. Moreover, under the guidance of this theoretical analysis, we propose an alternative model to GCN named Co-training and Unitized Matrix Factorization (CUMF). The correctness of our analysis is verified by thorough experiments. The experimental results show that CUMF achieves similar or superior performances compared to GCN. In addition, CUMF inherits the benefits of MF-based methods to naturally support constructing mini-batches, and is more friendly to distributed computing comparing with GCN. The distributed CUMF on semi-supervised node classification significantly outperforms distributed GCN methods. Thus, CUMF greatly benefits large scale and complex real-world applications.""","""The paper makes an interesting attempt at connecting graph convolutional neural networks (GCN) with matrix factorization (MF) and then develops a MF solution that achieves similar prediction performance as GCN. While the work is a good attempt, the work suffers from two major issues: (1) the connection between GCN and other related models have been examined recently. The paper did not provide additional insights; (2) some parts of the derivations could be problematic. The paper could be a good publication in the future if the motivation of the work can be repositioned. """
75,"""Deep Variational Semi-Supervised Novelty Detection""","['anomaly detection', 'semi-supervised anomaly detection', 'variational autoencoder']","""In anomaly detection (AD), one seeks to identify whether a test sample is abnormal, given a data set of normal samples. A recent and promising approach to AD relies on deep generative models, such as variational autoencoders (VAEs),for unsupervised learning of the normal data distribution. In semi-supervised AD (SSAD), the data also includes a small sample of labeled anomalies. In this work,we propose two variational methods for training VAEs for SSAD. The intuitive idea in both methods is to train the encoder to separate between latent vectors for normal and outlier data. We show that this idea can be derived from principled probabilistic formulations of the problem, and propose simple and effective algorithms. Our methods can be applied to various data types, as we demonstrate on SSAD datasets ranging from natural images to astronomy and medicine, and can be combined with any VAE model architecture. When comparing to state-of-the-art SSAD methods that are not specific to particular data types, we obtain marked improvement in outlier detection.""","""This paper presents two novel VAE-based methods for semi-supervised anomaly detection (SSAD) where one has also access to a small set of labeled anomalous samples. The reviewers had several concerns about the paper, in particular completely addressing reviewer #3's comments would strengthen the paper."""
76,"""Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm""",[],"""How many training data are needed to learn a supervised task? It is often observed that the generalization error decreases as pseudo-formula where pseudo-formula is the number of training examples and pseudo-formula an exponent that depends on both data and algorithm. In this work we measure pseudo-formula when applying kernel methods to real datasets. For MNIST we find 0.4 and for CIFAR10 0.1 Remarkably, pseudo-formula is the same for regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we introduce the Teacher-Student framework for kernels. In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. With a simplifying assumption --- namely that the data are sampled from a regular lattice --- we derive analytically pseudo-formula for translation invariant kernels, using previous results from the kriging literature. Provided that the Student is not too sensitive to high frequencies, pseudo-formula depends only on the training data and their dimension. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, our results quantify how smooth Gaussian data should be to avoid the curse of dimensionality, and indicate that for kernel learning the relevant dimension of the data should be defined in terms of how the distance between nearest data points depends on pseudo-formula . With this definition one obtains reasonable effective smoothness estimates for MNIST and CIFAR10.""","""The paper studies, theoretically and empirically, the problem when generalization error decreases as pseudo-formula where pseudo-formula is not pseudo-formula . It analyses a Teacher-Student problem where the Teacher generates data from a Gaussian random field. The paper provides a theorem that derives pseudo-formula for Gaussian and Laplace kernels, and show empirical evidence supporting the theory using MNIST and CIFAR. The reviews contained two low scores, both of which were not confident. A more confident reviewer provided a weak accept score, and interacted multiple times with the authors during the discussion period (which is one of the nice things about the ICLR review process). However, this reviewer also noted that ICLR may not be the best venue for this work. Overall, while this paper shows promise, the negative review scores show that the topic may not be the best fit to the ICLR audience."""
77,"""Unsupervised Out-of-Distribution Detection with Batch Normalization""",[],"""Likelihood from a generative model is a natural statistic for detecting out-of-distribution (OoD) samples. However, generative models have been shown to assign higher likelihood to OoD samples compared to ones from the training distribution, preventing simple threshold-based detection rules. We demonstrate that OoD detection fails even when using more sophisticated statistics based on the likelihoods of individual samples. To address these issues, we propose a new method that leverages batch normalization. We argue that batch normalization for generative models challenges the traditional \emph{i.i.d.} data assumption and changes the corresponding maximum likelihood objective. Based on this insight, we propose to exploit in-batch dependencies for OoD detection. Empirical results suggest that this leads to more robust detection for high-dimensional images.""","""The authors observe that batch normalization using the statistics computed from a *test* batch significantly improves out-of-distribution detection with generative models. Essentially, normalizing an OOD test batch using the test batch statistics decreases the likelihood of that batch and thus improves detection of OOD examples. The reviewers seemed concerned with this setting and they felt that it gives a significant advantage over existing methods since they typically deal with single test example. The reviewers thus wanted empirical comparisons to methods designed for this setting, i.e. traditional statistical tests for comparing distributions. Despite some positive discussion, this paper unfortunately falls below the bar for acceptance. The authors added significant experiments and hopefully adding these and additional analysis providing some insight into how the batchnorm is helping would make for a stronger submission to a future conference."""
78,"""Image-guided Neural Object Rendering""","['Neural Rendering', 'Neural Image Synthesis']","""We propose a learned image-guided rendering technique that combines the benefits of image-based rendering and GAN-based image synthesis. The goal of our method is to generate photo-realistic re-renderings of reconstructed objects for virtual and augmented reality applications (e.g., virtual showrooms, virtual tours and sightseeing, the digital inspection of historical artifacts). A core component of our work is the handling of view-dependent effects. Specifically, we directly train an object-specific deep neural network to synthesize the view-dependent appearance of an object. As input data we are using an RGB video of the object. This video is used to reconstruct a proxy geometry of the object via multi-view stereo. Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering. This warping assumes diffuse surfaces, in case of view-dependent effects, such as specular highlights, it leads to artifacts. To this end, we propose EffectsNet, a deep neural network that predicts view-dependent effects. Based on these estimations, we are able to convert observed images to diffuse images. These diffuse images can be projected into other views. In the target view, our pipeline reinserts the new view-dependent effects. To composite multiple reprojected images to a final output, we learn a composition network that outputs photo-realistic results. Using this image-guided approach, the network does not have to allocate capacity on ``remembering'' object appearance, instead it learns how to combine the appearance of captured images. We demonstrate the effectiveness of our approach both qualitatively and quantitatively on synthetic as well as on real data.""","""The paper presents a new variation of neural (re) rendering of objects, that uses a set of two deep ConvNets to model non-Lambertian effects associated with an object. The paper has received mostly positive reviews. The reviewers agree that the contribution is well-described, valid and valuable. The method is validated against strong baselines including Hedman et al., though Reviewer4 rightfully points out that the comparison might have been more thorough. One additional concern not raised by the reviewers is the lack of comparison with [Thies et al. 2019], which is briefly mentioned but not discussed. The authors are encouraged to provide a corresponding comparison (as well as additional comparisons with Hedman et al) and discuss pros and cons w.r.t. [Thies et al] in the final version."""
79,"""Gradient-based training of Gaussian Mixture Models in High-Dimensional Spaces""","['GMM', 'SGD']","""We present an approach for efficiently training Gaussian Mixture Models (GMMs) with Stochastic Gradient Descent (SGD) on large amounts of high-dimensional data (e.g., images). In such a scenario, SGD is strongly superior in terms of execution time and memory usage, although it is conceptually more complex than the traditional Expectation-Maximization (EM) algorithm. For enabling SGD training, we propose three novel ideas: First, we show that minimizing an upper bound to the GMM log likelihood instead of the full one is feasible and numerically much more stable way in high-dimensional spaces. Secondly, we propose a new regularizer that prevents SGD from converging to pathological local minima. And lastly, we present a simple method for enforcing the constraints inherent to GMM training when using SGD. We also propose an SGD-compatible simplification to the full GMM model based on local principal directions, which avoids excessive memory use in high-dimensional spaces due to quadratic growth of covariance matrices. Experiments on several standard image datasets show the validity of our approach, and we provide a publicly available TensorFlow implementation.""","""The paper presents an SGD-based learning of a Gaussian mixture model, designed to match a data streaming setting. The reviews state that the paper contains some quite good points, such as * the simplicity and scalability of the method, and its robustness w.r.t. the initialization of the approach; * the SOM-like approach used to avoid degenerated solutions; Among the weaknesses are * an insufficient discussion wrt the state of the art, e.g. for online EM; * the description of the approach seems yet not mature (e.g., the constraint enforcement boils down to considering that the pseudo-formula are obtained using softmax; the discussion about the diagonal covariance matrix vs the use of local principal directions is not crystal clear); * the fact that experiments need be strengthened. I thus encourage the authors to rewrite and polish the paper, simplifying the description of the approach and better positioning it w.r.t. the state of the art (in particular, mentioning the data streaming motivation from the start). Also, more evidence, and a more thorough analysis thereof, must be provided to back up the approach and understand its limitations."""
80,"""Bayesian Meta Sampling for Fast Uncertainty Adaptation""","['Bayesian Sampling', 'Uncertainty Adaptation', 'Meta Learning', 'Variational Inference']","""Meta learning has been making impressive progress for fast model adaptation. However, limited work has been done on learning fast uncertainty adaption for Bayesian modeling. In this paper, we propose to achieve the goal by placing meta learning on the space of probability measures, inducing the concept of meta sampling for fast uncertainty adaption. Specifically, we propose a Bayesian meta sampling framework consisting of two main components: a meta sampler and a sample adapter. The meta sampler is constructed by adopting a neural-inverse-autoregressive-flow (NIAF) structure, a variant of the recently proposed neural autoregressive flows, to efficiently generate meta samples to be adapted. The sample adapter moves meta samples to task-specific samples, based on a newly proposed and general Bayesian sampling technique, called optimal-transport Bayesian sampling. The combination of the two components allows a simple learning procedure for the meta sampler to be developed, which can be efficiently optimized via standard back-propagation. Extensive experimental results demonstrate the efficiency and effectiveness of the proposed framework, obtaining better sample quality and faster uncertainty adaption compared to related methods.""","""This paper presents a meta-learning algorithm that represents uncertainty both at the meta-level and at the task-level. The approach contains an interesting combination of techniques. The reviewers raised concerns about the thoroughness of the experiments, which were resolved in a convincing way in the rebuttal. Concerns about clarity remain, and the authors are *strongly encouraged* to revise the paper throughout to make the presentation more clear and understandable, including to readers who do not have a meta-learning background. See the reviewer's comments for further details on how the organization of the paper and the presentation of the ideas can be improved."""
81,"""Unknown-Aware Deep Neural Network""","['unknown', 'rejection', 'CNN', 'product relationship']","""An important property of image classification systems in the real world is that they both accurately classify objects from target classes (``knowns'') and safely reject unknown objects (``unknowns'') that belong to classes not present in the training data. Unfortunately, although the strong generalization ability of existing CNNs ensures their accuracy when classifying known objects, it also causes them to often assign an unknown to a target class with high confidence. As a result, simply using low-confidence detections as a way to detect unknowns does not work well. In this work, we propose an Unknown-aware Deep Neural Network (UDN for short) to solve this challenging problem. The key idea of UDN is to enhance existing CNNs to support a product operation that models the product relationship among the features produced by convolutional layers. This way, missing a single key feature of a target class will greatly reduce the probability of assigning an object to this class. UDN uses a learned ensemble of these product operations, which allows it to balance the contradictory requirements of accurately classifying known objects and correctly rejecting unknowns. To further improve the performance of UDN at detecting unknowns, we propose an information-theoretic regularization strategy that incorporates the objective of rejecting unknowns into the learning process of UDN. We experiment on benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN, adding unknowns by injecting one dataset into another. Our results demonstrate that UDN significantly outperforms state-of-the-art methods at rejecting unknowns by 25 percentage points improvement in accuracy, while still preserving the classification accuracy. ""","""This paper proposes the unknown-aware deep neural network (UDN), which can discover out-of-distribution samples for CNN classifiers. Experiments show that the proposed method has an improved rejection accuracy while maintaining a good classification accuracy on the test set. Three reviewers have split reviews. Reviewer #2 provides positive review for this work, while indicating that he is not an expert in image classification. Reviewer #1 agrees that the topic is interesting, yet the experiment is not so convincing, especially with limited and simple databases. Reviewer #3 shared the similar concern that the experiments are not sufficient. Further, R3 felt that the main idea is not well explained. The ACs concur these major concerns and agree that the paper can not be accepted at its current state."""
82,"""HIPPOCAMPAL NEURONAL REPRESENTATIONS IN CONTINUAL LEARNING""",[],"""The hippocampus has long been associated with spatial memory and goal-directed spatial navigation. However, the regions independent role in continual learning of navigational strategies has seldom been investigated. Here we analyse populationlevel activity of hippocampal CA1 neurons in the context of continual learning of two different spatial navigation strategies. Demixed Principal Component Analysis (dPCA) is applied on neuronal recordings from 612 hippocampal CA1 neurons of rodents learning to perform allocentric and egocentric spatial tasks. The components uncovered using dPCA from the firing activity reveal that hippocampal neurons encode relevant task variables such decisions, navigational strategies and reward location. We compare this hippocampal features with standard reinforcement learning algorithms, highlighting similarities and differences. Finally, we demonstrate that a standard deep reinforcement learning model achieves similar average performance when compared to animal learning, but fails to mimic animals during task switching. Overall, our results gives insights into how the hippocampus solves reinforced spatial continual learning, and puts forward a framework to explicitly compare biological and machine learning during spatial continual learning.""","""This paper analyzes neural recording data taken from rodents performing a continual learning task using demixed principal component analysis, and aims to find representations for behaviorally relevant variables. They compare these features with those of a deep RL agent. I am a big fan of papers like this that try to bridge between neuroscience and machine learning. It seems to have a great motivation and there are some interesting results presented. However the reviewers pointed out many issues that lead me to believe this work is not quite ready for publication. In particular, not considering space when analyzing hippocampal rodent data, as R2 points out, seems to be a major oversight. In addition, the sample size is incredibly small (5 rats, only 1 of which was used for the continual learning simulation). This seems to me like more of an exploratory, pilot study than a full experiment that is ready for publication, and therefore I am unfortunately recommending reject. Reviewer comments were very thorough and on point. Sounds like the authors are already working on the next version of the paper with these points in mind, so I look forward to it. """
83,"""Self-Adversarial Learning with Comparative Discrimination for Text Generation""","['adversarial learning', 'text generation']","""Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples. To address the issues, we propose a novel self-adversarial learning (SAL) paradigm for improving GANs' performance in text generation. In contrast to standard GANs that use a binary classifier as its discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator which is a pairwise classifier for comparing the text quality between a pair of samples. During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples. This self-improvement reward mechanism allows the model to receive credits more easily and avoid collapsing towards the limited number of real samples, which not only helps alleviate the reward sparsity issue but also reduces the risk of mode collapse. Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity, and yields more stable performance compared to the previous GANs for text generation.""","""This paper proposes a method for improving training of text generation with GANs by performing discrimination between different generated examples, instead of solely between real and generated examples. R3 and R1 appreciated the general idea, and thought that while there are still concerns, overall the paper seems to be interesting enough to warrant publication at ICLR. R2 has a rating of ""weak reject"", but I tend to agree with the authors that comparison with other methods that use different model architectures is orthogonal to the contribution of this paper. In sum, I think that this paper would likely make a good contribution to ICLR and recommend acceptance."""
84,"""All Simulations Are Not Equal: Simulation Reweighing for Imperfect Information Games""","['Contract Bridge', 'Simulation', 'Imperfect Information Games', 'Reweigh', 'Belief Modeling']","""Imperfect information games are challenging benchmarks for artificial intelligent systems. To reason and plan under uncertainty is a key towards general AI. Traditionally, large amounts of simulations are used in imperfect information games, and they sometimes perform sub-optimally due to large state and action spaces. In this work, we propose a simulation reweighing mechanism using neural networks. It performs backwards verification to public previous actions and assign proper belief weights to the simulations from the information set of the current observation, using an incomplete state solver network (ISSN). We use simulation reweighing in the playing phase of the game contract bridge, and show that it outperforms previous state-of-the-art Monte Carlo simulation based methods, and achieves better play per decision. ""","""A method is introduced to estimate the hidden state in imperfect information in multiplayer games, in particular Bridge. This is interesting, but the paper falls short in various ways. Several reviewers complained about the readability of the paper, and also about the quality and presentation of the interesting results. It seems that this paper represents an interesting idea, but is not yet ready for publication."""
85,"""Hierarchical Graph-to-Graph Translation for Molecules""","['graph generation', 'deep learning']","""The problem of accelerating drug discovery relies heavily on automatic tools to optimize precursor molecules to afford them with better biochemical properties. Our work in this paper substantially extends prior state-of-the-art on graph-to-graph translation methods for molecular optimization. In particular, we realize coherent multi-resolution representations by interweaving the encoding of substructure components with the atom-level encoding of the original molecular graph. Moreover, our graph decoder is fully autoregressive, and interleaves each step of adding a new substructure with the process of resolving its attachment to the emerging molecule. We evaluate our model on multiple molecular optimization tasks and show that our model significantly outperforms previous state-of-the-art baselines.""","""Two reviewers are negative on this paper while the other reviewer is slightly positive. Overall, the paper does not make the bar of ICLR. A reject is recommended."""
86,"""The asymptotic spectrum of the Hessian of DNN throughout training""","['theory of deep learning', 'loss surface', 'training', 'fisher information matrix']","""The dynamics of DNNs during gradient descent is described by the so-called Neural Tangent Kernel (NTK). In this article, we show that the NTK allows one to gain precise insight into the Hessian of the cost of DNNs: we obtain a full characterization of the asymptotics of the spectrum of the Hessian, at initialization and during training. ""","""This paper studies the spectrum of the Hessian through training, making connections with the NTK limit. While many of the results are perhaps unsurprising, and more empirically driven, together the paper represents a valuable contribution towards our understanding of generalization in deep learning. Please carefully account for the reviewer comments in the final version."""
87,"""VILD: Variational Imitation Learning with Diverse-quality Demonstrations""","['Imitation learning', 'inverse reinforcement learning', 'noisy demonstrations']","""The goal of imitation learning (IL) is to learn a good policy from high-quality demonstrations. However, the quality of demonstrations in reality can be diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs. IL in such situations can be challenging, especially when the level of demonstrators' expertise is unknown. We propose a new IL paradigm called Variational Imitation Learning with Diverse-quality demonstrations (VILD), where we explicitly model the level of demonstrators' expertise with a probabilistic graphical model and estimate it along with a reward function. We show that a naive estimation approach is not suitable to large state and action spaces, and fix this issue by using a variational approach that can be easily implemented using existing reinforcement learning methods. Experiments on continuous-control benchmarks demonstrate that VILD outperforms state-of-the-art methods. Our work enables scalable and data-efficient IL under more realistic settings than before.""","""The paper proposes a new imitation learning algorithm that explicitly models the quality of demonstrators. All reviewers agreed that the problem and the approach were interesting, the paper well-written, and the experiments well-conducted. However, there was a shared concern about the applicability of the method to more realistic settings, in which the model generating the demonstrations does not fall under the assumptions of the method. The authors did add a real-world experiment during the rebuttal, but the reviewers were suspicious of the reported InfoGAIL performance and were not persuaded to change their assessment. Following this discussion, I recommend rejection at this time, but it seems like a good paper and I encourage the authors to do a more careful validation experiment, and resubmit to a future venue."""
88,"""Visual Interpretability Alone Helps Adversarial Robustness""","['adversarial robustness', 'visual explanation', 'CNN', 'image classification']","""Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability, and interpretability is itself susceptible to adversarial attacks. In this paper, we theoretically show that with the correct measurement of interpretation, it is actually difficult to hide adversarial examples, as confirmed by experiments on MNIST, CIFAR-10 and Restricted ImageNet. Spurred by that, we develop a novel defensive scheme built only on robust interpretation (without resorting to adversarial loss minimization). We show that our defense achieves similar classification robustness to state-of-the-art robust training methods while attaining higher interpretation robustness under various settings of adversarial attacks.""","""This work focuses on how one can design models with robustness of interpretations. While this is an interesting direction, the paper would benefit from a more careful treatment of its technical claims. """
89,"""An Inductive Bias for Distances: Neural Nets that Respect the Triangle Inequality""","['metric learning', 'deep metric learning', 'neural network architectures', 'triangle inequality', 'graph distances']","""Distances are pervasive in machine learning. They serve as similarity measures, loss functions, and learning targets; it is said that a good distance measure solves a task. When defining distances, the triangle inequality has proven to be a useful constraint, both theoretically---to prove convergence and optimality guarantees---and empirically---as an inductive bias. Deep metric learning architectures that respect the triangle inequality rely, almost exclusively, on Euclidean distance in the latent space. Though effective, this fails to model two broad classes of subadditive distances, common in graphs and reinforcement learning: asymmetric metrics, and metrics that cannot be embedded into Euclidean space. To address these problems, we introduce novel architectures that are guaranteed to satisfy the triangle inequality. We prove our architectures universally approximate norm-induced metrics on pseudo-formula , and present a similar result for modified Input Convex Neural Networks. We show that our architectures outperform existing metric approaches when modeling graph distances and have a better inductive bias than non-metric approaches when training data is limited in the multi-goal reinforcement learning setting. ""","""This paper proposes a neural network approach to approximate distances, based on a representation of norms in terms of convex homogeneous functions. The authors show universal approximation of norm-induced metrics and present applications to value-function approximation in RL and graph distance problems. Reviewers were in general agreement that this is a solid paper, well-written and with compelling results. The AC shares this positive assessment and therefore recommends acceptance. """
90,"""Improving Semantic Parsing with Neural Generator-Reranker Architecture""","['Natural Language Processing', 'Semantic Parsing', 'Neural Reranking']","""Semantic parsing is the problem of deriving machine interpretable meaning representations from natural language utterances. Neural models with encoder-decoder architectures have recently achieved substantial improvements over traditional methods. Although neural semantic parsers appear to have relatively high recall using large beam sizes, there is room for improvement with respect to one-best precision. In this work, we propose a generator-reranker architecture for semantic parsing. The generator produces a list of potential candidates and the reranker, which consists of a pre-processing step for the candidates followed by a novel critic network, reranks these candidates based on the similarity between each candidate and the input sentence. We show the advantages of this approach along with how it improves the parsing performance through extensive analysis. We experiment our model on three semantic parsing datasets (GEO, ATIS, and OVERNIGHT). The overall architecture achieves the state-of-the-art results in all three datasets. ""","""This paper presents and evaluates a technique for semantic parsing, and in particular proposes a model to re-rank the candidates generated by beam search. The paper was reviewed by 3 experts and received Reject, Weak Reject, and Weak Reject opinions. The reviews identified strengths of the paper but also significant concerns, mostly centered around the experimental evaluation (including choice of datasets, lack of direct comparison to baselines, need for more methodical and quantitative analysis, need for additional analysis, etc.) and some questions about the design of the technical approach. The authors submitted responses that addressed some of these concerns, but indicated that additional experimentation would be needed to address all of them. In light of these reviews, we are not able to recommend acceptance at this time, but I hope authors use the detailed, constructive feedback to improve the paper for another venue."""
91,"""Natural- to formal-language generation using Tensor Product Representations""","['Neural Symbolic Reasoning', 'Deep Learning', 'Natural Language Processing', 'Structural Representation', 'Interpretation of Learned Representations']","""Generating formal-language represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires to explicitly capture discrete symbolic structural information from the input to generate the output. Most state-of-the-art neural sequence models do not explicitly capture such structure information, and thus do not perform well on these tasks. In this paper, we propose a new encoder-decoder model based on Tensor Product Representations (TPRs) for Natural- to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space and the decoder uses TPR 'unbinding' to generate a sequence of relational tuples, each consisting of a relation (or operation) and a number of arguments, in symbolic space. TP-N2F considerably outperforms LSTM-based Seq2Seq models, creating a new state of the art results on two benchmarks: the MathQA dataset for math problem solving, and the AlgoList dataset for program synthesis. Ablation studies show that improvements are mainly attributed to the use of TPRs in both the encoder and decoder to explicitly capture relational structure information for symbolic reasoning. ""","""The paper proposed a new seq2seq method to implement natural language to formal language translation. Fixed length Tensor Product Representations are used as the intermediate representation between encoder and decoder. Experiments are conducted on MathQA and AlgoList datasets and show the effectiveness of the methods. Intensive discussions happened between the authors and reviewers. Despite of the various concerns raised by the reviewers, a main problem pointed by both reviewer#3 and reviewer#4 is that there is a gap between the theory and the implementation in this paper. The other reviewer (#2) likes the paper but is less confident and tend to agree with the other two reviewers."""
92,"""Adaptive network sparsification with dependent variational beta-Bernoulli dropout""","['network sparsification', 'variational inference', 'pruning']","""While variational dropout approaches have been shown to be effective for network sparsification, they are still suboptimal in the sense that they set the dropout rate for each neuron without consideration of the input data. With such input independent dropout, each neuron is evolved to be generic across inputs, which makes it difficult to sparsify networks without accuracy loss. To overcome this limitation, we propose adaptive variational dropout whose probabilities are drawn from sparsity inducing beta-Bernoulli prior. It allows each neuron to be evolved either to be generic or specific for certain inputs, or dropped altogether. Such input-adaptive sparsity- inducing dropout allows the resulting network to tolerate larger degree of sparsity without losing its expressive power by removing redundancies among features. We validate our dependent variational beta-Bernoulli dropout on multiple public datasets, on which it obtains significantly more compact networks than baseline methods, with consistent accuracy improvements over the base networks.""","""This paper introduces a new adaptive variational dropout approach to balance accuracy, sparsity and computation. The method proposed here is sound, the motivation for smaller (perhaps sparser) networks is easy to follow. The paper provides experiments in several data-sets and compares against several other regularization/pruning approaches, and measures accuracy, speedup, and memory. The reviewers agreed on all these points, but overall they found the results unconvincing. They requested (1) more baselines (which the authors added), (2) larger tasks/datasets, and (3) more variety in network architectures. The overall impression was it was hard to see a clear benefit of the proposed approach, based on the provided tables of results. The paper could sharpen its impact with several adjustments. The results are much more clear looking at the error vs speedup graphs. Presenting ""representative results"" in the tables was confusing, especially considering the proposed approach rarely dominated across all measures. It was unclear how the variants of the algorithms presented in the tables were selected---explaining this would help a lot. In addition, more text is needed to help the reader understand how improvements in speed, accuracy, and memory matter. For example in LeNet 500-300 is a speedup of ~12 @ 1.26 error for BB worth-it/important compared a speedup of ~8 for similar error for L_0? How should the reader think about differences in speedup, memory and accuracy---perhaps explanations linking to the impact of these metrics to their context in real applications. I found myself wondering this about pretty much every result, especially when better speedup and memory could be achieved at the cost of some accuracy---how much does the reduction in accuracy actually matter? Is speed and size the dominant thing? I don't know. Overall the analysis and descriptions of the results are very terse, leaving much to the reader to figure out. For example (fig 2 bottom right). If a result is worth including in the paper it's worth explaining it to the reader. 
Summary statements like ""BB and DBB either achieve significantly smaller error than the baseline methods, or significant speedup and memory saving at similar error rates."" Is not helpful where there are so many dimensions of performance to figure out. The paper spends a lot of time explaining what was done in a matter of fact way, but little time helping the reader interpret the results. There are other issues that hurt the paper, including reporting the results of only 3 runs, sometimes reporting median without explanation, undefined metrics like speedup ,%memory (explain how they are calculated), restricting the batchsize for all methods to a particular value without explanation, and overall somewhat informal and imprecise discussion of the empirical methodology. The authors did a nice job responding to the reviewers (illustrating good understanding of the area and the strengths of their method), and this could be a strong paper indeed if the changes suggested above were implemented. Including SSL and SVG in the appendix was great, but they really should have been included in the speedup vs error plots throughout the paper. This is a nice direction and was very close. Keep going!"""
93,"""Winning the Lottery with Continuous Sparsification""",[],"""The Lottery Ticket Hypothesis from Frankle & Carbin (2019) conjectures that, for typically-sized neural networks, it is possible to find small sub-networks which train faster and yield superior performance than their original counterparts. The proposed algorithm to search for such sub-networks (winning tickets), Iterative Magnitude Pruning (IMP), consistently finds sub-networks with 90-95% less parameters which indeed train faster and better than the overparameterized models they were extracted from, creating potential applications to problems such as transfer learning. In this paper, we propose a new algorithm to search for winning tickets, Continuous Sparsification, which continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies. We show empirically that our method is capable of finding tickets that outperforms the ones learned by Iterative Magnitude Pruning, and at the same time providing up to 5 times faster search, when measured in number of training epochs.""","""This paper proposes a new algorithm called Continuous Sparsification (CS) to search for winning tickets (in the context of the Lottery Ticket Hypothesis from Frankle & Carbin (2019)), as an alternative to the Iterative Magnitude Pruning (IMP) algorithm proposed therein. CS continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies. The papers shows empirically that CS finds lottery tickets that outperforms the ones learned by ITS with up to 5 times faster search, when measured in number of training epochs. While this paper presents a novel contribution of pruning and of finding winning lottery tickets and is very well written, there are some concerns raised by the reviewers regarding the current evaluation. The paper presents no concrete data on the comparative costs of performing CS and IMP even though the core claim is that CS is more efficient. The paper does not disclose enough detail to compute these costs, and it seems like CS is more expensive than IMP for standard workflows. Moreover, the current presentation of the data through ""pareto curves"" is misleadingly favorable to CS. The reviewers suggest including more experiments on ImageNet and a more thorough evaluation as a pruning technique beyond the lottery ticket hypothesis. We recommend the authors to address the detailed reviewers' comments in an eventual ressubmission. """
94,"""Improving Exploration of Deep Reinforcement Learning using Planning for Policy Search""","['reinforcement learning', 'kinodynamic planning', 'policy search']","""Most Deep Reinforcement Learning methods perform local search and therefore are prone to get stuck on non-optimal solutions. Furthermore, in simulation based training, such as domain-randomized simulation training, the availability of a simulation model is not exploited, which potentially decreases efficiency. To overcome issues of local search and exploit access to simulation models, we propose the use of kino-dynamic planning methods as part of a model-based reinforcement learning method and to learn in an off-policy fashion from solved planning instances. We show that, even on a simple toy domain, D-RL methods (DDPG, PPO, SAC) are not immune to local optima and require additional exploration mechanisms. We show that our planning method exhibits a better state space coverage, collects data that allows for better policies than D-RL methods without additional exploration mechanisms and that starting from the planner data and performing additional training results in as good as or better policies than vanilla D-RL methods, while also creating data that is more fit for re-use in modified tasks. ""","""The paper is about exploration in deep reinforcement learning. The reviewers agree that this is an interesting and important topic, but the authors provide only a slim analysis and theoretical support for the proposed methods. Furthermore, the authors are encouraged to evaluate the proposed method on more than a single benchmark problem."""
95,"""AdaX: Adaptive Gradient Descent with Exponential Long Term Memory""","['Optimization Algorithm', 'Machine Learning', 'Deep Learning', 'Adam']","""Adaptive optimization algorithms such as RMSProp and Adam have fast convergence and smooth learning process. Despite their successes, they are proven to have non-convergence issue even in convex optimization problems as well as weak performance compared with the first order gradient methods such as stochastic gradient descent (SGD). Several other algorithms, for example AMSGrad and AdaShift, have been proposed to alleviate these issues but only minor effect has been observed. This paper further analyzes the performance of such algorithms in a non-convex setting by extending their non-convergence issue into a simple non-convex case and show that Adam's design of update steps would possibly lead the algorithm to local minimums. To address the above problems, we propose a novel adaptive gradient descent algorithm, named AdaX, which accumulates the long-term past gradient information exponentially. We prove the convergence of AdaX in both convex and non-convex settings. Extensive experiments show that AdaX outperforms Adam in various tasks of computer vision and natural language processing and can catch up with SGD. ""","""This paper analyzes the non-convergence issue in Adam in a simple non-convex case. The authors propose a new adaptive gradient descent algorithm based on exponential long term memory, and analyze its convergence in both convex and non-convex settings. The major weakness of this paper pointed out by many reviewers is its experimental evaluation, ranging from experimental design to missing comparison with strong baseline algorithms. I agree with the reviewers evaluation and thus recommend reject."""
96,"""Matrix Multilayer Perceptron""","['Multilayer Perceptron', 'symmetric positive definite', 'heteroscedastic regression', 'covariance estimation']","""Models that output a vector of responses given some inputs, in the form of a conditional mean vector, are at the core of machine learning. This includes neural networks such as the multilayer perceptron (MLP). However, models that output a symmetric positive definite (SPD) matrix of responses given inputs, in the form of a conditional covariance function, are far less studied, especially within the context of neural networks. Here, we introduce a new variant of the MLP, referred to as the matrix MLP, that is specialized at learning SPD matrices. Our construction not only respects the SPD constraint, but also makes explicit use of it. This translates into a model which effectively performs the task of SPD matrix learning even in scenarios where data are scarce. We present an application of the model in heteroscedastic multivariate regression, including convincing performance on six real-world datasets. ""","""This paper introduces a novel architecture and loss for estimating PSD matrices using neural networks. There is some theoretical justification for the architecture, and a small-scale but encouraging experiment. Overall, I think there is a sensible contribution here, but there are so many architectural and computational choices presented together at once that it's hard to tell what the important parts are. The main problems with this paper are: 1) The scalability of the approach O(N^3) 2) The derivation of the architecture and gradient computations wasn't clear about what choices were available and why. Several alternative choices were mentioned but not evaluated. I think the authors also need to improve their understanding of automatic differentiation. Backprop through eigendecomposition is already available in most autodiff packages. It was claimed that a certain kind of matrix derivative provided better generalization, which seems like a strong claim to make in general. 3) The experimental setup seemed contrived, except for the heteroskedastic regression experiments, which lacked competitive baselines. Why were the GP and MLPs homoskedastic? As a matter of personal preference, I found that having 4 different ""H""s differing only in font and capitalization for the network architecture was hard to keep track of. I agree that R1 had some unjustified comments and R2's review was contentless. I apologize for these inadequate reviews. """
97,"""Deep amortized clustering""","['clustering', 'amortized inference', 'meta learning', 'deep learning']","""We propose a \textit{deep amortized clustering} (DAC), a neural architecture which learns to cluster datasets efficiently using a few forward passes. DAC implicitly learns what makes a cluster, how to group data points into clusters, and how to count the number of clusters in datasets. DAC is meta-learned using labelled datasets for training, a process distinct from traditional clustering algorithms which usually require hand-specified prior knowledge about cluster shapes/structures. We empirically show, on both synthetic and image data, that DAC can efficiently and accurately cluster new datasets coming from the same distribution used to generate training datasets. ""","""This paper introduces a new clustering method, which builds upon the work introduced by Lee et al, 2019 - contextual information across different dataset samples is gathered with a transformer, and then used to predict the cluster label for a given sample. All reviewers agree the writing should be improved and clarified. The novelty is also on the low side, given the previous work by Lee et al. Experiments should be more convincing. """
98,"""Starfire: Regularization-Free Adversarially-Robust Structured Sparse Training""","['Structured Sparsity', 'Sparsity', 'Training', 'Compression', 'Adversarial', 'Regularization', 'Acceleration']","""This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity so that it reduces overhead caused by regularization. The proposed training methodology explores several options for structured sparsity. We study various tradeoffs with respect to pruning duration, learning-rate configuration, and the total length of training. We show that our method creates a sparse version of ResNet50 and ResNet50v1.5 on full ImageNet while remaining within a negligible <1% margin of accuracy loss. To make sure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with 70% target sparsity, over 75% top-1 accuracy is achievable. ""","""This paper concerns a training procedure for neural networks which results in sparse connectivity in the final resulting network, consisting of an ""early era"" of training in which pruning takes place, followed by fixed connectivity training thereafter, and a study of tradeoffs inherent in various approaches to structured and unstructured pruning, and an investigation of adversarial robustness of pruned networks. While some reviewers found the general approach interesting, all reviewers were critical of the lack of novelty, clarity and empirical rigour. R2 in particular raised concerns about the motivation, evaluation of computational savings (that FLOPS should be measured directly), and felt that the discussion of adversarial robustness was out of place and ""an afterthought"". Reviewers were unconvinced by rebuttals, and no attempts were made at improving the paper (additional experiments were promised, but not delivered). I therefore recommend rejection. """
99,"""Walking the Tightrope: An Investigation of the Convolutional Autoencoder Bottleneck""","['convolutional autoencoder', 'bottleneck', 'representation learning']","""In this paper, we present an in-depth investigation of the convolutional autoencoder (CAE) bottleneck. Autoencoders (AE), and especially their convolutional variants, play a vital role in the current deep learning toolbox. Researchers and practitioners employ CAEs for a variety of tasks, ranging from outlier detection and compression to transfer and representation learning. Despite their widespread adoption, we have limited insight into how the bottleneck shape impacts the emergent properties of the CAE. We demonstrate that increased height and width of the bottleneck drastically improves generalization, which in turn leads to better performance of the latent codes in downstream transfer learning tasks. The number of channels in the bottleneck, on the other hand, is secondary in importance. Furthermore, we show empirically, that, contrary to popular belief, CAEs do not learn to copy their input, even when the bottleneck has the same number of neurons as there are pixels in the input. Copying does not occur, despite training the CAE for 1,000 epochs on a tiny (~ 600 images) dataset. We believe that the findings in this paper are directly applicable and will lead to improvements in models that rely on CAEs.""","""The paper investigates the effect of convolutional information bottlenecks to generalization. The paper concludes that the width and height of the bottleneck can greatly influence generalization, whereas the number of channels has smaller effect. The paper also shows evidence against a common belief that CAEs with sufficiently large bottleneck will learn an identity map. During the rebuttal period, there was a long discussion mainly about the sufficiency of the experimental setup and the trustworthiness of the claims made in the paper. A paper that empirically investigates an exiting method or belief should include extensive experiments of high quality in to enable general conclusions. Im thus recommending rejection, but encourage the authors to improve the experiments and resubmitting."""
100,"""Deep Semi-Supervised Anomaly Detection""","['anomaly detection', 'deep learning', 'semi-supervised learning', 'unsupervised learning', 'outlier detection', 'one-class classification', 'deep anomaly detection', 'deep one-class classification']","""Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets. Typically anomaly detection is treated as an unsupervised learning problem. In practice however, one may have---in addition to a large set of unlabeled samples---access to a small pool of labeled samples, e.g. a subset verified by some domain expert as being normal or anomalous. Semi-supervised approaches to anomaly detection aim to utilize such labeled samples, but most proposed methods are limited to merely including labeled normal samples. Only a few methods take advantage of labeled anomalies, with existing deep approaches being domain-specific. In this work we present Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection. We further introduce an information-theoretic framework for deep anomaly detection based on the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution, which can serve as a theoretical interpretation for our method. In extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, along with other anomaly detection benchmark datasets, we demonstrate that our method is on par or outperforms shallow, hybrid, and deep competitors, yielding appreciable performance improvements even when provided with only little labeled data.""","""Issues raised by the reviewers have been addressed by the authors, and thus I suggest the acceptance of this paper."""
101,"""EvoNet: A Neural Network for Predicting the Evolution of Dynamic Graphs""","['temporal graphs', 'graph neural network', 'graph generative model', 'graph topology prediction']","""Neural networks for structured data like graphs have been studied extensively in recent years. To date, the bulk of research activity has focused mainly on static graphs. However, most real-world networks are dynamic since their topology tends to change over time. Predicting the evolution of dynamic graphs is a task of high significance in the area of graph mining. Despite its practical importance, the task has not been explored in depth so far, mainly due to its challenging nature. In this paper, we propose a model that predicts the evolution of dynamic graphs. Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs. Then, we employ a generative model which predicts the topology of the graph at the next time step and constructs a graph instance that corresponds to that topology. We evaluate the proposed model on several artificial datasets following common network evolving dynamics, as well as on real-world datasets. Results demonstrate the effectiveness of the proposed model. ""","""The paper proposes a combination graph neural networks and graph generation model (GraphRNN) to model the evolution of dynamic graphs for predicting the topology of next graph given a sequence of graphs. The problem to be addressed seems interesting, but lacks strong motivation. Therefore it would be better if some important applications can be specified. The proposed approach lacks novelty. It would be better to point out why the specific combination of two existing models is the most appropriate approach to address the task. The experiments are not fully convincing. Bigger and comprehensive datasets (with the right motivating applications) should be used to test the effectiveness of the proposed model. In short, the current version failed to raise excitement from readers due to the reasons above. A major revision addressing these issues could lead to a strong publication in the future. """
102,"""Step Size Optimization""","['Deep Learning', 'Step Size Adaptation', 'Nonconvex Optimization']","""This paper proposes a new approach for step size adaptation in gradient methods. The proposed method called step size optimization (SSO) formulates the step size adaptation as an optimization problem which minimizes the loss function with respect to the step size for the given model parameters and gradients. Then, the step size is optimized based on alternating direction method of multipliers (ADMM). SSO does not require the second-order information or any probabilistic models for adapting the step size, so it is efficient and easy to implement. Furthermore, we also introduce stochastic SSO for stochastic learning environments. In the experiments, we integrated SSO to vanilla SGD and Adam, and they outperformed state-of-the-art adaptive gradient methods including RMSProp, Adam, L4-Adam, and AdaBound on extensive benchmark datasets.""","""The paper is rejected based on unanimous reviews."""
103,"""HiLLoC: lossless image compression with hierarchical latent variable models""","['compression', 'variational inference', 'lossless compression', 'deep latent variable models']","""We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model. We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based 'Bits-Back with ANS' algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.""","""The paper proposes a lossless image compression consisting of a hierarchical VAE and using a bits-back version of ANS. Compared to previous work, the paper (i) improves the compression rate performance by adapting the discretization of latent space required for the entropy coder ANS (ii) increases compression speed by implementing a vectorized version of ANS (iii) shows that a model trained on a low-resolution imagenet 32 dataset can generalize its compression capabilities to higher resolution. The authors addressed properly reviewers' concerns. Main critics which remain are (i) the method is not practical yet (long compression time) (ii) results are not state of the art - but the contribution is nevertheless solid."""
104,"""Making Sense of Reinforcement Learning and Probabilistic Inference""","['Reinforcement learning', 'Bayesian inference', 'Exploration']","""Reinforcement learning (RL) combines a control problem with statistical estimation: The system dynamics are not known to the agent, but can be learned through experience. A recent line of research casts RL as inference and suggests a particular framework to generalize the RL problem as probabilistic inference. Our paper surfaces a key shortcoming in that approach, and clarifies the sense in which RL can be coherently cast as an inference problem. In particular, an RL agent must consider the effects of its actions upon future rewards and observations: The exploration-exploitation tradeoff. In all but the most simple settings, the resulting inference is computationally intractable so that practical RL algorithms must resort to approximation. We demonstrate that the popular RL as inference approximation can perform poorly in even very basic problems. However, we show that with a small modification the framework does yield algorithms that can provably perform well, and we show that the resulting algorithm is equivalent to the recently proposed K-learning, which we further connect with Thompson sampling. ""","""The paper explores in more detail the ""RL as inference"" viewpoint and highlights some issues with this approach, as well as ways to address these issues. The new version of the paper has effectively addressed some of the reviewers' initial concerns, resulting in an overall well-written paper with interesting insights."""
105,"""Towards Principled Objectives for Contrastive Disentanglement""","['Disentanglement', 'Contrastive']","""Unsupervised learning is an important tool that has received a significant amount of attention for decades. Its goal is `unsupervised recovery,' i.e., extracting salient factors/properties from unlabeled data. Because of the challenges in defining salient properties, recently, `contrastive disentanglement' has gained popularity to discover the additional variations that are enhanced in one dataset relative to another. %In fact, contrastive disentanglement and unsupervised recovery are often combined in that we seek additional variations that exhibit salient factors/properties. Existing formulations have devised a variety of losses for this task. However, all present day methods exhibit two major shortcomings: (1) encodings for data that does not exhibit salient factors is not pushed to carry no signal; and (2) introduced losses are often hard to estimate and require additional trainable parameters. We present a new formulation for contrastive disentanglement which avoids both shortcomings by carefully formulating a probabilistic model and by using non-parametric yet easily computable metrics. We show on four challenging datasets that the proposed approach is able to better disentangle salient factors. ""","""The paper proposes new regularizations on contrastive disentanglement. After reading the author's response, all the reviewers still think that the contribution is too limited and all agree to reject."""
106,"""Never Give Up: Learning Directed Exploration Strategies""","['deep reinforcement learning', 'exploration', 'intrinsic motivation']","""We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies. We construct an episodic memory-based intrinsic reward using k-nearest neighbors over the agent's recent experience to train the directed exploratory policies, thereby encouraging the agent to repeatedly revisit all states in its environment. A self-supervised inverse dynamics model is used to train the embeddings of the nearest neighbour lookup, biasing the novelty signal towards what the agent can control. We employ the framework of Universal Value Function Approximators to simultaneously learn many directed exploration policies with the same neural network, with different trade-offs between exploration and exploitation. By using the same neural network for different degrees of exploration/exploitation, transfer is demonstrated from predominantly exploratory policies yielding effective exploitative policies. The proposed method can be incorporated to run with modern distributed RL agents that collect large amounts of experience from many actors running in parallel on separate environment instances. Our method doubles the performance of the base agent in all hard exploration in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human normalised score of 1344.0%. Notably, the proposed method is the first algorithm to achieve non-zero rewards (with a mean score of 8,400) in the game of Pitfall! without using demonstrations or hand-crafted features.""","""This paper tackles hard-exploration RL problems. The idea is to learn separate exploration and exploitation strategies using the same network (representation). The exploration is driven by intrinsic rewards, which are generated using an episodic memory and a lifelong novelty modules. Several experiments (simple and Atari domains) show that the proposed approach compares favourably with the baselines. The work is novel both in terms of the episodic curiosity metric and its integration with the life-long curiosity metric, and the results are convincing. All reviewers being positive about this paper, I therefore recommend acceptance."""
107,"""Continual Learning with Adaptive Weights (CLAW)""",['Continual learning'],"""Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner. Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario. A key modelling decision is to what extent the architecture should be shared across tasks. On the one hand, separately modelling each task avoids catastrophic forgetting but it does not support transfer learning and leads to large models. On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but it is vulnerable to catastrophic forgetting and restricts the form of task-transfer that can occur. Ideally, the network should adaptively identify which parts of the network to share in a data driven way. Here we introduce such an approach called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference. Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting. ""","""The paper proposes a new variational-inference-based continual learning algorithm with strong performance. There was some disagreement in the reviews, with perhaps the one shared concern being the complexity of the proposed method. One reviewer brought up other potentially related work, but this was convincingly rebutted by the authors. Finally, one reviewer had an issue with the simplicity with the networks in the experiments, but the authors rightly pointed out that the architectures were simply designed to match those from the baselines. Continual learning has been an active area for quite some time and convincingly achieving SOTA in a new way is a strong contribution, and will be of interest to the community. Progress in a field is sometimes made by iteratively simplifying an initially complex solution, and this work lays in a brick in that direction. For these reasons, I recommend acceptance. """
108,"""Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models""","['regularization', 'finetuning', 'dropout', 'dropconnect', 'adaptive L2-penalty', 'BERT', 'pretrained language model']","""In natural language processing, it has been observed recently that generalization could be greatly improved by finetuning a large-scale language model pretrained on a large unlabeled corpus. Despite its recent success and wide adoption, finetuning a large pretrained language model on a downstream task is prone to degenerate performance when there are only a small number of training instances available. In this paper, we introduce a new regularization technique, to which we refer as mixout, motivated by dropout. Mixout stochastically mixes the parameters of two models. We show that our mixout technique regularizes learning to minimize the deviation from one of the two models and that the strength of regularization adapts along the optimization trajectory. We empirically evaluate the proposed mixout and its variants on finetuning a pretrained language model on downstream tasks. More specifically, we demonstrate that the stability of finetuning and the average accuracy greatly increase when we use the proposed approach to regularize finetuning of BERT on downstream tasks in GLUE.""","""This paper presents mixout, a regularization method that stochastically mixes parameters of a pretrained language model and a target language model. Experiments on GLUE show that the proposed technique improves the stability and accuracy of finetuning a pretrained BERT on several downstream tasks. The paper is well written and the proposed idea is applicable in many settings. The authors have addressed reviewers concerns' during the rebuttal period and all reviewers are now in agreement that this paper should be accepted. I think this paper would be a good addition to ICLR and recommend to accept it. """
109,"""Undersensitivity in Neural Reading Comprehension""","['reading comprehension', 'undersensitivity', 'adversarial questions', 'adversarial training', 'robustness', 'biased data setting']","""Neural reading comprehension models have recently achieved impressive gener- alisation results, yet still perform poorly when given adversarially selected input. Most prior work has studied semantically invariant text perturbations which cause a models prediction to change when it should not. In this work we focus on the complementary problem: excessive prediction undersensitivity where input text is meaningfully changed, and the models prediction does not change when it should. We formulate a noisy adversarial attack which searches among semantic variations of comprehension questions for which a model still erroneously pro- duces the same answer as the original question and with an even higher prob- ability. We show that despite comprising unanswerable questions SQuAD2.0 and NewsQA models are vulnerable to this attack and commit a substantial frac- tion of errors on adversarially generated questions. This indicates that current modelseven where they can correctly predict the answerrely on spurious sur- face patterns and are not necessarily aware of all information provided in a given comprehension question. Developing this further, we experiment with both data augmentation and adversarial training as defence strategies: both are able to sub- stantially decrease a models vulnerability to undersensitivity attacks on held out evaluation data. Finally, we demonstrate that adversarially robust models gener- alise better in a biased data setting with a train/evaluation distribution mismatch; they are less prone to overly rely on predictive cues only present in the training set and outperform a conventional model in the biased data setting by up to 11% F1.""","""The paper investigates the sensitivity of a QA model to perturbations in the input, by replacing content words, such as named entities and nouns, in questions to make the question not answerable by the document. Experimental analysis demonstrates while the original QA performance is not hurt, the models become significantly less vulnerable to such attacks. Reviewers all agree that the paper includes a thorough analysis, at the same time they all suggested extensions to the paper, such as comparison to earlier work, experimental results, which the authors made in the revision. However, reviewers also question the novelty of the approach, given data augmentation methods. Hence, I suggest rejecting the paper."""
110,"""A novel Bayesian estimation-based word embedding model for sentiment analysis""","['sentiment analysis', 'sentiment word embeddings', 'maximum likelihood estimation', 'Bayesian estimation']","""The word embedding models have achieved state-of-the-art results in a variety of natural language processing tasks. Whereas, current word embedding models mainly focus on the rich semantic meanings while are challenged by capturing the sentiment information. For this reason, we propose a novel sentiment word embedding model. In line with the working principle, the parameter estimating method is highlighted. On the task of semantic and sentiment embeddings, the parameters in the proposed model are determined by using both the maximum likelihood estimation and the Bayesian estimation. Experimental results show the proposed model significantly outperforms the baseline methods in sentiment analysis for low-frequency words and sentences. Besides, it is also effective in conventional semantic and sentiment analysis tasks.""","""This paper proposes a method to improve word embedding by incorporating sentiment probabilities. Reviewer appreciate the interesting and simple approach and acknowledges improved results in low-frequency words. However, reviewers find that the paper is lacking in two major aspects: 1) Writing is unclear, and thus it is difficult to understand and judge the contributions of this research. 2) Perhaps because of 1, it is not convincing that the improvements are significant and directly resulting from the modeling contributions. I thank the authors for submitting this work to ICLR, and I hope that the reviewers' comments are helpful in improving this research for future submission."""
111,"""Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving""","['pseudo-LiDAR', '3D-object detection', 'stereo depth estimation', 'autonomous driving']","""Detecting objects such as cars and pedestrians in 3D plays an indispensable role in autonomous driving. Existing approaches largely rely on expensive LiDAR sensors for accurate depth information. While recently pseudo-LiDAR has been introduced as a promising alternative, at a much lower cost based solely on stereo images, there is still a notable performance gap. In this paper we provide substantial advances to the pseudo-LiDAR framework through improvements in stereo depth estimation. Concretely, we adapt the stereo network architecture and loss function to be more aligned with accurate depth estimation of faraway objects --- currently the primary weakness of pseudo-LiDAR. Further, we explore the idea to leverage cheaper but extremely sparse LiDAR sensors, which alone provide insufficient information for 3D detection, to de-bias our depth estimation. We propose a depth-propagation algorithm, guided by the initial depth estimates, to diffuse these few exact measurements across the entire depth map. We show on the KITTI object detection benchmark that our combined approach yields substantial improvements in depth estimation and stereo-based 3D object detection --- outperforming the previous state-of-the-art detection accuracy for faraway objects by 40%. Our code is available at pseudo-url.""","""Three knowledgable reviewers give a positive evaluation of the paper. The decision is to accept."""
112,"""Empirical Bayes Transductive Meta-Learning with Synthetic Gradients""","['Meta-learning', 'Empirical Bayes', 'Synthetic Gradient', 'Information Bottleneck']","""We propose a meta-learning approach that learns from multiple tasks in a transductive setting, by leveraging the unlabeled query set in addition to the support set to generate a more powerful model for each task. To develop our framework, we revisit the empirical Bayes formulation for multi-task learning. The evidence lower bound of the marginal log-likelihood of empirical Bayes decomposes as a sum of local KL divergences between the variational posterior and the true posterior on the query set of each task. We derive a novel amortized variational inference that couples all the variational posteriors via a meta-model, which consists of a synthetic gradient network and an initialization network. Each variational posterior is derived from synthetic gradient descent to approximate the true posterior on the query set, although where we do not have access to the true gradient. Our results on the Mini-ImageNet and CIFAR-FS benchmarks for episodic few-shot classification outperform previous state-of-the-art methods. Besides, we conduct two zero-shot learning experiments to further explore the potential of the synthetic gradient.""","""Three reviewers have assessed this paper and they have scored it 6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission."""
113,"""Neural Network Out-of-Distribution Detection for Regression Tasks""","['Out-of-distribution', 'deep learning', 'regression']","""Neural network out-of-distribution (OOD) detection aims to identify when a model is unable to generalize to new inputs, either due to covariate shift or anomalous data. Most existing OOD methods only apply to classification tasks, as they assume a discrete set of possible predictions. In this paper, we propose a method for neural network OOD detection that can be applied to regression problems. We demonstrate that the hidden features for in-distribution data can be described by a highly concentrated, low dimensional distribution. Therefore, we can model these in-distribution features with an extremely simple generative model, such as a Gaussian mixture model (GMM) with 4 or fewer components. We demonstrate on several real-world benchmark data sets that GMM-based feature detection achieves state-of-the-art OOD detection results on several regression tasks. Moreover, this approach is simple to implement and computationally efficient.""","""The paper investigates out-of-distribution detection for regression tasks. The reviewers raised several concerns about novelty of the method relative to existing methods, motivation & theoretical justification and clarity of the presentation (in particular, the discussion around regression vs classification). I encourage the authors to revise the draft based on the reviewers feedback and resubmit to a different venue. """
114,"""Data Valuation using Reinforcement Learning""","['Data valuation', 'Domain adaptation', 'Robust learning', 'Corrupted sample discovery']","""Quantifying the value of data is a fundamental problem in machine learning. Data valuation has multiple important use cases: (1) building insights about the learning task, (2) domain adaptation, (3) corrupted sample discovery, and (4) robust learning. To adaptively learn data values jointly with the target task predictor model, we propose a meta learning framework which we name Data Valuation using Reinforcement Learning (DVRL). We employ a data value estimator (modeled by a deep neural network) to learn how likely each datum is used in training of the predictor model. We train the data value estimator using a reinforcement signal of the reward obtained on a small validation set that reflects performance on the target task. We demonstrate that DVRL yields superior data value estimates compared to alternative methods across different types of datasets and in a diverse set of application scenarios. The corrupted sample discovery performance of DVRL is close to optimal in many regimes (i.e. as if the noisy samples were known apriori), and for domain adaptation and robust learning DVRL significantly outperforms state-of-the-art by 14.6% and 10.8%, respectively. ""","""The paper suggests an RL-based approach to design a data valuation estimator. The reviewers agree that the proposed method is new and promising, but they also raised concerns about the empirical evaluations, including not comparing with other approaches of data valuation and limited ablation study. The authors provided a rebuttal to address these concerns. It improves the evaluation of one of the reviewers, but it is difficult to recommend acceptance given that we did not have a champion for this paper and the overall score is not high enough."""
115,"""Continual Learning via Principal Components Projection""","['Neural network', 'continual learning', 'catastrophic forgetting', 'lifelong learning']","""Continual learning in neural networks (NN) often suffers from catastrophic forgetting. That is, when learning a sequence of tasks on an NN, the learning of a new task will cause weight changes that may destroy the learned knowledge embedded in the weights for previous tasks. Without solving this problem, it is difficult to use an NN to perform continual or lifelong learning. Although researchers have attempted to solve the problem in many ways, it remains to be challenging. In this paper, we propose a new approach, called principal components projection (PCP). The idea is that in learning a new task, if we can ensure that the gradient updates will only occur in the orthogonal directions to the input vectors of the previous tasks, then the weight updates for learning the new task will not affect the previous tasks. We propose to compute the principal components of the input vectors and use them to transform the input and to project the gradient updates for learning each new task. PCP does not need to store any sampled data from previous tasks or to generate pseudo data of previous tasks and use them to help learn a new task. Empirical evaluation shows that the proposed method PCP markedly outperforms the state-of-the-art baseline methods.""","""There is no author response for this paper. The paper addresses the issue of catastrophic forgetting in continual learning. The authors build upon the idea from [Zheng,2019], namely finding gradient updates in the space perpendicular to the input vectors of the previous tasks resulting in less forgetting, and propose an improvement, namely to use principal component analysis to enable learning new tasks without restricting their solution space as in [Zheng,2019]. While the reviewers acknowledge the importance to study continual learning, they raised several concerns that were viewed by the AC as critical issues: (1) convincing experimental evaluation -- an analysis that clearly shows how and when the proposed method can solve the issue that [Zheng,2019] faces with (task similarity/dissimilarity scenario) would substantially strengthen the evaluation and would allow to assess the scope and contributions of this work; also see R3s detailed concerns and questions on empirical evaluation, R2s suggestion to follow the standard protocols, and R1s suggestion to use PackNet and HAT as baselines for comparison; (2) lack of presentation clarity -- see R2s concerns how to improve, and R1s suggestions on how to better position the paper. A general consensus among reviewers and AC suggests, in its current state the manuscript is not ready for a publication. It needs clarifications, more empirical studies and polish to achieve the desired goal. """
116,"""Using Hindsight to Anchor Past Knowledge in Continual Learning""","['Continual Learning', 'Lifelong Learning', 'Catastrophic Forgetting']","""In continual learning, the learner faces a stream of data whose distribution changes over time. Modern neural networks are known to suffer under this setting, as they quickly forget previously acquired knowledge. To address such catastrophic forgetting, state-of-the-art continual learning methods implement different types of experience replay, re-learning on past data stored in a small buffer known as episodic memory. In this work, we complement experience replay with a meta-learning technique that we call anchoring: the learner updates its knowledge on the current task, while keeping predictions on some anchor points of past tasks intact. These anchor points are learned using gradient-based optimization as to maximize forgetting of the current task, in hindsight, when the learner is fine-tuned on the episodic memory of past tasks. Experiments on several supervised learning benchmarks for continual learning demonstrate that our approach improves the state of the art in terms of both accuracy and forgetting metrics and for various sizes of episodic memories. ""","""This paper proposes a continual learning method that uses anchor points for experience replay. Anchor points are learned with gradient-based optimization to maximize forgetting on the current task. Experiments MNIST, CIFAR, and miniImageNet show the benefit of the proposed approach. As noted by other reviewers, there are some grammatical issues with the paper. It is missing some important details in the experiments. It is unclear to me whether the five random seeds how the datasets (tasks) are ordered in the experiments. Do the five random seeds correspond to five different dataset orderings? I think it would also be very interesting to see the anchor points that are chosen in practice. This issue is brought up by R4, and the authors responded that anchor points do not correspond to classes. Since the main idea of this paper is based on anchor points, it would be nice to analyze further to get a better understanding what they represent. Finally, the authors only evaluate their method on image classification. While I believe the technique can be applied in other domains (e.g., reinforcement learning, natural language processing) with some modifications, without providing concrete empirical evidence in the paper, the authors need to clearly state that their proposed method is only evaluated on image classification and not sell it as a general method (yet). The authors also miss citations to some prior work on memory-based parameter adaptation and its variants. Regardless all of the above issues, this is still a borderline paper. However, due to space constraint, I recommend to reject this paper for ICLR."""
117,"""Which Tasks Should Be Learned Together in Multi-task Learning?""","['multi-task learning', 'Computer Vision']","""Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using 'multi-task learning'. This saves computation at inference time as only a single network needs to be evaluated. Unfortunately, this often leads to inferior overall performance as task objectives compete, which consequently poses the question: which tasks should and should not be learned together in one network when employing multi-task learning? We systematically study task cooperation and competition and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same neural network, while competing tasks are computed by different networks. Our framework offers a time-accuracy trade-off and can produce better accuracy using less inference time than not only a single large multi-task neural network but also many single-task networks. ""","""An approach to make multi-task learning is presented, based on the idea of assigning tasks through the concepts of cooperation and competition. The main idea is well-motivated and explained well. The experiments demonstrate that the method is promising. However, there are a few concerns regarding fundamental aspects, such as: how are the decisions affected by the number of parameters? Could ad-hoc algorithms with human in the loop provide the same benefit, when the task-set is small? More importantly, identifying task groups for multi-task learning is an idea presented in prior work, e.g. [1,2,3]. This important body of prior work is not discussed at all in this paper. [1] Han and Zhang. ""Learning multi-level task groups in multi-task learning"" [2] Bonilla et al. ""Multi-task Gaussian process prediction"" [3] Zhang and Yang. ""A Survey on Multi-Task Learning"" """
118,"""HUBERT Untangles BERT to Improve Transfer across NLP Tasks""","['Tensor Product Representation', 'BERT', 'Transfer Learning', 'Neuro-Symbolic Learning']","""We introduce HUBERT which combines the structured-representational power of Tensor-Product Representations (TPRs) and BERT, a pre-trained bidirectional transformer language model. We validate the effectiveness of our model on the GLUE benchmark and HANS dataset. We also show that there is shared structure between different NLP datasets which HUBERT, but not BERT, is able to learn and leverage. Extensive transfer-learning experiments are conducted to confirm this proposition.""","""The paper introduces additional layers on top BERT type models for disentangling of semantic and positional information. The paper demonstrates (small) performance gains in transfer learning compared to pure BERT baseline. Both reviewers and authors have engaged in a constructive discussion of the merits of the proposed method. Although the reviewers appreciate the ideas and parts of the paper the consensus among the reviewers is that the evaluation of the method is not clearcut enough to warrant publication. Rejection is therefore recommended. Given the good ideas presented in the paper and the promising results the authors are encouraged to take the feedback into account and submit to the next ML conference. """
119,"""Towards Stable and comprehensive Domain Alignment: Max-Margin Domain-Adversarial Training""","['domain adaptation', 'transfer learning', 'adversarial training']",""" Domain adaptation tackles the problem of transferring knowledge from a label-rich source domain to an unlabeled or label-scarce target domain. Recently domain-adversarial training (DAT) has shown promising capacity to learn a domain-invariant feature space by reversing the gradient propagation of a domain classifier. However, DAT is still vulnerable in several aspects including (1) training instability due to the overwhelming discriminative ability of the domain classifier in adversarial training, (2) restrictive feature-level alignment, and (3) lack of interpretability or systematic explanation of the learned feature space. In this paper, we propose a novel Max-margin Domain-Adversarial Training (MDAT) by designing an Adversarial Reconstruction Network (ARN). The proposed MDAT stabilizes the gradient reversing in ARN by replacing the domain classifier with a reconstruction network, and in this manner ARN conducts both feature-level and pixel-level domain alignment without involving extra network structures. Furthermore, ARN demonstrates strong robustness to a wide range of hyper-parameters settings, greatly alleviating the task of model selection. Extensive empirical results validate that our approach outperforms other state-of-the-art domain alignment methods. Additionally, the reconstructed target samples are visualized to interpret the domain-invariant feature space which conforms with our intuition. ""","""This paper proposes max-margin domain adversarial training with an adversarial reconstruction network that stabilizes the gradient by replacing the domain classifier. Reviewers and AC think that the method is interesting and motivation is reasonable. Concerns were raised regarding weak experimental results in the diversity of datasets and the comparison to state-of-the-art methods. The paper needs to show how the method works with respect to stability and interpretability. The paper should also clearly relate the contrastive loss for reconstruction to previous work, given that both the loss and the reconstruction idea have been extensively explored for DA. Finally, the theoretical analysis is shallow and the gap between the theory and the algorithm needs to be closed. Overall this is a borderline paper. Considering the bar of ICLR and limited quota, I recommend rejection."""
120,"""Do Image Classifiers Generalize Across Time?""","['robustness', 'image classification', 'distribution shift']","""We study the robustness of image classifiers to temporal perturbations derived from videos. As part of this study, we construct ImageNet-Vid-Robust and YTBB-Robust, containing a total 57,897 images grouped into 3,139 sets of perceptually similar images. Our datasets were derived from ImageNet-Vid and Youtube-BB respectively and thoroughly re-annotated by human experts for image similarity. We evaluate a diverse array of classifiers pre-trained on ImageNet and show a median classification accuracy drop of 16 and 10 percent on our two datasets. Additionally, we evaluate three detection models and show that natural perturbations induce both classification as well as localization errors, leading to a median drop in detection mAP of 14 points. Our analysis demonstrates that perturbations occurring naturally in videos pose a substantial and realistic challenge to deploying convolutional neural networks in environments that require both reliable and low-latency predictions.""","""This paper proposed to evaluate the robustness of CNN models on similar video frames. The authors construct two carefully labeled video databases. Based on extensive experiments, they conclude that the state of the art classification and detection models are not robust when testing on very similar video frames. While Reviewer #1 is overall positive about this work, Reviewer #2 and #3 rated weak reject with various concerns. Reviewer #2 concerns limited contribution since the results are similar to our intuition. Reviewer #3 appreciates the value of the databases, but concerns that the defined metrics make the contribution look huge. The authors and Reviewer #3 have in-depth discussion on the metric, and Reviewer #3 is not convinced. Given the concerns raised by the reviewers, the ACs agree that this paper can not be accepted at its current state."""
121,"""Sparse Networks from Scratch: Faster Training without Losing Performance""","['sparse learning', 'sparse networks', 'sparsity', 'efficient deep learning', 'efficient training']","""We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. ""","""This paper presents a method for training sparse neural networks that also provides a speedup during training, in contrast to methods for training sparse networks which train dense networks (at normal speed) and then prune weights. The method provides modest theoretical speedups during training, never measured in wallclock time. The authors improved their paper considerably in response to the reviews. I would be inclined to accept this paper despite not being a big win empirically, however a couple points of sloppiness pointed out (and maintained post-rebuttal) by R1 tip the balance to reject, in my opinion. Specifically: 1) ""I do not agree that keeping the learning rate fixed across methods is the right approach."" This seems like a major problem with the experiments to me. 2) ""I would request the authors to slightly rewrite certain parts of their paper so as not to imply that momentum decreases the variance of the gradients in general."" I agree."""
122,"""Safe Policy Learning for Continuous Control""","['reinforcement learning', 'policy gradient', 'safety']","""We study continuous action reinforcement learning problems in which it is crucial that the agent interacts with the environment only through safe policies, i.e.,~policies that keep the agent in desirable situations, both during training and at convergence. We formulate these problems as {\em constrained} Markov decision processes (CMDPs) and present safe policy optimization algorithms that are based on a Lyapunov approach to solve them. Our algorithms can use any standard policy gradient (PG) method, such as deep deterministic policy gradient (DDPG) or proximal policy optimization (PPO), to train a neural network policy, while guaranteeing near-constraint satisfaction for every policy update by projecting either the policy parameter or the selected action onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints. Compared to the existing constrained PG algorithms, ours are more data efficient as they are able to utilize both on-policy and off-policy data. Moreover, our action-projection algorithm often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with the state-of-the-art baselines on several simulated (MuJoCo) tasks, as well as a real-world robot obstacle-avoidance problem, demonstrating their effectiveness in terms of balancing performance and constraint satisfaction.""","""The paper is about learning policies in RL while ensuring safety (avoid constraint violations) during training and testing. For this meta review, I ignore Reviewer #3 because that review is useless. The discussion between the authors and Reviewer #1 was useful. Overall, the paper introduces an interesting idea, and the wider context (safe learning) is very relevant. However, I also have some concerns. One of my biggest concerns is that the method proposed here relies heavily on linearizations to deal with nonlinearities. However, the fact that this leads to approximation errors is not being acknowledged much. There are also small things, such as the (average) KL divergence between parameters, which makes no sense to me because the parameters don't have distributions (section 3.1). In terms of experiments, I appreciate that the authors tested the proposed method on multiple environments. The results, however, show that safety cannot be guaranteed. For example, in Figure 1(c), SDDPG clearly violates the constraints. The figures are also misleading because they show the summary statistics of the trajectories (mean and standard deviation). If we were to look at individual trajectories, we would find trajectories that violate the constraints. This fact is brushed under the carpet in the evaluation, and the paper even claims that ""our algorithms quickly stabilize the constraint cost below the threshold"". This may be true on average, but not for all trajectories. A more careful analysis and a more honest discussion would have been useful. In the robotics experiment, I would like to understand why we allow for any collisions. Why can't we set pseudo-formula , thereby disallowing for collisions. The threshold in the paper looks pretty arbitrary. Again, the paper states that ""Figure 4a and Figure 4b show that the Lyapunov-based PG algorithms have higher success rates"". 
This is a pretty optimistic interpretation of the figure given the size of the error bars. There are some points in the conclusion that I also disagree with: 1) ""achieve safe learning"": given that some trajectories violate the constraints, ""safe"" is maybe a bit of an overstatement; 2) ""better data efficiency"": compared to what? 3) ""scalable to tackle real-world problems"": I disagree with this one as well, because for all experiments you will need to run an excessive number of trials, which will not be feasible on a real-world system (assuming we are talking about robots). Overall, I think the paper has some potential, but it needs some more careful theoretical analysis (e.g., the effect of linearization errors) and better empirical analysis. Additionally, given that the paper is at around 9 pages (including the figures in the appendix, which the main paper cites), we are supposed to hold it to higher standards of acceptance than an 8-page paper. Therefore, I recommend rejecting this paper."""
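A hedged sketch of the action-projection step the abstract describes, reduced to its simplest case: given an action proposed by the policy and a single linearized constraint g·a + c <= 0 around the current state, project the action onto that half-space in closed form. The notation and numbers are ours; the paper's constraints are state-dependent Lyapunov constraints.

```python
import numpy as np

def project_action(a, g, c):
    """Closest action to a (in L2) satisfying the half-space g·a + c <= 0."""
    violation = g @ a + c
    if violation <= 0:
        return a                          # already feasible
    return a - (violation / (g @ g)) * g  # move along g just enough

a = np.array([0.8, -0.2])   # action proposed by the policy
g = np.array([1.0, 1.0])    # gradient of the linearized safety constraint
c = -0.3                    # constraint offset
a_safe = project_action(a, g, c)
print(a_safe, g @ a_safe + c)   # constraint now satisfied (= 0 at the boundary)
```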
123,"""A Constructive Prediction of the Generalization Error Across Scales""","['neural networks', 'deep learning', 'generalization error', 'scaling', 'scalability', 'vision', 'language']","""The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks. Nevertheless, the functional form of this dependency remains elusive. In this work, we present a functional form which approximates well the generalization error in practice. Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales. Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks. We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.""","""The paper presents a very interesting idea for estimating the held-out error of deep models as a function of model and data set size. The authors intuit what the shape of the error should be, then they fit the parameters of a function of the desired shape and show that this has predictive power. I find this idea quite refreshing and the paper is well written with good experiments. Please make sure that the final version contains the cross-validation results provided during the rebuttal."""
124,"""Calibration, Entropy Rates, and Memory in Language Models""","['information theory', 'natural language processing', 'calibration']","""Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing. Towards this end, we present a calibration-based approach to measure long-term discrepancies between a generative sequence model and the true distribution, and use these discrepancies to improve the model. Empirically, we show that state-of-the-art language models, including LSTMs and Transformers, are \emph{miscalibrated}: the entropy rates of their generations drift dramatically upward over time. We then provide provable methods to mitigate this phenomenon. Furthermore, we show how this calibration-based approach can also be used to measure the amount of memory that language models use for prediction.""","""This paper shows empirically that the state-of-the-art language models have a problem of increasing entropy when generating long sequences. The paper then proposes a method to mitigate this problem. As the authors re-iterated through their rebuttal, this paper approaches this problem theoretically, rather than through a comprehensive set of empirical comparisons. After discussions among the reviewers, this paper is not recommended to be accepted. Some skepticism and concerns remain as to whether the paper makes sufficiently clear and proven theoretical contributions. We all appreciate the approach and potential of this paper and encourage the authors to re-submit a revision to a future related venue."""
125,"""Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation""","['spiking neural networks', 'ann-snn conversion', 'spike-based backpropagation', 'imagenet']","""Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes) which can potentially lead to higher energy-efficiency in neuromorphic hardware implementations. Many works have shown that an SNN for inference can be formed by copying the weights from a trained Artificial Neural Network (ANN) and setting the firing threshold for each layer as the maximum input received in that layer. These type of converted SNNs require a large number of time steps to achieve competitive accuracy which diminishes the energy savings. The number of time steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally-efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within few epochs and requires fewer time steps for input processing. STDB is performed with a novel surrogate gradient function defined using neuron's spike time. The weight update is proportional to the difference in spike timing between the current time step and the most recent time step the neuron generated an output spike. The SNNs trained with our hybrid conversion-and-STDB training perform at pseudo-formula fewer number of time steps and achieve similar accuracy compared to purely converted SNNs. The proposed training methodology converges in less than pseudo-formula epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch. We perform experiments on CIFAR-10, CIFAR-100 and ImageNet datasets for both VGG and ResNet architectures. We achieve top-1 accuracy of pseudo-formula for ImageNet dataset on SNN with pseudo-formula time steps, which is pseudo-formula faster compared to converted SNNs with similar accuracy.""","""After the rebuttal, all reviewers rated this paper as a weak accept. The reviewer leaning towards rejection was satisfied with the author response and ended up raising their rating to a weak accept. The AC recommends acceptance."""
126,"""Modeling Winner-Take-All Competition in Sparse Binary Projections""","['Sparse Representation', 'Sparse Binary Projection', 'Winner-Take-All']","""Inspired by the advances in biological science, the study of sparse binary projection models has attracted considerable recent research attention. The models project dense input samples into a higher-dimensional space and output sparse binary data representations after Winner-Take-All competition, subject to the constraint that the projection matrix is also sparse and binary. Following the work along this line, we developed a supervised-WTA model when training samples with both input and output representations are available, from which the optimal projection matrix can be obtained with a simple, efficient yet effective algorithm. We further extended the model and the algorithm to an unsupervised setting where only the input representation of the samples is available. In a series of empirical evaluation on similarity search tasks, the proposed models reported significantly improved results over the state-of-the-art methods in both search accuracy and running time. The successful results give us strong confidence that the work provides a highly practical tool to real world applications. ""","""This paper proposes a WTA models for binary projection. While there are notable partial contributions, there is disagreement among the reviewers. I am most persuaded by the concern expressed that the experiments are not done on datasets that are large enough to be state-of-the-art compared to other random projection investigations."""
127,"""Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint""","['Neural Networks', 'Generalization', 'High-dimensional Statistics']","""This paper investigates the generalization properties of two-layer neural networks in high-dimensions, i.e. when the number of samples pseudo-formula , features pseudo-formula , and neurons pseudo-formula tend to infinity at the same rate. Specifically, we derive the exact population risk of the unregularized least squares regression problem with two-layer neural networks when either the first or the second layer is trained using a gradient flow under different initialization setups. When only the second layer coefficients are optimized, we recover the \textit{double descent} phenomenon: a cusp in the population risk appears at n$ and further overparameterization decreases the risk. In contrast, when the first layer weights are optimized, we highlight how different scales of initialization lead to different inductive bias, and show that the resulting risk is \textit{independent} of overparameterization. Our theoretical and experimental results suggest that previously studied model setups that provably give rise to \textit{double descent} might not translate to optimizing two-layer neural networks.""","""This paper focuses on studying the double descent phenomenon in a one layer neural network training in an asymptotic regime where various dimensions go to infinity together with fixed ratios. The authors provide precise asymptotic characterization of the risk and use it to study various phenomena. In particular they characterize the role of various scales of the initialization and their effects. The reviewers all agree that this is an interesting paper with nice contributions. I concur with this assessment. I think this is a solid paper with very precise and concise theory. I recommend acceptance."""
128,"""Reducing Computation in Recurrent Networks by Selectively Updating State Neurons""","['recurrent neural networks', 'conditional computation', 'representation learning']","""Recurrent Neural Networks (RNN) are the state-of-the-art approach to sequential learning. However, standard RNNs use the same amount of computation at each timestep, regardless of the input data. As a result, even for high-dimensional hidden states, all dimensions are updated at each timestep regardless of the recurrent memory cell. Reducing this rigid assumption could allow for models with large hidden states to perform inference more quickly. Intuitively, not all hidden state dimensions need to be recomputed from scratch at each timestep. Thus, recent methods have begun studying this problem by imposing mainly a priori-determined patterns for updating the state. In contrast, we now design a fully-learned approach, SA-RNN, that augments any RNN by predicting discrete update patterns at the fine granularity of independent hidden state dimensions through the parameterization of a distribution of update-likelihoods driven entirely by the input data. We achieve this without imposing assumptions on the structure of the update pattern. Better yet, our method adapts the update patterns online, allowing different dimensions to be updated conditional to the input. To learn which to update, the model solves a multi-objective optimization problem, maximizing accuracy while minimizing the number of updates based on a unified control. Using publicly-available datasets we demonstrate that our method consistently achieves higher accuracy with fewer updates compared to state-of-the-art alternatives. Additionally, our method can be directly applied to a wide variety of models containing RNN architectures.""","""This paper introduces a new RNN architecture which uses a small network to decide which cells get updated at each time step, with the goal of reducing computational cost. The idea makes sense, although it requires the use of a heuristic gradient estimator because of the non-differentiability of the update gate. The main problem with this paper in my view is that the reduction in FLOPS was not demonstrated to correspond to a reduction in wallclock time, and I don't expect it would, since the sparse updates are different for each example in each batch, and only affect one hidden unit at a time. The only discussion of this problem is ""we compute the FLOPs for each method as a surrogate for wall-clock time, which is hardware-dependent and often fluctuates dramatically in practice."" Because this method reduces predictive accuracy, the reduction in FLOPS should be worth it! Minor criticism: 1) Figure 1 is confusing, showing not the proposed architecture in general but instead the connections remaining after computing the sparse updates. """
129,"""Deep Innovation Protection""","['Neuroevolution', 'innovation protection', 'world models', 'genetic algorithm']","""Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels. This paper presents a method called Deep Innovation Protection (DIP) that allows training complex world models end-to-end for such 3D environments. The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in a world model, allowing other components to adapt. We investigate the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss. ""","""This paper is a very borderline case. Mixed reviews. R2 score originally 4, moved to 5 (rounded up to WA 6), but still borderline. R1 was 6 (WA) and R3 was 3 (WR). R2 expert on this topic, R1 and R3 less so. AC has carefully read the reviews/rebuttal/comments and looked closely at the paper. AC feels that R2's review is spot on and that the contribution does not quite reach ICLR acceptance level, despite it being interesting work. So the AC feels the paper cannot be accepted at this time. But the work is definitely interesting -- the authors should improve their paper using R2's comments and resubmit. """
130,"""Selection via Proxy: Efficient Data Selection for Deep Learning""","['data selection', 'active-learning', 'core-set selection', 'deep learning', 'uncertainty sampling']","""Data selection methods, such as active learning and core-set selection, are useful tools for machine learning on large datasets. However, they can be prohibitively expensive to apply in deep learning because they depend on feature representations that need to be learned. In this work, we show that we can greatly improve the computational efficiency by using a small proxy model to perform data selection (e.g., selecting data points to label for active learning). By removing hidden layers from the target model, using smaller architectures, and training for fewer epochs, we create proxies that are an order of magnitude faster to train. Although these small proxy models have higher error rates, we find that they empirically provide useful signals for data selection. We evaluate this ""selection via proxy"" (SVP) approach on several data selection tasks across five datasets: CIFAR10, CIFAR100, ImageNet, Amazon Review Polarity, and Amazon Review Full. For active learning, applying SVP can give an order of magnitude improvement in data selection runtime (i.e., the time it takes to repeatedly train and select points) without significantly increasing the final error (often within 0.1%). For core-set selection on CIFAR10, proxies that are over 10 faster to train than their larger, more accurate targets can remove up to 50% of the data without harming the final accuracy of the target, leading to a 1.6 end-to-end training time improvement.""","""This paper proposes to perform sample selection for deep learning - which can be very computationally expensive - using a smaller and simpler proxy network. The paper shows that such proxies are faster to train and do not substantially harm the accuracy of the final network. The reviewers were all in agreement that the problem is important, and that the paper is comprehensive and well executed. I therefore recommend it should be accepted."""
131,"""Overcoming Catastrophic Forgetting via Hessian-free Curvature Estimates""","['catastrophic forgetting', 'multi-task learning', 'continual learning']","""Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to a poor performance on old tasks a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks that both limit the scalability of the task sequence orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM) changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires to calculate the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector-product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are en par with previous work.""","""The reviewers have provided thorough reviews of your work. I encourage you to read them carefully should you decide to resubmit it to a later conference."""
132,"""On the Invertibility of Invertible Neural Networks""","['Invertible Neural Networks', 'Stability', 'Normalizing Flows', 'Generative Models', 'Evaluation of Generative Models']","""Guarantees in deep learning are hard to achieve due to the interplay of flexible modeling schemes and complex tasks. Invertible neural networks (INNs), however, provide several mathematical guarantees by design, such as the ability to approximate non-linear diffeomorphisms. One less studied advantage of INNs is that they enable the design of bi-Lipschitz functions. This property has been used implicitly by various works to design generative models, memory-saving gradient computation, regularize classifiers, and solve inverse problems. In this work, we study Lipschitz constants of invertible architectures in order to investigate guarantees on stability of their inverse and forward mapping. Our analysis reveals that commonly-used INN building blocks can easily become non-invertible, leading to questionable ``exact'' log likelihood computations and training difficulties. We introduce a set of numerical analysis tools to diagnose non-invertibility in practice. Finally, based on our theoretical analysis, we show how to guarantee numerical invertibility for one of the most common INN architectures.""","""This submission analyses the numerical invertibility of analytically invertible neural networks and shows that analytical invertibility does not guarantee numerical invertibility of some invertible networks under certain conditions (e.g. adversarial perturbation). Strengths: -The work is interesting and the theoretical analysis is insightful. Weaknesses: -The main concern shared by all reviewers was the weakness of the experimental section including (i) insufficient motivation of the decorrelation task; (ii) missing comparisons and experimental settings. -The paper clarity could be improved. Both weaknesses were not sufficiently addressed in the rebuttal. All reviewer recommendations were borderline to reject. """
133,"""Differential Privacy in Adversarial Learning with Provable Robustness""","['differential privacy', 'adversarial learning', 'robustness bound', 'adversarial example']","""In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples. We leverage the sequential composition theory in DP, to establish a new connection between DP preservation and provable robustness. To address the trade-off among model utility, privacy loss, and robustness, we design an original, differentially private, adversarial objective function, based on the post-processing property in DP, to tighten the sensitivity of our model. An end-to-end theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of DP deep neural networks.""","""The authors propose a framework for relating adversarial robustness, privacy and utility and show how one can train models to simultaneously attain these properties. The paper also makes interesting connections between the DP literature and the robustness literature thereby porting over composition theorems to this new setting. The paper makes very interesting contributions, but a few key points require some improvement: 1) The initial version of the paper relied on an approximation of the objective function in order to obtain DP guarantees. While the authors clarified how the approximation impacts model performance in the rebuttal and revision, the reviewers still had concerns about the utility-privacy-robustness tradeoff achieved by the algorithm. 2) The presentation of the paper seems tailored to audiences familiar with DP and is not easy for a broader audience to follow. Despite this limitations, the paper does make significant novel contributions on an improtant problem (simultaneously achieveing privacy, robustness and utility) and could be of interest. Overall, I consider this paper borderline and vote for rejection, but strongly encourage the authors to improve the paper wrt the above concerns and resubmit to a future venue."""
134,"""Towards Physics-informed Deep Learning for Turbulent Flow Prediction""",[],"""While deep learning has shown tremendous success in a wide range of domains, it remains a grand challenge to incorporate physical principles in a systematic manner to the design, training, and inference of such models. In this paper, we aim to predict turbulent flow by learning its highly nonlinear dynamics from spatiotemporal velocity fields of large-scale fluid flow simulations of relevance to turbulence modeling and climate modeling. We adopt a hybrid approach by marrying two well-established turbulent flow simulation techniques with deep learning. Specifically, we introduce trainable spectral filters in a coupled model of Reynolds-averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES), followed by a specialized U-net for prediction. Our approach, which we call Turbulent-Flow Net (TF-Net), is grounded in a principled physics model, yet offers the flexibility of learned representations. We compare our model, TF-Net, with state-of-the-art baselines and observe significant reductions in error for predictions 60 frames ahead. Most significantly, our method predicts physical fields that obey desirable physical characteristics, such as conservation of mass, whilst faithfully emulating the turbulent kinetic energy field and spectrum, which are critical for accurate prediction of turbulent flows.""","""The reviewers all agree that this is an interesting paper with good results. The authors' rebuttal response was very helpful. However, given the competitiveness of the submissions this year, the submission did not make it. We encourage the authors to resubmit the work including the new results obtained during the rebuttal."""
135,"""Generalization through Memorization: Nearest Neighbor Language Models""","['language models', 'k-nearest neighbors']","""We introduce pseudo-formula NN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a pseudo-formula -nearest neighbors ( pseudo-formula NN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this transformation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our pseudo-formula NN-LM achieves a new state-of-the-art perplexity of 15.79 -- a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail.""","""This paper proposes an idea of using a pre-trained language model on a potentially smaller set of text, and interpolating it with a k-nearest neighbor model over a large datastore. The authors provide extensive evaluation and insightful results. Two reviewers vote for accepting the paper, and one reviewer is negative. After considering the points made by reviewers, the AC decided that the paper carries value for the community and should be accepted."""
136,"""Attacking Lifelong Learning Models with Gradient Reversion""","['lifelong learning', 'adversarial learning']","""Lifelong learning aims at avoiding the catastrophic forgetting problem of traditional supervised learning models. Episodic memory based lifelong learning methods such as A-GEM (Chaudhry et al., 2018b) are shown to achieve the state-of-the-art results across the benchmarks. In A-GEM, a small episodic memory is utilized to store a random subset of the examples from previous tasks. While the model is trained on a new task, a reference gradient is computed on the episodic memory to guide the direction of the current update. While A-GEM has strong continual learning ability, it is not clear that if it can retain the performance in the presence of adversarial attacks. In this paper, we examine the robustness ofA-GEM against adversarial attacks to the examples in the episodic memory. We evaluate the effectiveness of traditional attack methods such as FGSM and PGD.The results show that A-GEM still possesses strong continual learning ability in the presence of adversarial examples in the memory and simple defense techniques such as label smoothing can further alleviate the adversarial effects. We presume that traditional attack methods are specially designed for standard supervised learning models rather than lifelong learning models. we therefore propose a principled way for attacking A-GEM called gradient reversion(GREV) which is shown to be more effective. Our results indicate that future lifelong learning research should bear adversarial attacks in mind to develop more robust lifelong learning algorithms.""","""The paper investigates questions around adversarial attacks in a continual learning algorithm, i.e., A-GEM. While reviewers agree that this is a novel topic of great importance, the contributions are quite narrow, since only a single model (A-GEM) is considered and it is not immediately clear whether this method transfers to other lifelong learning models (or even other models that belong to the same family as A-GEM). This is an interesting submission, but at the moment due to its very narrow scope, it seems more appropriate as a workshop submission investigating a very particular question (that of attacking A-GEM). As such, I cannot recommend acceptance."""
137,"""Multilingual Alignment of Contextual Word Representations""","['multilingual', 'natural language processing', 'embedding alignment', 'BERT', 'word embeddings', 'transfer']","""We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT. In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching pseudo-fully-supervised translate-train models for Bulgarian and Greek. Further, to measure the degree of alignment, we introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer. Using this word retrieval task, we also analyze BERT and find that it exhibits systematic deficiencies, e.g. worse alignment for open-class parts-of-speech and word pairs written in different scripts, that are corrected by the alignment procedure. These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models.""","""This paper proposes a method to improve alignments of a multilingual contextual embedding model (e.g., multilingual BERT) using parallel corpora as an anchor. The authors show the benefit of their approach in a zero-shot XNLI experiment and present a word retrieval analysis to better understand multilingual BERT. All reviewers agree that this is an interesting paper with valuable contributions. The authors and reviewers have been engaged in a thorough discussion during the rebuttal period and the revised paper has addressed most of the reviewers concerns. I think this paper would be a good addition to ICLR so I recommend accepting this paper."""
138,"""Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells""","['Grid cell', 'space encoding', 'spatially explicit model', 'multi-scale periodic representation', 'unsupervised learning']","""Unsupervised text encoding models have recently fueled substantial progress in NLP. The key idea is to use neural networks to convert words in texts to vector space representations (embeddings) based on word positions in a sentence and their contexts, which are suitable for end-to-end training of downstream tasks. We see a strikingly similar situation in spatial analysis, which focuses on incorporating both absolute positions and spatial contexts of geographic objects such as POIs into models. A general-purpose representation model for space is valuable for a multitude of tasks. However, no such general model exists to date beyond simply applying discretization or feed-forwardnets to coordinates, and little effort has been put into jointly modeling distributions with vastly different characteristics, which commonly emerges from GIS data. Meanwhile, Nobel Prize-winning Neuroscience research shows that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding and is critical for recognizing places and for path-integration. Therefore, we propose a representation learning model called Space2Vec to encode the absolute positions and spatial relationships of places. We conduct experiments on two real-world geographic data for two different tasks: 1) predicting types of POIs given their positions and context, 2) image classification leveraging their geo-locations. Results show that because of its multi-scale representations, Space2Vec outperforms well-established ML approaches such as RBF kernels, multi-layer feed-forward nets, and tile embedding approaches for location modeling and image classification tasks. Detailed analysis shows that all baselines can at most well handle distribution at one scale but show poor performances in other scales. In contrast, Space2Vec s multi-scale representation can handle distributions at different scales.""","""This paper proposes to follow inspiration from NLP method that use position embeddings and adapt them to spatial analysis that also makes use of both absolute and contextual information, and presents a representation learning approach called space2vec to capture absolute positions and spatial relationships of places. Experiments show promising results on real data compared to a number of existing approaches. Reviewers recognize the promise of this approach and suggested a few additional experiments such as using this spatial encoding as part of other tasks such as image classification, as well as clarification and further explanations on many important points. Authors performed these experiments and incorporated the results in their revisions, further strengthening the submission. They also provided more analyses and explanations about the granularity of locality and motivation for their approach, which answered the main concerns of reviewers. Overall, the revised paper is solid and we recommend acceptance."""
139,"""Understanding Why Neural Networks Generalize Well Through GSNR of Parameters""","['DNN', 'generalization', 'GSNR', 'gradient descent']","""As deep neural networks (DNNs) achieve tremendous success across many application domains, researchers tried to explore in many aspects on why they generalize well. In this paper, we provide a novel perspective on these issues using the gradient signal to noise ratio (GSNR) of parameters during training process of DNNs. The GSNR of a parameter is simply defined as the ratio between its gradient's squared mean and variance, over the data distribution. Based on several approximations, we establish a quantitative relationship between model parameters' GSNR and the generalization gap. This relationship indicates that larger GSNR during training process leads to better generalization performance. Futher, we show that, different from that of shallow models (e.g. logistic regression, support vector machines), the gradient descent optimization dynamics of DNNs naturally produces large GSNR during training, which is probably the key to DNNs remarkable generalization ability.""","""Quoting a reviewer for a very nice summary: ""In this work, the authors suggest a new point of view on generalization through the lens of the distribution of the per-sample gradients. The authors consider the variance and mean of the per-sample gradients for each parameter of the model and define for each parameter the Gradient Signal to Noise ratio (GSNR). The GSNR of a parameter is the ratio between the mean squared of the gradient per parameter per sample (computed over the samples) and the variance of the gradient per parameter per sample (also computed over the samples). The GSNR is promising as a measure of generalization and the authors provide a nice leading order derivation of the GSNR as a proxy for the measure of the generalization gap in the model."" The majority of the reviewers vote to accept this paper. We can view the 3 as a weak signal as that reviewer stated in his review that he struggled to rate the paper because it contained a lot of math."""
140,"""Implicit Bias of Gradient Descent based Adversarial Training on Separable Data""","['implicit bias', 'adversarial training', 'robustness', 'gradient descent']","""Adversarial training is a principled approach for training robust neural networks. Despite of tremendous successes in practice, its theoretical properties still remain largely unexplored. In this paper, we provide new theoretical insights of gradient descent based adversarial training by studying its computational properties, specifically on its implicit bias. We take the binary classification task on linearly separable data as an illustrative example, where the loss asymptotically attains its infimum as the parameter diverges to infinity along certain directions. Specifically, we show that for any fixed iteration pseudo-formula , when the adversarial perturbation during training has proper bounded L2 norm, the classifier learned by gradient descent based adversarial training converges in direction to the maximum L2 norm margin classifier at the rate of pseudo-formula , significantly faster than the rate T}$ of training with clean data. In addition, when the adversarial perturbation during training has bounded Lq norm, the resulting classifier converges in direction to a maximum mixed-norm margin classifier, which has a natural interpretation of robustness, as being the maximum L2 norm margin classifier under worst-case bounded Lq norm perturbation to the data. Our findings provide theoretical backups for adversarial training that it indeed promotes robustness against adversarial perturbation.""","""This paper provides theoretical guarantees for adversarial training. While the reviews raise a variety of criticisms (e.g., the results are under a variety of assumptions), overall the paper constitutes valuable progress on an emerging problem."""
141,"""Learning robust visual representations using data augmentation invariance""","['deep neural networks', 'visual cortex', 'invariance', 'data augmentation']","""Deep convolutional neural networks trained for image object categorization have shown remarkable similarities with representations found across the primate ventral visual stream. Yet, artificial and biological networks still exhibit important differences. Here we investigate one such property: increasing invariance to identity-preserving image transformations found along the ventral stream. Despite theoretical evidence that invariance should emerge naturally from the optimization process, we present empirical evidence that the activations of convolutional neural networks trained for object categorization are not robust to identity-preserving image transformations commonly used in data augmentation. As a solution, we propose data augmentation invariance, an unsupervised learning objective which improves the robustness of the learned representations by promoting the similarity between the activations of augmented image samples. Our results show that this approach is a simple, yet effective and efficient (10 % increase in training time) way of increasing the invariance of the models while obtaining similar categorization performance.""","""This paper introduces an unsupervised learning objective that attempts to improve the robustness of the learnt representations. This approach is empirically demonstrated on cifar10 and tiny imagenet with different network architectures including all convolutional net, wide residual net and dense net. Two of three reviewers felt that the paper was not suitable for publication at ICLR in its current form. Self supervision based on preserving network outputs despite data transformations is a relatively minor contribution, the framing of the approach as inspired by biological vision notwithstanding. Several references, including at a past ICLR include: pseudo-url and Gidaris, P. Singh, and N. Komodakis. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations (ICLR), 2018."""
142,"""Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation""","['off-policy evaluation', 'infinite horizon', 'doubly robust', 'reinforcement learning']","""Infinite horizon off-policy policy evaluation is a highly challenging task due to the excessively large variance of typical importance sampling (IS) estimators. Recently, Liu et al. (2018) proposed an approach that significantly reduces the variance of infinite-horizon off-policy evaluation by estimating the stationary density ratio, but at the cost of introducing potentially high risks due to the error in density ratio estimation. In this paper, we develop a bias-reduced augmentation of their method, which can take advantage of a learned value function to obtain higher accuracy. Our method is doubly robust in that the bias vanishes when either the density ratio or value function estimation is perfect. In general, when either of them is accurate, the bias can also be reduced. Both theoretical and empirical results show that our method yields significant advantages over previous methods.""","""The paper proposes a doubly robust off-policy evaluation method that uses both stationary density ratio as well as a learned value function in order to reduce bias. The reviewers unanimously recommend acceptance of this paper."""
143,"""Yet another but more efficient black-box adversarial attack: tiling and evolution strategies""","['adversarial examples', 'black-box attacks', 'derivative free optimization', 'deep learning']","""We introduce a new black-box attack achieving state of the art performances. Our approach is based on a new objective function, borrowing ideas from pseudo-formula -white box attacks, and particularly designed to fit derivative-free optimization requirements. It only requires to have access to the logits of the classifier without any other information which is a more realistic scenario. Not only we introduce a new objective function, we extend previous works on black box adversarial attacks to a larger spectrum of evolution strategies and other derivative-free optimization methods. We also highlight a new intriguing property that deep neural networks are not robust to single shot tiled attacks. Our models achieve, with a budget limited to pseudo-formula queries, results up to pseudo-formula of success rate against InceptionV3 classifier with pseudo-formula queries to the network on average in the untargeted attacks setting, which is an improvement by pseudo-formula queries of the current state of the art. In the targeted setting, we are able to reach, with a limited budget of pseudo-formula , pseudo-formula of success rate with a budget of pseudo-formula queries on average, i.e. we need pseudo-formula queries less than the current state of the art.""","""This paper proposes a new black-box adversarial attack based on tiling and evolution strategies. While the experimental results look promising, the main concern of the reviewers is the novelty of the proposed algorithm, and many things need to be improved in terms of clarity and experiments. The paper does not gather sufficient support from the reviewers even after author response. I encourage the authors to improve this paper and resubmit to future conference."""
144,"""Variance Reduced Local SGD with Lower Communication Complexity""","['variance reduction', 'local SGD', 'distributed optimization']","""To accelerate the training of machine learning models, distributed stochastic gradient descent (SGD) and its variants have been widely adopted, which apply multiple workers in parallel to speed up training. Among them, Local SGD has gained much attention due to its lower communication cost. Nevertheless, when the data distribution on workers is non-identical, Local SGD requires N^{\frac{3}{4}}) communications to maintain its \emph{linear iteration speedup} property, where pseudo-formula is the total number of iterations and pseudo-formula is the number of workers. In this paper, we propose Variance Reduced Local SGD (VRL-SGD) to further reduce the communication complexity. Benefiting from eliminating the dependency on the gradient variance among workers, we theoretically prove that VRL-SGD achieves a \emph{linear iteration speedup} with a lower communication complexity N^{\frac{3}{2}})$ even if workers access non-identical datasets. We conduct experiments on three machine learning tasks, and the experimental results demonstrate that VRL-SGD performs impressively better than Local SGD when the data among workers are quite diverse.""","""The paper presents a novel variance reduction algorithm for SGD. The presentation is clear. But the theory is not good enough. The reivewers worry about the converge results and the technical part is not sound."""
145,"""On Federated Learning of Deep Networks from Non-IID Data: Parameter Divergence and the Effects of Hyperparametric Methods""","['Federated learning', 'Iterative parameter averaging', 'Deep networks', 'Decentralized non-IID data', 'Hyperparameter optimization methods']","""Federated learning, where a global model is trained by iterative parameter averaging of locally-computed updates, is a promising approach for distributed training of deep networks; it provides high communication-efficiency and privacy-preservability, which allows to fit well into decentralized data environments, e.g., mobile-cloud ecosystems. However, despite the advantages, the federated learning-based methods still have a challenge in dealing with non-IID training data of local devices (i.e., learners). In this regard, we study the effects of a variety of hyperparametric conditions under the non-IID environments, to answer important concerns in practical implementations: (i) We first investigate parameter divergence of local updates to explain performance degradation from non-IID data. The origin of the parameter divergence is also found both empirically and theoretically. (ii) We then revisit the effects of optimizers, network depth/width, and regularization techniques; our observations show that the well-known advantages of the hyperparameter optimization strategies could rather yield diminishing returns with non-IID data. (iii) We finally provide the reasons of the failure cases in a categorized way, mainly based on metrics of the parameter divergence.""","""This paper studies the problem of federated learning for non-i.i.d. data, and looks at the hyperparameter optimization in this setting. As the reviewers have noted, this is a purely empirical paper. There are certain aspects of the experiments that need further discussion, especially the learning rate selection for different architectures. That said, the submission may not be ready for publication at its current stage."""
146,"""Best feature performance in codeswitched hate speech texts""","['Hate Speech', 'Code-switching', 'feature selection', 'representation learning']","""How well can hate speech concept be abstracted in order to inform automatic classification in codeswitched texts by machine learning classifiers? We explore different representations and empirically evaluate their predictiveness using both conventional and deep learning algorithms in identifying hate speech in a ~48k human-annotated dataset that contain mixed languages, a phenomenon common among multilingual speakers. This paper espouses a novel approach to handle this challenge by introducing a hierarchical approach that employs Latent Dirichlet Allocation to generate topic models that feed into another high-level feature set that we acronym PDC. PDC groups similar meaning words in word families during the preprocessing stage for supervised learning models. The high-level PDC features generated are based on Ombui et al, (2019) hate speech annotation framework that is informed by the triangular theory of hate (Stanberg,2003). Results obtained from frequency-based models using the PDC feature on the annotated dataset of ~48k short messages comprising of tweets generated during the 2012 and 2017 Kenyan presidential elections indicate an improvement on classification accuracy in identifying hate speech as compared to the baseline""","""This paper focuses on hate speech detection and compares several classification methods including Naive Bayes, SVM, KNN, CNN, and many others. The most valuable contribution of this work is a dataset of ~400,000 tweets from 2017 Kenyan general election, although it is unclear whether the authors plan to release the dataset in the future. The paper is difficult to follow, uses an incorrect ICLR format, and is full of typos. All three reviewers agree that while this paper deals with an important topic in social media analysis, it is not ready for publication in its current state. The authors did not provide a rebuttal to reviewers' concerns. I recommend rejecting this paper for ICLR."""
147,"""Unsupervised Temperature Scaling: Robust Post-processing Calibration for Domain Shift""","['calibration', 'domain shift', 'uncertainty prediction', 'deep neural networks', 'temperature scaling']","""The uncertainty estimation is critical in real-world decision making applications, especially when distributional shift between the training and test data are prevalent. Many calibration methods in the literature have been proposed to improve the predictive uncertainty of DNNs which are generally not well-calibrated. However, none of them is specifically designed to work properly under domain shift condition. In this paper, we propose Unsupervised Temperature Scaling (UTS) as a robust calibration method to domain shift. It exploits test samples to adjust the uncertainty prediction of deep models towards the test distribution. UTS utilizes a novel loss function, weighted NLL, that allows unsupervised calibration. We evaluate UTS on a wide range of model-datasets which shows the possibility of calibration without labels and demonstrate the robustness of UTS compared to other methods (e.g., TS, MC-dropout, SVI, ensembles) in shifted domains. ""","""The paper proposes a method called unsupervised temperature scaling (UTS) for improving calibration under domain shift. The reviewers agree that this is an interesting research question, but raised concerns about clarity of the text, depth of the empirical evaluation, and validity of some of the assumptions. While the author rebuttal addressed some of these concerns, the reviewers felt that the current version of the paper is not ready for publication. I encourage the authors to revise and resubmit to a different venue."""
148,"""Probabilistic modeling the hidden layers of deep neural networks""","['Neural Networks', 'Gaussian Process', 'Probabilistic Representation for Deep Learning']","""In this paper, we demonstrate that the parameters of Deep Neural Networks (DNNs) cannot satisfy the i.i.d. prior assumption and activations being i.i.d. is not valid for all the hidden layers of DNNs. Hence, the Gaussian Process cannot correctly explain all the hidden layers of DNNs. Alternatively, we introduce a novel probabilistic representation for the hidden layers of DNNs in two aspects: (i) a hidden layer formulates a Gibbs distribution, in which neurons define the energy function, and (ii) the connection between two adjacent layers can be modeled by a product of experts model. Based on the probabilistic representation, we demonstrate that the entire architecture of DNNs can be explained as a Bayesian hierarchical model. Moreover, the proposed probabilistic representation indicates that DNNs have explicit regularizations defined by the hidden layers serving as prior distributions. Based on the Bayesian explanation for the regularization of DNNs, we propose a novel regularization approach to improve the generalization performance of DNNs. Simulation results validate the proposed theories. ""","""This paper makes a claim that the iid assumption for NN parameters does not hold. The paper then expresses the joint distribution as a Gibbs distribution and PoE. Finally, there are some results on SGD as VI. Reviewers have mixed opinion about the paper and it is clear that the starting point of the paper (regarding iid assumption) is unclear. I myself read through the paper and discussed this with the reviewer, and it is clear that there are many issues with this paper. Here are my concerns: - The parameters of DNN are not iid *after* training. They are not supposed to be. So the empirical results where the correlation matrix is shown does not make the point that the paper is trying to make. - I agree with R2 that the prior is subjective and can be anything, and it is true that the ""trained"" NN may not correspond to a GP. This is actually well known which is why it is difficult to match the performance of a trained GP and trained NN. - The whole contribution about connection to Gibbs distribution and PoE is not insightful. These things are already known, so I don't know why this is a contribution. - Regarding connection between SGD and VI, they do *not* really prove anything. The derivation is *wrong*. In eq 85 in Appendix J2, the VI problem is written as KL(P||Q), but it should be KL(Q||P). Then this is argued to be the same as Eq. 88 obtained with SGD. This is not correct. Given these issues and based on reviewers' reaction to the content, I recommend to reject this paper. """
149,"""When Robustness Doesnt Promote Robustness: Synthetic vs. Natural Distribution Shifts on ImageNet""","['robustness', 'distribution shift', 'image corruptions', 'adversarial robustness', 'reliable machine learning']","""We conduct a large experimental comparison of various robustness metrics for image classification. The main question of our study is to what extent current synthetic robustness interventions (lp-adversarial examples, noise corruptions, etc.) promote robustness under natural distribution shifts occurring in real data. To this end, we evaluate 147 ImageNet models under 199 different evaluation settings. We find that no current robustness intervention improves robustness on natural distribution shifts beyond a baseline given by standard models without a robustness intervention. The only exception is the use of larger training datasets, which provides a small increase in robustness on one natural distribution shift. Our results indicate that robustness improvements on real data may require new methodology and more evaluations on natural distribution shifts.""","""The authors show that models trained to satisfy adversarial robustness properties do not possess robustness to naturally occuring distribution shifts. The majority of the reviewers agree that this is not a surprising result especially for the choice of natural distribution shifts chosen by the authors (for instance it would be better if the authors compare to natural distribution shifts that look similar to the adversarial corruptions). Moreover, this is a survey study and no novel algorithms are presented, so the paper cannot be accepted on that merit either."""
150,"""Antifragile and Robust Heteroscedastic Bayesian Optimisation""","['Bayesian Optimisation', 'Gaussian Processeses', 'Heteroscedasticity']",""" Bayesian Optimisation is an important decision-making tool for high-stakes applications in drug discovery and materials design. An oft-overlooked modelling consideration however is the representation of input-dependent or heteroscedastic aleatoric uncertainty. The cost of misrepresenting this uncertainty as being homoscedastic could be high in drug discovery applications where neglecting heteroscedasticity in high throughput virtual screening could lead to a failed drug discovery program. In this paper, we propose a heteroscedastic Bayesian Optimisation scheme which both represents and optimises aleatoric noise in the suggestions. We consider cases such as drug discovery where we would like to minimise or be robust to aleatoric uncertainty but also applications such as materials discovery where it may be beneficial to maximise or be antifragile to aleatoric uncertainty. Our scheme features a heteroscedastic Gaussian Process (GP) as the surrogate model in conjunction with two acquisition heuristics. First, we extend the augmented expected improvement (AEI) heuristic to the heteroscedastic setting and second, we introduce a new acquisition function, aleatoric-penalised expected improvement (ANPEI) based on a simple scalarisation of the performance and noise objective. Both methods are capable of penalising or promoting aleatoric noise in the suggestions and yield improved performance relative to a naive implementation of homoscedastic Bayesian Optimisation on toy problems as well as a real-world optimisation problem.""","""The reviewers initially gave scores of 1,1,3 citing primarily weak empirical results and a lack of theoretical justification. The experiments are presented on synthetic examples, which is a great start but the reviewers found that this doesn't give strong enough evidence that the methods developed in the paper would work well in practice. The authors did not submit an author response to the reviewers and as such the scores did not change during discussion. This paper would be significantly strengthened with the addition of experiments on actual problems e.g. related to drug discovery which is the motivation in the paper."""
151,"""Targeted sampling of enlarged neighborhood via Monte Carlo tree search for TSP""","['Travelling salesman problem', 'Monte Carlo tree search', 'Reinforcement learning', 'Variable neighborhood search']","""The travelling salesman problem (TSP) is a well-known combinatorial optimization problem with a variety of real-life applications. We tackle TSP by incorporating machine learning methodology and leveraging the variable neighborhood search strategy. More precisely, the search process is considered as a Markov decision process (MDP), where a 2-opt local search is used to search within a small neighborhood, while a Monte Carlo tree search (MCTS) method (which iterates through simulation, selection and back-propagation steps), is used to sample a number of targeted actions within an enlarged neighborhood. This new paradigm clearly distinguishes itself from the existing machine learning (ML) based paradigms for solving the TSP, which either uses an end-to-end ML model, or simply applies traditional techniques after ML for post optimization. Experiments based on two public data sets show that, our approach clearly dominates all the existing learning based TSP algorithms in terms of performance, demonstrating its high potential on the TSP. More importantly, as a general framework without complicated hand-crafted rules, it can be readily extended to many other combinatorial optimization problems.""","""This paper contributes to the recently emerging literature about applying reinforcement learning methods to combinatorial optimization problems. The authors consider TSPs and propose a search method that interleaves greedy local search with Monte Carlo Tree Search (MCTS). This approach does not contain learned function approximation for transferring knowledge across problem instances, which is usually considered the main motivation for applying RL to comb opt problems. The reviewers state that, although the approach is a relatively straight-forward combination of two existing methods, it is in principle somewhat interesting. However, the experiments indicate a large gap to SOTA solvers for TSPs. No rebuttal was submitted. In absence of both SOTA results and methodological novelty, as assessed by the reviewers and my owm reading, I recommend to reject the paper in its current form."""
152,"""AdaScale SGD: A Scale-Invariant Algorithm for Distributed Training""","['Large-batch SGD', 'large-scale learning', 'distributed training']","""When using distributed training to speed up stochastic gradient descent, learning rates must adapt to new scales in order to maintain training effectiveness. Re-tuning these parameters is resource intensive, while fixed scaling rules often degrade model quality. We propose AdaScale SGD, a practical and principled algorithm that is approximately scale invariant. By continually adapting to the gradients variance, AdaScale often trains at a wide range of scales with nearly identical results. We describe this invariance formally through AdaScales convergence bounds. As the batch size increases, the bounds maintain final objective values, while smoothly transitioning away from linear speed-ups. In empirical comparisons, AdaScale trains well beyond the batch size limits of popular linear learning rate scaling rules. This includes large-scale training without model degradation for machine translation, image classification, object detection, and speech recognition tasks. The algorithm introduces negligible computational overhead and no tuning parameters, making AdaScale an attractive choice for large-scale training. ""","""Main summary: Novel rule for scaling learning rate, known as gain ration, for which the effective batch size is increased. Discussion: reviewer 2: main concern is reviewer can't tell if it's better of worse than linear learning rate scaling from their experiment section. reviewer 3: novlty/contribution is a bit too low for ICLR. reviewer 1: algorthmic clarity lacking. Recommendation: all 3 reviewers recommend reject, I agree."""
153,"""Learning Efficient Parameter Server Synchronization Policies for Distributed SGD""","['Distributed SGD', 'Paramter-Server', 'Synchronization Policy', 'Reinforcement Learning']","""We apply a reinforcement learning (RL) based approach to learning optimal synchronization policies used for Parameter Server-based distributed training of machine learning models with Stochastic Gradient Descent (SGD). Utilizing a formal synchronization policy description in the PS-setting, we are able to derive a suitable and compact description of states and actions, allowing us to efficiently use the standard off-the-shelf deep Q-learning algorithm. As a result, we are able to learn synchronization policies which generalize to different cluster environments, different training datasets and small model variations and (most importantly) lead to considerable decreases in training time when compared to standard policies such as bulk synchronous parallel (BSP), asynchronous parallel (ASP), or stale synchronous parallel (SSP). To support our claims we present extensive numerical results obtained from experiments performed in simulated cluster environments. In our experiments training time is reduced by 44 on average and learned policies generalize to multiple unseen circumstances.""","""The authors consider a parameter-server setup where the learner acts a server communicating updated weights to workers and receiving gradient updates from them. A major question then relates in the synchronisation of the gradient updates, for which couple of *fixed* heuristics exists that trade-off accuracy of updates (BSP) for speed (ASP) or even combine the two allowing workers to be at most k steps out-of-sync. Instead, the authors propose to learn a synchronisation policy using RL. The authors perform results on a simulated and real environment. Overall, the RL-based method seems to provide some improvement over the fixed protocols, however the margin between the fixed and the RL get smaller in the real clusters. This is actually the main concern raised by the reviewers as well (especially R2) -- the paper in its initial submission did not include the real cluster results, rather these were added at the rebuttal. I find this to be an interesting real-world application of RL and I think it provides an alternative environment for testing RL algorithms beyond simulated environments. As such, Im recommending acceptance. However, I do ask the authors to be upfront with the real cluster results and move them in the main paper. """
154,"""Learning Reusable Options for Multi-Task Reinforcement Learning""","['Reinforcement Learning', 'Temporal Abstraction', 'Options', 'Multi-Task RL']","""Reinforcement learning (RL) has become an increasingly active area of research in recent years. Although there are many algorithms that allow an agent to solve tasks efficiently, they often ignore the possibility that prior experience related to the task at hand might be available. For many practical applications, it might be unfeasible for an agent to learn how to solve a task from scratch, given that it is generally a computationally expensive process; however, prior experience could be leveraged to make these problems tractable in practice. In this paper, we propose a framework for exploiting existing experience by learning reusable options. We show that after an agent learns policies for solving a small number of problems, we are able to use the trajectories generated from those policies to learn reusable options that allow an agent to quickly learn how to solve novel and related problems.""","""This paper presents a novel option discovery mechanism through incrementally learning reusasble options from a small number of policies that are usable across multiple tasks. The primary concern with this paper was with a number of issues around the experiments. Specifically, the reviewers took issue with the definition of novel tasks in the Atari context. A more robust discussion and analysis around what tasks are considered novel would be useful. Comparisons to other option discovery papers on the Atari domains is also required. Additionally, one reviewer had concerns on the hard limit of option execution length which remain unresolved following the discussion. While this is really promising work, it is not ready to be accepted at this stage."""
155,"""Learning from Partially-Observed Multimodal Data with Variational Autoencoders""","['data imputation', 'variational autoencoders', 'generative models']","""Learning from only partially-observed data for imputation has been an active research area. Despite promising progress on unimodal data imputation (e.g., image in-painting), models designed for multimodal data imputation are far from satisfactory. In this paper, we propose variational selective autoencoders (VSAE) for this task. Different from previous works, our proposed VSAE learns only from partially-observed data. The proposed VSAE is capable of learning the joint distribution of observed and unobserved modalities as well as the imputation mask, resulting in a unified model for various down-stream tasks including data generation and imputation. Evaluation on both synthetic high-dimensional and challenging low-dimensional multi-modality datasets shows significant improvement over the state-of-the-art data imputation models.""","""This submission proposes a VAE-based method for jointly inferring latent variables and data generation. The method learns from partially-observed multimodal data. Strengths: -Learning to generate from partially-observed data is an important and challenging problem. -The proposed idea is novel and promising. Weaknesses: -Some experimental protocols are not fully explained. -The experiments are not sufficiently comprehensive (comparisons to key baselines are missing). -More analysis of some surprising results is needed. -The presentation has much to improve. The method is promising but the mentioned weaknesses were not sufficiently addressed during discussion. AC agrees with the majority recommendation to reject. """
156,"""Revisiting the Generalization of Adaptive Gradient Methods""","['Adaptive Methods', 'AdaGrad', 'Generalization']","""A commonplace belief in the machine learning community is that using adaptive gradient methods hurts generalization. We re-examine this belief both theoretically and experimentally, in light of insights and trends from recent years. We revisit some previous oft-cited experiments and theoretical accounts in more depth, and provide a new set of experiments in larger-scale, state-of-the-art settings. We conclude that with proper tuning, the improved training performance of adaptive optimizers does not in general carry an overfitting penalty, especially in contemporary deep learning. Finally, we synthesize a ``user's guide'' to adaptive optimizers, including some proposed modifications to AdaGrad to mitigate some of its empirical shortcomings.""","""The paper combines several recent optimizer tricks to provide empirical evidence that goes against the common belief that adaptive methods result in larger generalization errors. The contribution of this paper is rather small: no new strategies are introduced and no new theory is presented. The paper makes a good workshop paper, but does not meet the bar for publication at ICLR. """
157,"""Budgeted Training: Rethinking Deep Neural Network Training Under Resource Constraints""","['budgeted training', 'learning rate schedule', 'linear schedule', 'annealing', 'learning rate decay']","""In most practical settings and theoretical analyses, one assumes that a model can be trained until convergence. However, the growing complexity of machine learning datasets and models may violate such assumptions. Indeed, current approaches for hyper-parameter tuning and neural architecture search tend to be limited by practical resource constraints. Therefore, we introduce a formal setting for studying training under the non-asymptotic, resource-constrained regime, i.e., budgeted training. We analyze the following problem: ""given a dataset, algorithm, and fixed resource budget, what is the best achievable performance?"" We focus on the number of optimization iterations as the representative resource. Under such a setting, we show that it is critical to adjust the learning rate schedule according to the given budget. Among budget-aware learning schedules, we find simple linear decay to be both robust and high-performing. We support our claim through extensive experiments with state-of-the-art models on ImageNet (image classification), Kinetics (video classification), MS COCO (object detection and instance segmentation), and Cityscapes (semantic segmentation). We also analyze our results and find that the key to a good schedule is budgeted convergence, a phenomenon whereby the gradient vanishes at the end of each allowed budget. We also revisit existing approaches for fast convergence and show that budget-aware learning schedules readily outperform such approaches under (the practical but under-explored) budgeted training setting.""","""This paper formalizes the problem of training deep networks in the presence of a budget, expressed here as a maximum total number of optimization iterations, and evaluates various budget-aware learning schedules, finding simple linear decay to work well. Post-discussion, the reviewers all felt that this was a good paper. There were some concerns about the lack of theoretical justification for linear decay, but these were overruled by the practical use of these papers to the community. Therefore I am recommending it be accepted."""
158,"""Evolutionary Reinforcement Learning for Sample-Efficient Multiagent Coordination""","['reinforcement learning', 'multiagent', 'neuroevolution']","""Many cooperative multiagent reinforcement learning environments provide agents with a sparse team-based reward as well as a dense agent-specific reward that incentivizes learning basic skills. Training policies solely on the team-based reward is often difficult due to its sparsity. Also, relying solely on the agent-specific reward is sub-optimal because it usually does not capture the team coordination objective. A common approach is to use reward shaping to construct a proxy reward by combining the individual rewards. However, this requires manual tuning for each environment. We introduce Multiagent Evolutionary Reinforcement Learning (MERL), a split-level training platform that handles the two objectives separately through two optimization processes. An evolutionary algorithm maximizes the sparse team-based objective through neuroevolution on a population of teams. Concurrently, a gradient-based optimizer trains policies to only maximize the dense agent-specific rewards. The gradient-based policies are periodically added to the evolutionary population as a way of information transfer between the two optimization processes. This enables the evolutionary algorithm to use skills learned via the agent-specific rewards toward optimizing the global objective. Results demonstrate that MERL significantly outperforms state-of-the-art methods such as MADDPG on a number of difficult coordination benchmarks. ""","""This work has a lot of promise; however, the author response was not sufficient to address the concerns expressed by reviewer 1, leading to an aggregate rating that is just not sufficient to justify an acceptance recommendation. The AC recommends rejection."""
159,"""Moniqua: Modulo Quantized Communication in Decentralized SGD""","['decentralized training', 'quantization', 'communicaiton', 'stochastic gradient descent']","""Decentralized stochastic gradient descent (SGD), where parallel workers are connected to form a graph and communicate adjacently, has shown promising results both theoretically and empirically. In this paper we propose Moniqua, a technique that allows decentralized SGD to use quantized communication. We prove in theory that Moniqua communicates a provably bounded number of bits per iteration, while converging at the same asymptotic rate as the original algorithm does with full-precision communication. Moniqua improves upon prior works in that it (1) requires no additional memory, (2) applies to non-convex objectives, and (3) supports biased/linear quantizers. We demonstrate empirically that Moniqua converges faster with respect to wall clock time than other quantized decentralized algorithms. We also show that Moniqua is robust to very low bit-budgets, allowing less than 4-bits-per-parameter communication without affecting convergence when training VGG16 on CIFAR10.""","""This papers proposed an interesting idea for distributed decentralized training with quantized communication. The method is interesting and elegant. However, it is incremental, does not support arbitrary communication compression, and does not have a convincing explanation why modulo operation makes the algorithm better. The experiments are not convincing. Comparison is shown only for the beginning of the optimization where the algorithm does not achieve state of the art accuracy. Moreover, the modular hyperparameter is not easy to choose and seems cannot help achieve consensus."""
160,"""ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring""",['semi-supervised learning'],"""We improve the recently-proposed ``MixMatch semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation anchoring. - Distribution alignment encourages the marginal distribution of predictions on unlabeled data to be close to the marginal distribution of ground-truth labels. - Augmentation anchoring} feeds multiple strongly augmented versions of an input into the model and encourages each output to be close to the prediction for a weakly-augmented version of the same input. To produce strong augmentations, we propose a variant of AutoAugment which learns the augmentation policy while the model is being trained. Our new algorithm, dubbed ReMixMatch, is significantly more data-efficient than prior work, requiring between 5 times and 16 times less data to reach the same accuracy. For example, on CIFAR-10 with 250 labeled examples we reach 93.73% accuracy (compared to MixMatch's accuracy of 93.58% with 4000 examples) and a median accuracy of 84.92% with just four labels per class. ""","""This works improves the MixMatch semi-supervised algorithm along the two directions of distribution alignment and augmentation anchoring, which together make the approach more data-efficient than prior work. All reviewers agree that the impressive empirical results in the paper are its main strength, but express concern that the method is overly complicated and hacking together many known pieces, as well as doubt as to the extent of the contribution of the augmentation method itself, with requests for better augmentation controls. While some of these concerns have not been addressed by authors in their response, the strength of empirical results seems enough to justify an acceptance recommendation."""
161,"""Real or Not Real, that is the Question""","['GAN', 'generalization', 'realness', 'loss function']","""While generative adversarial networks (GAN) have been widely adopted in various topics, in this paper we generalize the standard GAN to a new perspective by treating realness as a random variable that can be estimated from multiple angles. In this generalized framework, referred to as RealnessGAN, the discriminator outputs a distribution as the measure of realness. While RealnessGAN shares similar theoretical guarantees with the standard GAN, it provides more insights on adversarial learning. More importantly, compared to multiple baselines, RealnessGAN provides stronger guidance for the generator, achieving improvements on both synthetic and real-world datasets. Moreover, it enables the basic DCGAN architecture to generate realistic images at 1024*1024 resolution when trained from scratch.""","""The paper proposes a novel GAN formulation where the discriminator outputs discrete distributions instead of a scalar. The objective uses two ""anchor"" distributions that correspond to real and fake data. There were some concerns about the choice of these distributions but authors have addressed it in their response. The empirical results are impressive and the method will be of interest to the wide generative models community. """
162,"""BANANAS: Bayesian Optimization with Neural Networks for Neural Architecture Search""","['neural architecture search', 'Bayesian optimization']","""Neural Architecture Search (NAS) has seen an explosion of research in the past few years. A variety of methods have been proposed to perform NAS, including reinforcement learning, Bayesian optimization with a Gaussian process model, evolutionary search, and gradient descent. In this work, we design a NAS algorithm that performs Bayesian optimization using a neural network model. We develop a path-based encoding scheme to featurize the neural architectures that are used to train the neural network model. This strategy is particularly effective for encoding architectures in cell-based search spaces. After training on just 200 random neural architectures, we are able to predict the validation accuracy of a new architecture to within one percent of its true accuracy on average. This may be of independent interest beyond Bayesian neural architecture search. We test our algorithm on the NASBench dataset (Ying et al. 2019), and show that our algorithm significantly outperforms other NAS methods including evolutionary search, reinforcement learning, and AlphaX (Wang et al. 2019). Our algorithm is over 100x more efficient than random search, and 3.8x more efficient than the next-best algorithm. We also test our algorithm on the search space used in DARTS (Liu et al. 2018), and show that our algorithm is competitive with state-of-the-art NAS algorithms on this search space.""","""This paper uses Bayesian optimization with neural networks for neural architecture search. One of the contributions is a path-based encoding that enumerates every possible path through a cell search space. This encoding is shown to be surprisingly powerful, but it will not scale to large cell-based search spaces or non-cell-based search spaces. The availability of code, as well as the careful attention to reproducibility is much appreciated and a factor in favor of the paper. In the discussion, it surfaced that a comparison to existing Bayesian optimization approaches using neural networks would have been possible, while the authors initially did not think that this would be the case. The authors promised to include these comparisons in the final version, but, as was also discussed in the private discussion between reviewers and AC, this is problematic since it is not clear what these results will show. Therefore, the one reviewer who was debating about increasing their score did in the end not do so (but would be inclined to accept a future version with a clean and thorough comparison to baselines). All reviewers stuck with their score of ""weak reject"", leaning to borderline. I read the paper myself and concur with this judgement. I recommend rejection of the current version, with an encouragement to submit to another venue after including a comparison to BO methods based on neural networks."""
163,"""Mode Connectivity and Sparse Neural Networks""","['sparsity', 'mode connectivity', 'lottery ticket', 'optimization landscape']","""We uncover a connection between two seemingly unrelated empirical phenomena: mode connectivity and sparsity. On the one hand, there is growing catalog of situations where, across multiple runs, SGD learns weights that fall into minima that are connected (mode connectivity). A striking example is described by Nagarajan & Kolter (2019). They observe that test error on MNIST does not change along the linear path connecting the end points of two independent SGD runs, starting from the same random initialization. On the other hand, there is the lottery ticket hypothesis of Frankle & Carbin (2019), where dense, randomly initialized networks have sparse subnetworks capable of training in isolation to full accuracy. However, neither phenomenon scales beyond small vision networks. We start by proposing a technique to find sparse subnetworks after initialization. We observe that these subnetworks match the accuracy of the full network only when two SGD runs for the same subnetwork are connected by linear paths with the no change in test error. Our findings connect the existence of sparse subnetworks that train to high accuracy with the dynamics of optimization via mode connectivity. In doing so, we identify analogues of the phenomena uncovered by Nagarajan & Kolter and Frankle & Carbin in ImageNet-scale architectures at state-of-the-art sparsity levels.""","""This paper investigates theories related to networks sparsification, related to mode connectivity and the so-called lottery ticket hypothesis. The paper is interesting and has merit, but on balance I find the contributions not sufficiently clear to warrant acceptance. The authors made substantial changes to the paper which are admirable and which bring it to borderline status. """
164,"""Stabilizing Transformers for Reinforcement Learning""","['Deep Reinforcement Learning', 'Transformer', 'Reinforcement Learning', 'Self-Attention', 'Memory', 'Memory for Reinforcement Learning']","""Owing to their ability to both effectively integrate information over long time horizons and scale to massive amounts of data, self-attention architectures have recently shown breakthrough success in natural language processing (NLP), achieving state-of-the-art results in domains such as language modeling and machine translation. Harnessing the transformer's ability to process long time horizons of information could provide a similar performance boost in partially-observable reinforcement learning (RL) domains, but the large-scale transformers used in NLP have yet to be successfully applied to the RL setting. In this work we demonstrate that the standard transformer architecture is difficult to optimize, which was previously observed in the supervised learning setting but becomes especially pronounced with RL objectives. We propose architectural modifications that substantially improve the stability and learning speed of the original Transformer and XL variant. The proposed architecture, the Gated Transformer-XL (GTrXL), surpasses LSTMs on challenging memory environments and achieves state-of-the-art results on the multi-task DMLab-30 benchmark suite, exceeding the performance of an external memory architecture. We show that the GTrXL, trained using the same losses, has stability and performance that consistently matches or exceeds a competitive LSTM baseline, including on more reactive tasks where memory is less critical. GTrXL offers an easy-to-train, simple-to-implement but substantially more expressive architectural alternative to the standard multi-layer LSTM ubiquitously used for RL agents in partially-observable environments. ""","""This paper proposes architectural modifications to transformers, which are promising for sequential tasks requiring memory but can be unstable to optimize, and applies the resulting method to the RL setting, evaluated in the DMLab-30 benchmark. While I thought the approach was interesting and the results promising, the reviewers unanimously felt that the experimental evaluation could be more thorough, and were concerned with the motivation behind of some of the proposed changes. """
165,"""Semi-Supervised Boosting via Self Labelling""","['semi-supervised learning', 'boosting', 'noise-resistant']","""Attention to semi-supervised learning grows in machine learning as the price to expertly label data increases. Like most previous works in the area, we focus on improving an algorithm's ability to discover the inherent property of the entire dataset from a few expertly labelled samples. In this paper we introduce Boosting via Self Labelling (BSL), a solution to semi-supervised boosting when there is only limited access to labelled instances. Our goal is to learn a classifier that is trained on a data set that is generated by combining the generalization of different algorithms which have been trained with a limited amount of supervised training samples. Our method builds upon a combination of several different components. First, an inference aided ensemble algorithm developed on a set of weak classifiers will offer the initial noisy labels. Second, an agreement based estimation approach will return the average error rates of the noisy labels. Third and finally, a noise-resistant boosting algorithm will train over the noisy labels and their error rates to describe the underlying structure as closely as possible. We provide both analytical justifications and experimental results to back the performance of our model. Based on several benchmark datasets, our results demonstrate that BSL is able to outperform state-of-the-art semi-supervised methods consistently, achieving over 90% test accuracy with only 10% of the data being labelled.""","""The paper presents a new semi-supervised boosting approach. As reviewers pointed out and AC acknowledge, the paper is not ready to publish in various aspects: (a) limited novelty/contribution, (b) reproducibility issue and (c) arguable assumptions. Hence, I recommend rejection."""
166,"""Semi-supervised Semantic Segmentation using Auxiliary Network""","['deep learning', 'semi-supervised segmentation', 'semantic segmentation', 'CNN']","""Recently, the convolutional neural networks (CNNs) have shown great success on semantic segmentation task. However, for practical applications such as autonomous driving, the popular supervised learning method faces two challenges: the demand of low computational complexity and the need of huge training dataset accompanied by ground truth. Our focus in this paper is semi-supervised learning. We wish to use both labeled and unlabeled data in the training process. A highly efficient semantic segmentation network is our platform, which achieves high segmentation accuracy at low model size and high inference speed. We propose a semi-supervised learning approach to improve segmentation accuracy by including extra images without labels. While most existing semi-supervised learning methods are designed based on the adversarial learning techniques, we present a new and different approach, which trains an auxiliary CNN network that validates labels (ground-truth) on the unlabeled images. Therefore, in the supervised training phase, both the segmentation network and the auxiliary network are trained using labeled images. Then, in the unsupervised training phase, the unlabeled images are segmented and a subset of image pixels are picked up by the auxiliary network; and then they are used as ground truth to train the segmentation network. Thus, at the end, all dataset images can be used for retraining the segmentation network to improve the segmentation results. We use Cityscapes and CamVid datasets to verify the effectiveness of our semi-supervised scheme, and our experimental results show that it can improve the mean IoU for about 1.2% to 2.9% on the challenging Cityscapes dataset.""","""The paper presents a semi-supervised learning approach to handle semantic classification (pixel-level classification). The approach extends Hung et al. 18, using a confidence map generated by an auxiliary network, aimed to improve the identification of small objects. The reviews state that the paper novelty is limited compared to the state of the art; the reviewers made several suggestions to improve the processing pipeline (including all images, including the confidence weights). The reviews also state that the paper needs be carefully polished. The area chair hopes that the suggestions about the contents and writing of the paper will help to prepare an improved version of the paper. """
167,"""Universal Approximation with Deep Narrow Networks""","['deep learning', 'universal approximation', 'deep narrow networks']","""The classical Universal Approximation Theorem certifies that the universal approximation property holds for the class of neural networks of arbitrary width. Here we consider the natural `dual' theorem for width-bounded networks of arbitrary depth. Precisely, let pseudo-formula be the number of inputs neurons, pseudo-formula be the number of output neurons, and let pseudo-formula be any nonaffine continuous function, with a continuous nonzero derivative at some point. Then we show that the class of neural networks of arbitrary depth, width + m + 2 and activation function pseudo-formula , exhibits the universal approximation property with respect to the uniform norm on compact subsets of pseudo-formula . This covers every activation function possible to use in practice; in particular this includes polynomial activation functions, making this genuinely different to the classical case. We go on to consider extensions of this result. First we show an analogous result for a certain class of nowhere differentiable activation functions. Second we establish an analogous result for noncompact domains, by showing that deep narrow networks with the ReLU activation function exhibit the universal approximation property with respect to the pseudo-formula -norm on pseudo-formula . Finally we show that width of only + m + 1$ suffices for `most' activation functions.""","""This article studies universal approximation with deep narrow networks, targeting the minimum width. The central contribution is described as providing results for general activation functions. The technique is described as straightforward, but robust enough to handle a variety of activation functions. The reviewers found the method elegant. The most positive position was that the article develops non trivial techniques that extend existing universal approximation results for deep narrow networks to essentially all activation functions. However, the reviewers also expressed reservations mentioning that the results could be on the incremental side, with derivations similar to previous works, and possibly of limited interest. In all, the article makes a reasonable theoretical contribution to the analysis of deep narrow neural networks. Although this is a reasonably good article, it is not good enough, given the very high acceptance bar for this year's ICLR. """
168,"""Learning scalable and transferable multi-robot/machine sequential assignment planning via graph embedding""","['reinforcement learning', 'multi-robot/machine', 'scheduling', 'planning', 'scalability', 'transferability', 'mean-field inference', 'graph embedding']","""Can the success of reinforcement learning methods for simple combinatorial optimization problems be extended to multi-robot sequential assignment planning? In addition to the challenge of achieving near-optimal performance in large problems, transferability to an unseen number of robots and tasks is another key challenge for real-world applications. In this paper, we suggest a method that achieves the first success in both challenges for robot/machine scheduling problems. Our method comprises of three components. First, we show any robot scheduling problem can be expressed as a random probabilistic graphical model (PGM). We develop a mean-field inference method for random PGM and use it for Q-function inference. Second, we show that transferability can be achieved by carefully designing two-step sequential encoding of problem state. Third, we resolve the computational scalability issue of fitted Q-iteration by suggesting a heuristic auction-based Q-iteration fitting method enabled by transferability we achieved. We apply our method to discrete-time, discrete space problems (Multi-Robot Reward Collection (MRRC)) and scalably achieve 97% optimality with transferability. This optimality is maintained under stochastic contexts. By extending our method to continuous time, continuous space formulation, we claim to be the first learning-based method with scalable performance in any type of multi-machine scheduling problems; our method scalability achieves comparable performance to popular metaheuristics in Identical parallel machine scheduling (IPMS) problems.""","""Unfortunately, the reviewers of the paper are all not certain about their review, none of them being RL experts. Assessing the paper myselfnot being an RL expert but having experiencethe authors have addressed all points of the reviewers thoroughly. """
169,"""Evaluating Lossy Compression Rates of Deep Generative Models""","['Deep Learning', 'Generative Models', 'Information Theory', 'Rate Distortion Theory']","""Deep generative models have achieved remarkable progress in recent years. Despite this progress, quantitative evaluation and comparison of generative models remains as one of the important challenges. One of the most popular metrics for evaluating generative models is the log-likelihood. While the direct computation of log-likelihood can be intractable, it has been recently shown that the log-likelihood of some of the most interesting generative models such as variational autoencoders (VAE) or generative adversarial networks (GAN) can be efficiently estimated using annealed importance sampling (AIS). In this work, we argue that the log-likelihood metric by itself cannot represent all the different performance characteristics of generative models, and propose to use rate distortion curves to evaluate and compare deep generative models. We show that we can approximate the entire rate distortion curve using one single run of AIS for roughly the same computational cost as a single log-likelihood estimate. We evaluate lossy compression rates of different deep generative models such as VAEs, GANs (and its variants) and adversarial autoencoders (AAE) on MNIST and CIFAR10, and arrive at a number of insights not obtainable from log-likelihoods alone.""","""The paper proposed a method to evaluate latent variable based generative models by estimating the compression in the latents (rate) and the distortion in the resulting reconstructions. While reviewers have clearly appreciated the theoretical novelty in using AIS to get an upper bound on the rate, there are concerns on missing empirical comparison with other related metrics (precision-recall) and limited practical applicability of the method due to large computational cost. Authors should consider comparing with PR metric and discuss some directions that can make the method practically as relevant as other related metrics. """
170,"""Neural Network Branching for Neural Network Verification ""","['Neural Network Verification', 'Branch and Bound', 'Graph Neural Network', 'Learning to branch']","""Formal verification of neural networks is essential for their deployment in safety-critical areas. Many available formal verification methods have been shown to be instances of a unified Branch and Bound (BaB) formulation. We propose a novel framework for designing an effective branching strategy for BaB. Specifically, we learn a graph neural network (GNN) to imitate the strong branching heuristic behaviour. Our framework differs from previous methods for learning to branch in two main aspects. Firstly, our framework directly treats the neural network we want to verify as a graph input for the GNN. Secondly, we develop an intuitive forward and backward embedding update schedule. Empirically, our framework achieves roughly pseudo-formula reduction in both the number of branches and the time required for verification on various convolutional networks when compared to the best available hand-designed branching strategy. In addition, we show that our GNN model enjoys both horizontal and vertical transferability. Horizontally, the model trained on easy properties performs well on properties of increased difficulty levels. Vertically, the model trained on small neural networks achieves similar performance on large neural networks.""","""The authors develop a strategy to learn branching strategies for branch-and-bound based neural network verification algorithms, based on GNNs that imitate strong branching. This allows the authors to obtain significant speedups in branch and bound based neural network verification algorithms relative to strong baselines considered in prior work. The reviewers were in consensus and the quality of the paper and minor concerns raised in the initial reviews were adequately addressed in the rebuttal phase. Therefore, I strongly recommend acceptance."""
171,"""X-Forest: Approximate Random Projection Trees for Similarity Measurement""",[],"""Similarity measurement plays a central role in various data mining and machine learning tasks. Generally, a similarity measurement solution should, in an ideal state, possess the following three properties: accuracy, efficiency and independence from prior knowledge. Yet unfortunately, vital as similarity measurements are, no previous works have addressed all of them. In this paper, we propose X-Forest, consisting of a group of approximate Random Projection Trees, such that all three targets mentioned above are tackled simultaneously. Our key techniques are as follows. First, we introduced RP Trees into the tasks of similarity measurement such that accuracy is improved. In addition, we enforce certain layers in each tree to share identical projection vectors, such that exalted efficiency is achieved. Last but not least, we introduce randomness into partition to eliminate its reliance on prior knowledge. We conduct experiments on three real-world datasets, whose results demonstrate that our model, X-Forest, reaches an efficiency of up to 3.5 times higher than RP Trees with negligible compromising on its accuracy, while also being able to outperform traditional Euclidean distance-based similarity metrics by as much as 20% with respect to clustering tasks. We have released codes in github anonymously so as to meet the demand of reproducibility.""","""This paper proposes a new method for measuring pairwise similarity between data points. The method is based on the idea that similarity between two data points to be the probability (over the randomness in constructing the trees) that they are close in a Random Projection tree. Reviewers found important limitations in this work, pertaining to clarity of mathematical statements and novelty. Unfortunately, the authors did not provide a rebuttal, so these concerns remain. Moreover, the program committee was made aware of the striking similarities between this submission and the preprint pseudo-url from Yan et al., which by itself would be grounds for rejection due to concerns of potential plagiarism. As a result, the AC recommends rejection at this time. """
172,"""Modeling Fake News in Social Networks with Deep Multi-Agent Reinforcement Learning""","['deep multi-agent reinforcement learning', 'fake news', 'social networks', 'information aggregation']","""We develop a practical and flexible computational model of fake news on social networks in which agents act according to learned best response functions. We achieve this by extending an information aggregation game to allow for fake news and by representing agents as recurrent deep Q-networks (DQN) trained by independent Q-learning. In the game, agents repeatedly guess whether a claim is true or false taking into account an informative private signal and observations of actions of their neighbors on the social network in the previous period. We incorporate fake news into the model by adding an adversarial agent, the attacker, that either provides biased private signals to or takes over a subset of agents. The attacker can follow either a hand-tuned or trained policy. Our model allows us to tackle questions that are analytically intractable in fully rational models, while ensuring that agents follow reasonable best response functions. Our results highlight the importance of awareness, privacy and social connectivity in curbing the adverse effects of fake news. ""","""The paper aims to model fake news by drawing tools from multi-agent reinforcement learning. After the discussion period, there is a consensus among the reviewers that the paper lacks novel technical contributions. The reviewers also acknowledge that paper also doesn't quite deliver a practical solution as claimed by the authors."""
173,"""Advantage Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning""","['reinforcement learning', 'policy search', 'control']","""In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines. Our goal is an algorithm that utilizes only simple and convergent maximum likelihood loss functions, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms. AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters.""","""This paper caused a lot of discussions before and after the rebuttal. The concerns are related to the novelty of this paper, which seems to be relatively limited. Since we do not have a champion among positive reviewers, and the overall score is not high enough, I cannot recommend its acceptance at this stage. """
174,"""Emergence of Collective Policies Inside Simulations with Biased Representations""","['collective policy', 'biased representation', 'model-based RL', 'simulation', 'imagination', 'virtual environment']","""We consider a setting where biases are involved when agents internalise an environment. Agents have different biases, all of which resulting in imperfect evidence collected for taking optimal actions. Throughout the interactions, each agent asynchronously internalises their own predictive model of the environment and forms a virtual simulation within which the agent plays trials of the episodes in entirety. In this research, we focus on developing a collective policy trained solely inside agents' simulations, which can then be transferred to the real-world environment. The key idea is to let agents imagine together; make them take turns to host virtual episodes within which all agents participate and interact with their own biased representations. Since agents' biases vary, the collective policy developed while sequentially visiting the internal simulations complement one another's shortcomings. In our experiment, the collective policies consistently achieve significantly higher returns than the best individually trained policies.""","""This paper presents an ensemble method for reinforcement learning. The method trains an ensemble of transition and reward models. Each element of this ensemble has a different view of the data (for example, ablated observation pixels) and a different latent space for its models. A single (collective) policy is then trained, by learning from trajectories generated from each of the models in the ensemble. The collective policy makes direct use of the latent spaces and models in the ensemble by means of a translator that maps one latent space into all the other latent spaces, and an aggregator that combines all the model outputs. The method is evaluated on the CarRacing and VizDoom environments. The reviewers raised several concerns about the paper. The evaluations were not convincing with artificially weak baselines and only worked well in one of the two tested environments (reviewer 2). The paper does not adequately connect to related work on model-based RL (reviewer 1 and 2). The paper does not motivate its artificial setting (reviewer 2 and 1). The paper's presentation lacks clarity from using non-standard terminology and notation without adequate explanation (reviewer 1 and 3). Technical aspects of the translator component were also unclear to multiple reviewers (reviewers 1, 2 and 3). The authors found the review comments to be helpful for future work, but provided no additional clarifications. The paper is not ready for publication."""
175,"""Deformable Kernels: Adapting Effective Receptive Fields for Object Deformation""","['Effective Receptive Fields', 'Deformation Modeling', 'Dynamic Inference']","""Convolutional networks are not aware of an object's geometric variations, which leads to inefficient utilization of model and data capacity. To overcome this issue, recent works on deformation modeling seek to spatially reconfigure the data towards a common arrangement such that semantic recognition suffers less from deformation. This is typically done by augmenting static operators with learned free-form sampling grids in the image space, dynamically tuned to the data and task for adapting the receptive field. Yet adapting the receptive field does not quite reach the actual goal -- what really matters to the network is the *effective* receptive field (ERF), which reflects how much each pixel contributes. It is thus natural to design other approaches to adapt the ERF directly during runtime. In this work, we instantiate one possible solution as Deformable Kernels (DKs), a family of novel and generic convolutional operators for handling object deformations by directly adapting the ERF while leaving the receptive field untouched. At the heart of our method is the ability to resample the original kernel space towards recovering the deformation of objects. This approach is justified with theoretical insights that the ERF is strictly determined by data sampling locations and kernel values. We implement DKs as generic drop-in replacements of rigid kernels and conduct a series of empirical studies whose results conform with our theories. Over several tasks and standard base models, our approach compares favorably against prior works that adapt during runtime. In addition, further experiments suggest a working mechanism orthogonal and complementary to previous works.""","""In my opinion, this paper is borderline (but my expertise is not in this area) and the reviewers are too uncertain to be of help in making an informed decision."""
176,"""Three-Head Neural Network Architecture for AlphaZero Learning""","['alphazero', 'reinforcement learning', 'two-player games', 'heuristic search', 'deep neural networks']","""The search-based reinforcement learning algorithm AlphaZero has been used as a general method for mastering two-player games Go, chess and Shogi. One crucial ingredient in AlphaZero (and its predecessor AlphaGo Zero) is the two-head network architecture that outputs two estimates --- policy and value --- for one input game state. The merit of such an architecture is that letting policy and value learning share the same representation substantially improved generalization of the neural net. A three-head network architecture has been recently proposed that can learn a third action-value head on a fixed dataset the same as for two-head net. Also, using the action-value head in Monte Carlo tree search (MCTS) improved the search efficiency. However, effectiveness of the three-head network has not been investigated in an AlphaZero style learning paradigm. In this paper, using the game of Hex as a test domain, we conduct an empirical study of the three-head network architecture in AlpahZero learning. We show that the architecture is also advantageous at the zero-style iterative learning. Specifically, we find that three-head network can induce the following benefits: (1) learning can become faster as search takes advantage of the additional action-value head; (2) better prediction results than two-head architecture can be achieved when using additional action-value learning as an auxiliary task.""","""The authors provide an empirical study of the recent 3-head architecture applied to AlphaZero style learning. They thoroughly evaluate this approach using the game Hex as a test domain. Initially, reviewers were concerned about how well the hyper parameters for tuned for different methods. The authors did a commendable job addressing the reviewers concerns in their revision. However, the reviewers agreed that with the additional results showing the gap between the 2 headed architecture and the three-headed architecture narrowed, the focus of the paper has changed substantially from the initial version. They suggest that a substantial rewrite of the paper would make the most sense before publication. As a result, at this time, I'm going to recommend rejection, but I encourage the authors to incorporate the reviewers feedback. I believe this paper has the potential to be a strong submission in the future. """
177,"""Efficient Probabilistic Logic Reasoning with Graph Neural Networks""","['probabilistic logic reasoning', 'Markov Logic Networks', 'graph neural networks']","""Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLN. We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning.""","""This paper is far more borderline than the review scores indicate. The authors certainly did themselves no favours by posting a response so close to the end of the discussion period, but there was sufficient time to consider the responses after this, and it is somewhat disappointing that the reviewers did not engage. Reviewer 2 states that their only reason for not recommending acceptance is the lack of experiments on more than one KG. The authors point out they have experiments on more than one KG in the paper. From my reading, this is the case. I will consider R2 in favour of the paper in the absence of a response. Reviewer 3 gives a fairly clear initial review which states the main reasons they do not recommend acceptance. While not an expert on the topic of GNNs, I have enough of a technical understanding to deem that the detailed response from the authors to each of the points does address these concerns. In the absence of a response from the reviewer, it is difficult to ascertain whether they would agree, but I will lean towards assuming they are satisfied. Reviewer 1 gives a positive sounding review, with as main criticism ""Overall, the work of this paper seems technically sound but I dont find the contributions particularly surprising or novel. Along with plogicnet, there have been many extensions and applications of Gnns, and I didnt find that the paper expands this perspective in any surprising way."" This statement is simply re-asserted after the author response. I find this style of review entirely inappropriate and unfair: it is not a the role of a good scientific publication to ""surprise"". If it is technically sound, and in an area that the reviewer admits generates interest from reviewers, vague weasel words do not a reason for rejection make. I recommend acceptance."""
178,"""MixUp as Directional Adversarial Training""","['MixUp', 'Adversarial Training', 'Untied MixUp']","""MixUp is a data augmentation scheme in which pairs of training samples and their corresponding labels are mixed using linear coefficients. Without label mixing, MixUp becomes a more conventional scheme: input samples are moved but their original labels are retained. Because samples are preferentially moved in the direction of other classes \iffalse -- which are typically clustered in input space -- \fi we refer to this method as directional adversarial training, or DAT. We show that under two mild conditions, MixUp asymptotically convergences to a subset of DAT. We define untied MixUp (UMixUp), a superset of MixUp wherein training labels are mixed with different linear coefficients to those of their corresponding samples. We show that under the same mild conditions, untied MixUp converges to the entire class of DAT schemes. Motivated by the understanding that UMixUp is both a generalization of MixUp and a form of adversarial training, we experiment with different datasets and loss functions to show that UMixUp provides improved performance over MixUp. In short, we present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp.""","""This paper builds a connection between MixUp and adversarial training. It introduces untied MixUp (UMixUp), which generalizes the methods of MixUp. Then, it also shows that DAT and UMixUp use the same method of MixUp for generating samples but use different label mixing ratios. Though it has some valuable theoretical contributions, I agree with the reviewers that its important to include results on adversarial robustness, where both adversarial training and MixUp are playing an important role."""
179,"""Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness""","['Invariance', 'Robustness', 'Adversarial Examples']","""Adversarial examples are malicious inputs crafted to cause a model to misclassify them. In their most common instantiation, ""perturbation-based"" adversarial examples introduce changes to the input that leave its true label unchanged, yet result in a different model prediction. Conversely, ""invariance-based"" adversarial examples insert changes to the input that leave the model's prediction unaffected despite the underlying input's label having changed. So far, the relationship between these two notions of adversarial examples has not been studied, we close this gap. We demonstrate that solely achieving perturbation-based robustness is insufficient for complete adversarial robustness. Worse, we find that classifiers trained to be Lp-norm robust are more vulnerable to invariance-based adversarial examples than their undefended counterparts. We construct theoretical arguments and analytical examples to justify why this is the case. We then illustrate empirically that the consequences of excessive perturbation-robustness can be exploited to craft new attacks. Finally, we show how to attack a provably robust defense --- certified on the MNIST test set to have at least 87% accuracy (with respect to the original test labels) under perturbations of Linfinity-norm below epsilon=0.4 --- and reduce its accuracy (under this threat model with respect to an ensemble of human labelers) to 60% with an automated attack, or just 12% with human-crafted adversarial examples.""","""The paper considers the relationship betwee: - perturbations to an input x which change predictions of a model but not the ground truth label - perturbations to an input x which do not change a model's prediction but do chance the ground truth label. The authors show that achieving robustness to the former need not guarantee robustness to the latter. While these ideas are interesting, the reviewers would like to see a tighter connection between the two forms of robustness developed. """
180,"""Discriminability Distillation in Group Representation Learning""",[],"""Learning group representation is a commonly concerned issue in tasks where the basic unit is a group, set or sequence. The computer vision community tries to tackle it by aggregating the elements in a group based on an indicator either defined by human such as the quality or saliency of an element, or generated by a black box such as the attention score or output of a RNN. This article provides a more essential and explicable view. We claim the most significant indicator to show whether the group representation can be benefited from an element is not the quality, or an inexplicable score, but the \textit{discrimiability}. Our key insight is to explicitly design the \textit{discrimiability} using embedded class centroids on a proxy set, and show the discrimiability distribution \textit{w.r.t.} the element space can be distilled by a light-weight auxiliary distillation network. This processing is called \textit{discriminability distillation learning} (DDL). We show the proposed DDL can be flexibly plugged into many group based recognition tasks without influencing the training procedure of the original tasks. Comprehensive experiments on set-to-set face recognition and action recognition valid the advantage of DDL on both accuracy and efficiency, and it pushes forward the state-of-the-art results on these tasks by an impressive margin.""","""This paper proposes discriminability distillation learning (DDL) for learning group representations. The core idea is to learn a discriminability weight for each instance which are a member of a group, set or sequence. The discriminability score is learned by first training a standard supervised base model and using the features from this model, computing class-centroids on a proxy set, and computing the iter and intra-class distances. A function of these distance computations are then used as supervision for a distillation style small network (DDNet) which may predict the discriminability score (DDR score). A group representation is then created through a combination of known instances, weighted using their DDR score. The method is validated on face recognition and action recognition. This work initially received mixed scores, with two reviewers recommending acceptance and two recommending rejection. After reading all the reviews, rebuttals, and discussions, it seems that a key point of concern is low clarity of presentation. During the rebuttal period, the authors have revised their manuscript and interacted with reviewers. One reviewer has chosen to update their recommendation to weak acceptance in response. The main unresolved issues are related to novelty and experimental evaluation. Namely, for novelty comparison and discussion against attention based approaches and other metric learning based approaches would benefit the work, though the proposed solution does present some novelty. For the experiments there was a suggestion to evaluate the model on more complex datasets where performance is not already maxed out. The authors have provided such experiments during the rebuttal period. Despite the slight positive leanings post rebuttal, the ACs have discussed this case and determine the paper is not ready for publication."""
181,"""Distillation pseudo-formula Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized NN""","['Distillation', 'Learning Thoery', 'Corrupted Label']","""Distillation is a method to transfer knowledge from one model to another and often achieves higher accuracy with the same capacity. In this paper, we aim to provide a theoretical understanding on what mainly helps with the distillation. Our answer is ""early stopping"". Assuming that the teacher network is overparameterized, we argue that the teacher network is essentially harvesting dark knowledge from the data via early stopping. This can be justified by a new concept, Anisotropic In- formation Retrieval (AIR), which means that the neural network tends to fit the informative information first and the non-informative information (including noise) later. Motivated by the recent development on theoretically analyzing overparame- terized neural networks, we can characterize AIR by the eigenspace of the Neural Tangent Kernel(NTK). AIR facilities a new understanding of distillation. With that, we further utilize distillation to refine noisy labels. We propose a self-distillation al- gorithm to sequentially distill knowledge from the network in the previous training epoch to avoid memorizing the wrong labels. We also demonstrate, both theoret- ically and empirically, that self-distillation can benefit from more than just early stopping. Theoretically, we prove convergence of the proposed algorithm to the ground truth labels for randomly initialized overparameterized neural networks in terms of l2 distance, while the previous result was on convergence in 0-1 loss. The theoretical result ensures the learned neural network enjoy a margin on the training data which leads to better generalization. Empirically, we achieve better testing accuracy and entirely avoid early stopping which makes the algorithm more user-friendly. ""","""This paper tries to bridge early stopping and distillation. 1) In Section 2, the authors empirically show more distillation effect when early stopping. 2) In Section 3, the authors propose a new provable algorithm for training noisy labels. In the discussion phase, all reviewers discussed a lot. In particular, a reviewer highlights the importance of Section 3. On the other hand, other reviewers pointed out ""what is the role of Section 2"", as the abstract/intro tends to emphasize the content of Section 2. I mostly agree all pros and cons pointed out by reviewers. I agree that the paper proposed an interesting idea for refining noisy labels with theoretical guarantees. However, the major reason for my reject decision is that the current write-up is a bit below the borderline to be accepted considering the high standard of ICLR, e.g., many typos (what is the172norm in page 4?) and misleading intro/abstract/organization. In overall, it was also hard for me to read the paper. I do believe that the paper should be much improved if the authors make more significant editorial efforts considering a more broad range of readers. I have additional suggestions for improving the paper, which I hope are useful. * Put Section 3 earlier (i.e., put Section 2 later) and revise intro/abstract so that the reader can clearly understand what is the main contribution. * Section 2.1 is weak to claim more distillation effect when early stopping. 
More experimental or theoretical study are necessary, e.g., you can control temperature parameter T of knowledge distillation to provide the ""early stopping"" effect without actual ""early stopping"" (the choice of T is not mentioned in the draft as it is the important hyper-parameter). * More experimental supports for your algorithm should be desirable, e.g., consider more datasets, state-of-the-art baselines, noisy types, and neural architectures (e.g., NLP models). * Softening some sentences for avoiding some potential over-claims to some readers. """
182,"""Improving Generalization in Meta Reinforcement Learning using Learned Objectives""","['meta reinforcement learning', 'meta learning', 'reinforcement learning']","""Biological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Our novel meta reinforcement learning algorithm MetaGenRL is inspired by this process. MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function that decides how future individuals will learn. Unlike recent meta-RL algorithms, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training. In some cases, it even outperforms human-engineered RL algorithms. MetaGenRL uses off-policy second-order gradients during meta-training that greatly increase its sample efficiency.""","""This paper proposes a meta-RL algorithm that learns an objective function whose gradients can be used to efficiently train a learner on entirely new tasks from those seen during meta-training. Building off-policy gradient-based meta-RL methods is challenging, and had not been previously demonstrated. Further, the demonstrated generalization capabilities are a substantial improvement in capabilities over prior meta-learning methods. There are a couple related works that are quite relevant (and somewhat similar in methodology) and overlooked -- see [1,2]. Further, we strongly encourage the authors to run the method on multiple meta-training environments and to report results with more seeds, as promised. The contributions are significant and should be seen by the ICLR community. Hence, I recommend an oral presentation. [1] Yu et al. One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning [2] Sung et al. Meta-critic networks"""
183,"""Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation""","['Reinforcement Learning', 'Learning to Optimize', 'Combinatorial Optimization', 'Compilers', 'Code Optimization', 'Neural Networks', 'ML for Systems', 'Learning for Systems']","""Achieving faster execution with shorter compilation time can foster further diversity and innovation in neural networks. However, the current paradigm of executing neural networks either relies on hand-optimized libraries, traditional compilation heuristics, or very recently genetic algorithms and other stochastic methods. These methods suffer from frequent costly hardware measurements rendering them not only too time consuming but also suboptimal. As such, we devise a solution that can learn to quickly adapt to a previously unseen design space for code optimization, both accelerating the search and improving the output performance. This solution dubbed Chameleon leverages reinforcement learning whose solution takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses on the costly samples (real hardware measurements) on representative points but also uses a domain-knowledge inspired logic to improve the samples itself. Experimentation with real hardware shows that Chameleon provides 4.45x speed up in optimization time over AutoTVM, while also improving inference time of the modern deep networks by 5.6%.""","""This paper proposes to optimize the code optimal code in DNN compilers using adaptive sampling and reinforcement learning. This method achieves significant speedup in compilation time and execution time. The authors made strong efforts in addressing the problems raised by the reviewers, and promised to make the code publicly available, which is of particular importance for works of this nature. """
184,"""Incorporating Horizontal Connections in Convolution by Spatial Shuffling""","['shuffle', 'convolution', 'receptive field', 'classification', 'horizontal connections']","""Convolutional Neural Networks (CNNs) are composed of multiple convolution layers and show elegant performance in vision tasks. The design of the regular convolution is based on the Receptive Field (RF) where the information within a specific region is processed. In the view of the regular convolution's RF, the outputs of neurons in lower layers with smaller RF are bundled to create neurons in higher layers with larger RF. As a result, the neurons in high layers are able to capture the global context even though the neurons in low layers only see the local information. However, in lower layers of the biological brain, the information outside of the RF changes the properties of neurons. In this work, we extend the regular convolution and propose spatially shuffled convolution (ss convolution). In ss convolution, the regular convolution is able to use the information outside of its RF by spatial shuffling which is a simple and lightweight operation. We perform experiments on CIFAR-10 and ImageNet-1k dataset, and show that ss convolution improves the classification performance across various CNNs.""","""The paper is well-motivated by neuroscience that our brains use information from outside the receptive field of convolutive processes through top-down mechanisms. However, reviewers feel that the results are not near the state of the art and the paper needs further experiments and need to scale to larger datasets. """
185,"""Superseding Model Scaling by Penalizing Dead Units and Points with Separation Constraints""","['Dead Point', 'Dead Unit', 'Model Scaling', 'Separation Constraints', 'Dying ReLU', 'Constant Width', 'Deep Neural Networks', 'Backpropagation']","""In this article, we study a proposal that enables to train extremely thin (4 or 8 neurons per layer) and relatively deep (more than 100 layers) feedforward networks without resorting to any architectural modification such as Residual or Dense connections, data normalization or model scaling. We accomplish that by alleviating two problems. One of them are neurons whose output is zero for all the dataset, which renders them useless. This problem is known to the academic community as \emph{dead neurons}. The other is a less studied problem, dead points. Dead points refers to data points that are mapped to zero during the forward pass of the network. As such, the gradient generated by those points is not propagated back past the layer where they die, thus having no effect in the training process. In this work, we characterize both problems and propose a constraint formulation that added to the standard loss function solves them both. As an additional benefit, the proposed method allows to initialize the network weights with constant or even zero values and still allowing the network to converge to reasonable results. We show very promising results on a toy, MNIST, and CIFAR-10 datasets.""","""This paper proposes constraints to tackle the problems of dead neurons and dead points. The reviewers point out that the experiments are only done on small datasets and it is not clear if the experiments will scale further. I encourage the authors to carry out further experiments and submit to another venue."""
186,"""Neural Architecture Search by Learning Action Space for Monte Carlo Tree Search""","['MCTS', 'Neural Architecture Search', 'Search']","""Neural Architecture Search (NAS) has emerged as a promising technique for automatic neural network design. However, existing NAS approaches often utilize manually designed action space, which is not directly related to the performance metric to be optimized (e.g., accuracy). As a result, using manually designed action space to perform NAS often leads to sample-inefficient explorations of architectures and thus can be sub-optimal. In order to improve sample efficiency, this paper proposes Latent Action Neural Architecture Search (LaNAS) that learns actions to recursively partition the search space into good or bad regions that contain networks with concentrated performance metrics, i.e., low variance. During the search phase, as different architecture search action sequences lead to regions of different performance, the search efficiency can be significantly improved by biasing towards the good regions. On the largest NAS dataset NASBench-101, our experimental results demonstrated that LaNAS is 22x, 14.6x, 12.4x, 6.8x, 16.5x more sample-efficient than Random Search, Regularized Evolution, Monte Carlo Tree Search, Neural Architecture Optimization, and Bayesian Optimization, respectively. When applied to the open domain, LaNAS achieves 98.0% accuracy on CIFAR-10 and 75.0% top1 accuracy on ImageNet in only 803 samples, outperforming SOTA AmoebaNet with 33x fewer samples.""","""This paper proposes an MCTS method for neural architecture search (NAS). Evaluations on NAS-Bench-101 and other datasets are promising. Unfortunately, no code is provided, which is very important in NAS to overcome the reproducibility crisis. Discussion: The authors were able to answer several questions of the reviewers. I also do not share the concern of AnonReviewer2 that MCTS hasn't been used for NAS before; in contrast, this appears to be a point in favor of the paper's novelty. However, the authors' reply concerning Bayesian optimization and the optimization of its acquisition function is strange: using the ConvNet-60K dataset with 1364 networks, it does not appear to make sense to use only 1% or even only 0.01% of the dataset size as a budget for optimizing the acquisition function. The reviewers stuck to their rating of 6,3,3. Overall, I therefore recommend rejection. """
187,"""Classification Attention for Chinese NER""","['Chinese NER', 'NER', 'tagging', 'deeplearning', 'nlp']","""The character-based model, such as BERT, has achieved remarkable success in Chinese named entity recognition (NER). However, such model would likely miss the overall information of the entity words. In this paper, we propose to combine priori entity information with BERT. Instead of relying on additional lexicons or pre-trained word embeddings, our model has generated entity classification embeddings directly on the pre-trained BERT, having the merit of increasing model practicability and avoiding OOV problem. Experiments show that our model has achieved state-of-the-art results on 3 Chinese NER datasets.""","""The paper is interested in Chinese Name Entity Recognition, building on a BERT pre-trained model. All reviewers agree that the contribution has limited novelty. Motivation leading to the chosen architecture is also missing. In addition, the writing of the paper should be improved. """
188,"""On Computation and Generalization of Generative Adversarial Imitation Learning""",[],"""Generative Adversarial Imitation Learning (GAIL) is a powerful and practical approach for learning sequential decision-making policies. Different from Reinforcement Learning (RL), GAIL takes advantage of demonstration data by experts (e.g., human), and learns both the policy and reward function of the unknown environment. Despite the significant empirical progresses, the theory behind GAIL is still largely unknown. The major difficulty comes from the underlying temporal dependency of the demonstration data and the minimax computational formulation of GAIL without convex-concave structure. To bridge such a gap between theory and practice, this paper investigates the theoretical properties of GAIL. Specifically, we show: (1) For GAIL with general reward parameterization, the generalization can be guaranteed as long as the class of the reward functions is properly controlled; (2) For GAIL, where the reward is parameterized as a reproducing kernel function, GAIL can be efficiently solved by stochastic first order optimization algorithms, which attain sublinear convergence to a stationary solution. To the best of our knowledge, these are the first results on statistical and computational guarantees of imitation learning with reward/policy function ap- proximation. Numerical experiments are provided to support our analysis. ""","""The paper provides a theoretical analysis of the recent and popular Generative Adversarial Imitation Learning (GAIL) approach. Valuable new insights on generalization and convergence are developed, and put GAIL on a stronger theoretical foundation. Reviewer questions and suggestions were largely addressed during the rebuttal."""
189,"""Certifiably Robust Interpretation in Deep Learning""","['deep learning interpretation', 'robustness certificates', 'adversarial examples']","""Deep learning interpretation is essential to explain the reasoning behind model predictions. Understanding the robustness of interpretation methods is important especially in sensitive domains such as medical applications since interpretation results are often used in downstream tasks. Although gradient-based saliency maps are popular methods for deep learning interpretation, recent works show that they can be vulnerable to adversarial attacks. In this paper, we address this problem and provide a certifiable defense method for deep learning interpretation. We show that a sparsified version of the popular SmoothGrad method, which computes the average saliency maps over random perturbations of the input, is certifiably robust against adversarial perturbations. We obtain this result by extending recent bounds for certifiably robust smooth classifiers to the interpretation setting. Experiments on ImageNet samples validate our theory.""","""This paper discusses new methods to perform adversarial attacks on salience maps. In its current form, this paper in its current form has unfortunately has not convinced several of the reviewers/commenters of the motivation behind proposing such a method. I tend to share the same opinion. I would encourage the authors to re-think the motivation of the work, and if there are indeed solid use cases to express them explicitly in the next version of the paper."""
190,"""Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness""","['Trustworthy Machine Learning', 'Adversarial Robustness', 'Training Objective', 'Sample Density']","""Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy may not suffice to train robust models. Since collecting new training data could be costly, we focus on better utilizing the given data by inducing the regions with high sample density in the feature space, which could lead to locally sufficient samples for robust learning. We first formally show that the softmax cross-entropy (SCE) loss and its variants convey inappropriate supervisory signals, which encourage the learned feature points to spread over the space sparsely in training. This inspires us to propose the Max-Mahalanobis center (MMC) loss to explicitly induce dense feature regions in order to benefit robustness. Namely, the MMC loss encourages the model to concentrate on learning ordered and compact representations, which gather around the preset optimal centers for different classes. We empirically demonstrate that applying the MMC loss can significantly improve robustness even under strong adaptive attacks, while keeping state-of-the-art accuracy on clean inputs with little extra computation compared to the SCE loss.""","""This paper proposes an alternative loss function, the max-mahalanobis center loss, that is claimed to improve adversarial robustness. In terms of quality, the reviewers commented on the convincing experiments and theoretical results, and were happy to see the sample density analysis. In terms of clarity, the reviewers commented that the paper is well-written. The problem of adversarial robustness is relevant to the ICLR community, and the proposed approach is a novel and significant contribution in this area. The authors have also convincingly answered the questions of the authors and even provided new theoretical and experimental results in their final upload. """
191,"""Learning Neural Surrogate Model for Warm-Starting Bayesian Optimization""","['Bayesian optimization', 'meta learning', 'neural network', 'surrogate model', 'hyper-parameters tuning']","""Bayesian optimization is an effective tool to optimize black-box functions and popular for hyper-parameter tuning in machine learning. Traditional Bayesian optimization methods are based on Gaussian process (GP), relying on a GP-based surrogate model for sampling points of the function of interest. In this work, we consider transferring knowledge from related problems to target problem by learning an initial surrogate model for warm-starting Bayesian optimization. We propose a neural network-based surrogate model to estimate the function mean value in GP. Then we design a novel weighted Reptile algorithm with sampling strategy to learn an initial surrogate model from meta train set. The initial surrogate model is learned to be able to well adapt to new tasks. Extensive experiments show that this warm-starting technique enables us to find better minimizer or hyper-parameters than traditional GP and previous warm-starting methods.""","""This paper is concerned with warm-starting Bayesian optimization (i.e. starting with a better surrogate model) through transfer learning among related problems. While the key motivation for warm-starting BO is certainly important (although not novel), there are important shortcomings in the way the method is developed and demonstrated. Firstly, the reviewers questioned design decisions, such as why combine NNs and GPs in this particular way or why the posterior variance of the hybrid model is not calculated. Moreover, there are issues with the experimental methodology that do not allow extraction of confident conclusions (e.g. repeating the experiments for different initial points is highly desirable). Finally, there are presentation issues. The authors replied only to some of these concerns, but ultimately the shortcomings seem to persist and hint towards a paper that needs more work. """
192,"""Improving the Generalization of Visual Navigation Policies using Invariance Regularization""","['Generalization', 'Deep Reinforcement Learning', 'Invariant Representation']","""Training agents to operate in one environment often yields overfitted models that are unable to generalize to the changes in that environment. However, due to the numerous variations that can occur in the real-world, the agent is often required to be robust in order to be useful. This has not been the case for agents trained with reinforcement learning (RL) algorithms. In this paper, we investigate the overfitting of RL agents to the training environments in visual navigation tasks. Our experiments show that deep RL agents can overfit even when trained on multiple environments simultaneously. We propose a regularization method which combines RL with supervised learning methods by adding a term to the RL objective that would encourage the invariance of a policy to variations in the observations that ought not to affect the action taken. The results of this method, called invariance regularization, show an improvement in the generalization of policies to environments not seen during training. ""","""All the reviewers recommend rejecting the submission. There is no basis for acceptance."""
193,"""Empirical confidence estimates for classification by deep neural networks""","['confidence', 'classification', 'uncertainty', 'anomaly', 'robustness']","""How well can we estimate the probability that the classification predicted by a deep neural network is correct (or in the Top 5)? It is well-known that the softmax values of the network are not estimates of the probabilities of class labels. However, there is a misconception that these values are not informative. We define the notion of implied loss and prove that if an uncertainty measure is an implied loss, then low uncertainty means high probability of correct (or Top-k) classification on the test set. We demonstrate empirically that these values can be used to measure the confidence that the classification is correct. Our method is simple to use on existing networks: we proposed confidence measures for Top-k which can be evaluated by binning values on the test set. ""","""The paper proposes to model uncertainty using expected Bayes factors, and empirically show that the proposed measure correlates well with the probability that the classification is correct. All the reviewers agreed that the idea of using Bayes factors for uncertainty estimation is an interesting approach. However, the reviewers also found the presentation a bit hard to follow. While the rebuttal addressed some of these concerns, there were still some remaining concerns (see R3's comments). I think this is a really promising direction of research and I appreciate the authors' efforts to revise the draft during the rebuttal (which led to some reviewers increasing the score). This is a borderline paper right now but I feel that the paper has the potential to turn into a great paper with another round of revision. I encourage the authors to revise the draft and resubmit to a different venue."""
194,"""On Stochastic Sign Descent Methods""","['non-convex optimization', 'stochastic optimization', 'gradient compression']","""Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models. Sign-based methods, such as signSGD (Bernstein et al., 2018), have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM. In this paper, we perform a general analysis of sign-based methods for non-convex optimization. Our analysis is built on intuitive bounds on success probabilities and does not rely on special noise distributions nor on the boundedness of the variance of stochastic gradients. Extending the theory to distributed setting within a parameter server framework, we assure exponentially fast variance reduction with respect to number of nodes, maintaining 1-bit compression in both directions and using small mini-batch sizes. We validate our theoretical findings experimentally.""","""This paper proposes an analysis of signSGD in some special cases. SignGD has been shown to be of interest, whether because of its similarity to Adam or in quasi-convex settings. The complaint shared by reviewers was the strength of the conditions. SGC is really strong, I have yet to see increasing mini-batch sizes to be used in practice (although there are quite a few papers mentioning this technique to get a convergence rate) and the strength of the other two are harder to assess. With that said, the improvement compared to existing work such as Karimireddy et. al. 2019 is unclear. I encourage the authors to address the comment of the reviewers and to submit an improved version to a later, or perhaps to a more theoretical, convergence."""
195,"""Classification-Based Anomaly Detection for General Data""",['anomaly detection'],"""Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence. Recently, classification-based methods were shown to achieve superior results on this task. In this work, we present a unifying view and propose an open-set method, GOAD, to relax current generalization assumptions. Furthermore, we extend the applicability of transformation-based methods to non-image data using random affine transformations. Our method is shown to obtain state-of-the-art accuracy and is applicable to broad data types. The strong performance of our method is extensively validated on multiple datasets from different domains. ""","""The paper presents a method that unifies classification-based approaches for outlier detection and (one-class) anomaly detection. The paper also extends the applicability to non-image data. In the end, all the reviewers agreed that the paper makes a valuable contribution and I'm happy to recommend acceptance."""
196,"""Semi-supervised semantic segmentation needs strong, high-dimensional perturbations""","['computer vision', 'semantic segmentation', 'semi-supervised', 'consistency regularisation']","""Consistency regularization describes a class of approaches that have yielded ground breaking results in semi-supervised classification problems. Prior work has established the cluster assumption\,---\,under which the data distribution consists of uniform class clusters of samples separated by low density regions\,---\,as key to its success. We analyze the problem of semantic segmentation and find that the data distribution does not exhibit low density regions separating classes and offer this as an explanation for why semi-supervised segmentation is a challenging problem. We then identify the conditions that allow consistency regularization to work even without such low-density regions. This allows us to generalize the recently proposed CutMix augmentation technique to a powerful masked variant, CowMix, leading to a successful application of consistency regularization in the semi-supervised semantic segmentation setting and reaching state-of-the-art results in several standard datasets.""","""This paper proposes a method for semi-supervised semantic segmentation through consistency (with respect to various perturbations) regularization. While the reviewers believe that this paper contains interesting ideas and that it has been substantially improved from its original form, it is not yet ready for acceptance to ICLR-2020. With a little bit of polish, this paper is likely to be accepted at another venue."""
197,"""HOW IMPORTANT ARE NETWORK WEIGHTS? TO WHAT EXTENT DO THEY NEED AN UPDATE?""","['weights update', 'weights importance', 'weight freezing']","""In the context of optimization, a gradient of a neural network indicates the amount a specific weight should change with respect to the loss. Therefore, small gradients indicate a good value of the weight that requires no change and can be kept frozen during training. This paper provides an experimental study on the importance of a neural network weights, and to which extent do they need to be updated. We wish to show that starting from the third epoch, freezing weights which have no informative gradient and are less likely to be changed during training, results in a very slight drop in the overall accuracy (and in sometimes better). We experiment on the MNIST, CIFAR10 and Flickr8k datasets using several architectures (VGG19, ResNet-110 and DenseNet-121). On CIFAR10, we show that freezing 80% of the VGG19 network parameters from the third epoch onwards results in 0.24% drop in accuracy, while freezing 50% of Resnet-110 parameters results in 0.9% drop in accuracy and finally freezing 70% of Densnet-121 parameters results in 0.57% drop in accuracy. Furthermore, to experiemnt with real-life applications, we train an image captioning model with attention mechanism on the Flickr8k dataset using LSTM networks, freezing 60% of the parameters from the third epoch onwards, resulting in a better BLEU-4 score than the fully trained model. Our source code can be found in the appendix.""","""The authors demonstrate that starting from the 3rd epoch, freezing a large fraction of the weights (based on gradient information), but not entire layers, results in slight drops in performance. Given existing literature, the reviewers did not find this surprising, even though freezing only some of a layers weights has not been explicitly analyzed before. Although this is an interesting observation, the authors did not explain why this finding is important and it is unclear what the impact of such a finding will be. The authors are encouraged to expand on the implications of their finding and theoretical basis for it. Furthermore, reviewers raised concerns about the extensiveness of the empirical evaluation. This paper falls below the bar for ICLR, so I recommend rejection."""
198,"""Federated Adversarial Domain Adaptation""","['Federated Learning', 'Domain Adaptation', 'Transfer Learning', 'Feature Disentanglement']","""Federated learning improves data privacy and efficiency in machine learning performed over networks of distributed devices, such as mobile phones, IoT and wearable devices, etc. Yet models trained with federated learning can still fail to generalize to new devices due to the problem of domain shift. Domain shift occurs when the labeled data collected by source nodes statistically differs from the target node's unlabeled data. In this work, we present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node. Our approach extends adversarial adaptation techniques to the constraints of the federated setting. In addition, we devise a dynamic attention mechanism and leverage feature disentanglement to enhance knowledge transfer. Empirically, we perform extensive experiments on several image and text classification tasks and show promising results under unsupervised federated domain adaptation setting.""","""This paper studies an interesting new problem, federated domain adaptation, and proposes an approach based on dynamic attention, federated adversarial alignment, and representation disentanglement. Reviewers generally agree that the paper contributes a novel approach to an interesting problem with theoretical guarantees and empirical justification. While many professional concerns were raised by the reviewers, the authors managed to perform an effective rebuttal with a major revision, which addressed the concerns convincingly. AC believes that the updated version is acceptable. Hence I recommend acceptance."""
199,"""Global Relational Models of Source Code""","['Models of Source Code', 'Graph Neural Networks', 'Structured Learning']","""Models of code can learn distributed representations of a program's syntax and semantics to predict many non-trivial properties of a program. Recent state-of-the-art models leverage highly structured representations of programs, such as trees, graphs and paths therein (e.g. data-flow relations), which are precise and abundantly available for code. This provides a strong inductive bias towards semantically meaningful relations, yielding more generalizable representations than classical sequence-based models. Unfortunately, these models primarily rely on graph-based message passing to represent relations in code, which makes them de facto local due to the high cost of message-passing steps, quite in contrast to modern, global sequence-based models, such as the Transformer. In this work, we bridge this divide between global and structured models by introducing two new hybrid model families that are both global and incorporate structural bias: Graph Sandwiches, which wrap traditional (gated) graph message-passing layers in sequential message-passing layers; and Graph Relational Embedding Attention Transformers (GREAT for short), which bias traditional Transformers with relational information from graph edge types. By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation. Starting with a graph-based model that already improves upon the prior state-of-the-art for this task by 20%, we show that our proposed hybrid models improve an additional 10-15%, while training both faster and using fewer parameters.""","""The paper investigates hybrid NN architectures to represent programs, involving both local (RNN, Transformer) and global (Gated Graph NN) structures, with the goal of exploiting the program structure while permitting the fast flow of information through the whole program. The proof of concept for the quality of the representation is the performance on the VarMisuse task (identifying where a variable was replaced by another one, and which variable was the correct one). Other criteria regard the computational cost of training and number of parameters. Varied architectures, involving fast and local transmission with and without attention mechanisms, are investigated, comparing full graphs and compressed (leaves-only) graphs. The lessons learned concern the trade-off between the architecture of the model, the computational time and the learning curve. It is suggested that the Transformer learns from scratch to connect the tokens as appropriate; and that interleaving RNN and GNN allows for more effective processing, with less message passes and less parameters with improved accuracy. A first issue raised by the reviewers concerns the computational time (ca 100 hours on P100 GPUs); the authors focus on the performance gain w.r.t. GGNN in terms of computational time (significant) and in terms of epochs. Another concern raised by the reviewers is the moderate originality of the proposed architecture. I strongly recommend that the authors make their architecture public; this is imo the best way to evidence the originality of the proposed solution. The authors did a good job in answering the other concerns, in particular concerning the computational time and the choice of the samples. I thus recommend acceptance. """
200,"""Deep Network Classification by Scattering and Homotopy Dictionary Learning""","['dictionary learning', 'scattering transform', 'sparse coding', 'imagenet']","""We introduce a sparse scattering deep convolutional neural network, which provides a simple model to analyze properties of deep representation learning for classification. Learning a single dictionary matrix with a classifier yields a higher classification accuracy than AlexNet over the ImageNet 2012 dataset. The network first applies a scattering transform that linearizes variabilities due to geometric transformations such as translations and small deformations. A sparse pseudo-formula dictionary coding reduces intra-class variability while preserving class separation through projections over unions of linear spaces. It is implemented in a deep convolutional network with a homotopy algorithm having an exponential convergence. A convergence proof is given in a general framework that includes ALISTA. Classification results are analyzed on ImageNet.""","""After the rebuttal period the ratings on this paper increased and it now has a strong assessment across reviewers. The AC recommends acceptance."""
201,"""Localized Generations with Deep Neural Networks for Multi-Scale Structured Datasets""","['Variational autoencoder', 'Local learning', 'Model-agnostic meta-learning', 'Disentangled representation']","""Extracting the hidden structure of the external environment is an essential component of intelligent agents and human learning. The real-world datasets that we are interested in are often characterized by the locality: the change in the structural relationship between the data points depending on location in observation space. The local learning approach extracts semantic representations for these datasets by training the embedding model from scratch for each local neighborhood, respectively. However, this approach is only limited to use with a simple model, since the complex model, including deep neural networks, requires a massive amount of data and extended training time. In this study, we overcome this trade-off based on the insight that the real-world dataset often shares some structural similarity between each neighborhood. We propose to utilize the embedding model for the other local structure as a weak form of supervision. Our proposed model, the Local VAE, generalize the Variational Autoencoder to have the different model parameters for each local subset and train these local parameters by the gradient-based meta-learning. Our experimental results showed that the Local VAE succeeded in learning the semantic representations for the dataset with local structure, including the 3D Shapes Dataset, and generated high-quality images.""","""The paper presents a structured VAE, where the model parameters depend on a local structure (such as distance in feature or local space), and it uses the meta-learning framework to adjust the dependency of the model parameters to the local neighborhood. The idea is natural, as pointed by Rev#1. It incurs an extra learning cost, as noted by Rev#1 and #2, asking for details about the extra-cost. The authors' reply is (last alinea in first reply to Rev#1): we did not comment (...) because in essence, using neighborhoods in a naive way is not affordable. The area chair would like to know the actual computational time of Local VAE compared to that of the baselines. More details (for instance visualization) about the results on Cars3D and NORB would also be needed to better appreciate the impact of the locality structure. The fact that the optimal value (wrt Disentanglement) is rather low ( pseudo-formula ) would need be discussed, and assessed w.r.t. the standard deviation. In summary, the paper presents a good idea. More details about its impacts on the VAE quality, and its computation costs, are needed to fully appreciate its merits. """
202,"""LOGAN: Latent Optimisation for Generative Adversarial Networks""","['GAN', 'adversarial training', 'generative model', 'game theory']","""Training generative adversarial networks requires balancing of delicate adversarial dynamics. Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes. In this work, we introduce a new form of latent optimisation inspired by the CS-GAN and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator. We develop supporting theoretical analysis from the perspectives of differentiable games and stochastic approximation. Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet (128 x 128) dataset. Our model achieves an Inception Score (IS) of 148 and an Frechet Inception Distance (FID) of 3.4, an improvement of 17% and 32% in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters.""","""The authors propose to overcome challenges in GAN training through latent optimization, i.e. updating the latent code, motivated by natural gradients. The authors show improvement over previous methods. The work is well-motivated, but in my opinion, further experiments and comparisons need to be made before the work can be ready for publication. The authors write that ""Unfortunately, SGA is expensive to scale because computing the second-order derivatives with respect to all parameters is expensive"" and further ""Crucially, latent optimization approximates SGA using only second-order derivatives with respect to the latent z and parameters of the discriminator and generator separately. The second-order terms involving parameters of both the discriminator and the generator which are extremely expensive to compute are not used. For latent zs with dimensions typically used in GANs (e.g., 128256, orders of magnitude less than the number of parameters), these can be computed efficiently. In short, latent optimization efficiently couples the gradients of the discriminator and generator, as prescribed by SGA, but using the much lower-dimensional latent source z which makes the adjustment scalable."" However, this is not true. Computing the Hessian vector product is not that expensive. In fact, it can be computed at a cost comparable to gradient evaluations using automatic differentiation (Pearlmutter (1994)). In frameworks such as PyTorch, this can be done efficiently using double backpropagation, so only twice the cost. Based on the above, one of the main claims of improvement over existing methods, which is furthermore not investigated experimentally, is false. It is unacceptable that the authors do not compare with SGA: both in terms of quality and computational cost since that is the premise of the paper. The authors also miss recent works that successfully ran methods with Hessian-vector products: pseudo-url pseudo-url"""
203,"""VAENAS: Sampling Matters in Neural Architecture Search""",[],"""Neural Architecture Search (NAS) aims at automatically finding neural network architectures within an enormous designed search space. The search space usually contains billions of network architectures which causes extremely expensive computing costs in searching for the best-performing architecture. One-shot and gradient-based NAS approaches have recently shown to achieve superior results on various computer vision tasks such as image recognition. With the weight sharing mechanism, these methods lead to efficient model search. Despite their success, however, current sampling methods are either fixed or hand-crafted and thus ineffective. In this paper, we propose a learnable sampling module based on variational auto-encoder (VAE) for neural architecture search (NAS), named as VAENAS, which can be easily embedded into existing weight sharing NAS framework, e.g., one-shot approach and gradient-based approach, and significantly improve the performance of searching results. VAENAS generates a series of competitive results on CIFAR-10 and ImageNet in NasNet-like search space. Moreover, combined with one-shot approach, our method achieves a new state-of-the-art result for ImageNet classification model under 400M FLOPs with 77.4\% in ShuffleNet-like search space. Finally, we conduct a thorough analysis of VAENAS on NAS-bench-101 dataset, which demonstrates the effectiveness of our proposed methods. ""","""This paper proposes to represent the distribution w.r.t. which neural architecture search (NAS) samples architectures through a variational autoencoder, rather than through a fully factorized distribution (as previous work did). In the discussion, a few things improved (causing one reviewer to increase his/her score from 1 to 3), but it became clear that the empirical evaluation has issues, with a different search space being used for the method than for the baselines. There was unanimous agreement for rejection. I agree with this judgement and thus recommend rejection."""
204,"""Sentence embedding with contrastive multi-views learning""","['contrastive', 'multi-views', 'linguistic', 'embedding']","""In this work, we propose a self-supervised method to learn sentence representations with an injection of linguistic knowledge. Multiple linguistic frameworks propose diverse sentence structures from which semantic meaning might be expressed out of compositional words operations. We aim to take advantage of this linguist diversity and learn to represent sentences by contrasting these diverse views. Formally, multiple views of the same sentence are mapped to close representations. On the contrary, views from other sentences are mapped further. By contrasting different linguistic views, we aim at building embeddings which better capture semantic and which are less sensitive to the sentence outward form. ""","""This paper proposes a method to learn sentence representations that incorporates linguistic knowledge in the form of dependency trees using contrastive learning. Experiments on SentEval and probing tasks show that the proposed method underperform baseline methods. All reviewers agree that the results are not strong enough to support the claim of the paper and have some concerns about the scalability of the implementation. They also agree that the writing of the paper can be improved (details included in their reviews below). The authors acknowledged these concerns and mentioned that they will use them to improve the paper for future work, so I recommend rejecting this paper for ICLR."""
205,"""Continual Learning using the SHDL Framework with Skewed Replay Distributions""","['Continual Learning', 'Catastrophic Forgetting', 'SHDL', 'CIFAR-100']","""Human and animals continuously acquire, adapt as well as transfer knowledge throughout their lifespan. The ability to learn continuously is crucial for the effective functioning of agents interacting with the real world and processing continuous streams of information. Continuous learning has been a long-standing challenge for neural networks as the repeated acquisition of information from non-uniform data distributions generally lead to catastrophic forgetting or interference. This work proposes a modular architecture capable of continuous acquisition of tasks while averting catastrophic forgetting. Specifically, our contributions are: (i) Efficient Architecture: a modular architecture emulating the visual cortex that can learn meaningful representations with limited labelled examples, (ii) Knowledge Retention: retention of learned knowledge via limited replay of past experiences, (iii) Forward Transfer: efficient and relatively faster learning on new tasks, and (iv) Naturally Skewed Distributions: The learning in the above-mentioned claims is performed on non-uniform data distributions which better represent the natural statistics of our ongoing experience. Several experiments that substantiate the above-mentioned claims are demonstrated on the CIFAR-100 dataset.""","""The paper adapts a previously proposed modular deep network architecture (SHDL) for supervised learning in a continual learning setting. One problem in this setting is catastrophic forgetting. The proposed solution replays a small fraction of the data from old tasks to avoid forgetting, on top of a modular architecture that facilitates fast transfer when new tasks are added. The method is developed for image inputs and evaluated experimentally on CIFAR-100. The reviews were in agreement that this paper is not ready for publication. All the reviews had concerns about the lack of explanation of the proposed solution and the experimental methods. The reviewers were concerned about the choice of metrics not being comparable or justified: Reviewer4 wanted an apples-to-apples comparison, Reviewer1 suggested the paper follow the evaluation paradigm used in earlier papers, and Reviewer2 described the absence of an explained baseline value. Two reviewers (Reviewer4 and Reviewer2) described the lack of details on the parameters, architecture, and training regime used for the experiments. The paper did not not justify which aspects of the modular system contributed to the observed performance (Reviewer4 and Reviewer1). Several additional concerns were also raised. The authors did not respond to any of the concerns raised by the reviewers. """
206,"""Semi-supervised Pose Estimation with Geometric Latent Representations""","['Semi-supervised learning', 'pose estimation', 'angle estimation', 'variational autoencoders']","""Pose estimation is the task of finding the orientation of an object within an image with respect to a fixed frame of reference. Current classification and regression approaches to the task require large quantities of labelled data for their purposes. The amount of labelled data for pose estimation is relatively limited. With this in mind, we propose the use of Conditional Variational Autoencoders (CVAEs) \cite{Kingma2014a} with circular latent representations to estimate the corresponding 2D rotations of an object. The method is capable of training with datasets that have an arbitrary amount of labelled images providing relatively similar performance for cases in which 10-20% of the labels for images is missing. ""","""This paper addresses the problem of rotation estimation in 2D images. The method attempted to reduce the labeling need by learning in a semi-supervised fashion. The approach learns a VAE where the latent code is be factored into the latent vector and the object rotation. All reviewers agreed that this paper is not ready for acceptance. The reviewers did express promise in the direction of this work. However, there were a few main concerns. First, the focus on 2D instead of 3D orientation. The general consensus was that 3D would be more pertinent use case and that extension of the proposed approach from 2D to 3D is likely non-trivial. The second issue is that minimal technical novelty. The reviewers argue that the proposed solution is a combination of existing techniques to a new problem area. Since the work does not have sufficient technical novelty to compare against other disentanglement works and is being applied to a less relevant experimental setting, the AC does not recommend acceptance. """
207,"""GUIDEGAN: ATTENTION BASED SPATIAL GUIDANCE FOR IMAGE-TO-IMAGE TRANSLATION""","['Image-to-Image translation', 'Attention Learning', 'GAN']","""Recently, Generative Adversarial Network (GAN) and numbers of its variants have been widely used to solve the image-to-image translation problem and achieved extraordinary results in both a supervised and unsupervised manner. However, most GAN-based methods suffer from the imbalance problem between the generator and discriminator in practice. Namely, the relative model capacities of the generator and discriminator do not match, leading to mode collapse and/or diminished gradients. To tackle this problem, we propose a GuideGAN based on attention mechanism. More specifically, we arm the discriminator with an attention mechanism so not only it estimates the probability that its input is real, but also does it create an attention map that highlights the critical features for such prediction. This attention map then assists the generator to produce more plausible and realistic images. We extensively evaluate the proposed GuideGAN framework on a number of image transfer tasks. Both qualitative results and quantitative comparison demonstrate the superiority of our proposed approach.""","""The paper proposes to augment the conditional GAN discriminator with an attention mechanism, with the aim to help the generator, in the context of image to image translation. The reviewers raise several issues in their reviews. One theoretical concern has to do with how the training of the attention mechanism (which seems to be collaborative) would interact with the minimax, zero-sum nature of a GAN objective; another with the discrepancy in how the attention map is used during training and testing. The experimental results were not significant enough, and the reviewers also recommend additional experiment results to clearly demonstrate the benefit of the method. """
208,"""Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples""","['few-shot learning', 'meta-learning', 'few-shot classification']","""Few-shot classification refers to learning a classifier for new classes given only a few examples. While a plethora of models have emerged to tackle it, we find the procedure and datasets that are used to assess their progress lacking. To address this limitation, we propose Meta-Dataset: a new benchmark for training and evaluating models that is large-scale, consists of diverse datasets, and presents more realistic tasks. We experiment with popular baselines and meta-learners on Meta-Dataset, along with a competitive method that we propose. We analyze performance as a function of various characteristics of test tasks and examine the models ability to leverage diverse training sources for improving their generalization. We also propose a new set of baselines for quantifying the benefit of meta-learning in Meta-Dataset. Our extensive experimentation has uncovered important research challenges and we hope to inspire work in these directions.""","""While the reviewers have some outstanding issues regarding the organization and clarity of the paper, the overall consensus is that the proposed evaluation methods is a useful improvement over current standards for meta-learning."""
209,"""Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks""","['Neural Tangent Kernels', 'over-parametrized neural networks', 'deep learning theory']","""Recent theoretical work has established connections between over-parametrized neural networks and linearized models governed by the Neural Tangent Kernels (NTKs). NTK theory leads to concrete convergence and generalization results, yet the empirical performance of neural networks are observed to exceed their linearized models, suggesting insufficiency of this theory. Towards closing this gap, we investigate the training of over-parametrized neural networks that are beyond the NTK regime yet still governed by the Taylor expansion of the network. We bring forward the idea of randomizing the neural networks, which allows them to escape their NTK and couple with quadratic models. We show that the optimization landscape of randomized two-layer networks are nice and amenable to escaping-saddle algorithms. We prove concrete generalization and expressivity results on these randomized networks, which lead to sample complexity bounds (of learning certain simple functions) that match the NTK and can in addition be better by a dimension factor when mild distributional assumptions are present. We demonstrate that our randomization technique can be generalized systematically beyond the quadratic case, by using it to find networks that are coupled with higher-order terms in their Taylor series. ""","""This paper studies the training of over-parameterized two-layer neural networks by considering high-order Taylor approximation, and randomizing the network to remove the first order term in the networks Taylor expansion. This enables the neural network training go beyond the recently so-called neural tangent kernel (NTK) regime. The authors also established the optimization landscape, generalization error and expressive power results under the proposed analysis framework. They showed that when learning polynomials, the proposed randomized networks with quadratic Taylor approximation outperform standard NTK by a factor of the input dimension. This is a very nice work, and provides a new perspective on NTK and beyond. All reviewers are in support of accepting this paper. """
210,"""Kronecker Attention Networks""",[],"""Attention operators have been applied on both 1-D data like texts and higher-order data such as images and videos. Use of attention operators on high-order data requires flattening of the spatial or spatial-temporal dimensions into a vector, which is assumed to follow a multivariate normal distribution. This not only incurs excessive requirements on computational resources, but also fails to preserve structures in data. In this work, we propose to avoid flattening by developing Kronecker attention operators (KAOs) that operate on high-order tensor data directly. KAOs lead to dramatic reductions in computational resources. Moreover, we analyze KAOs theoretically from a probabilistic perspective and point out that KAOs assume the data follow matrix-variate normal distributions. Experimental results show that KAOs reduce the amount of required computational resources by a factor of hundreds, with larger factors for higher-dimensional and higher-order data. Results also show that networks with KAOs outperform models without attention, while achieving competitive performance as those with original attention operators.""","""This submission has been assessed by three reviewers who scored it as 3/3/3. The main criticism includes lack of motivation for sections 3.1 and 3.2, comparisons to mere regular self-attention without encompassing more works on this topic, a connection between Theorem 1 and the rest of the paper seems missing. Finally, there exists a strong resemblance to another submission by the same authors which is also raises the questions about potentially a dual submission. Even excluding the last argument, lack of responses to reviewers does not help this case. Thus, this paper cannot be accepted by ICLR2020."""
211,"""INSTANCE CROSS ENTROPY FOR DEEP METRIC LEARNING""","['Deep Metric Learning', 'Instance Cross Entropy', 'Sample Mining/Weighting', 'Image Retrieval']","""Loss functions play a crucial role in deep metric learning thus a variety of them have been proposed. Some supervise the learning process by pairwise or tripletwise similarity constraints while others take the advantage of structured similarity information among multiple data points. In this work, we approach deep metric learning from a novel perspective. We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one. ICE has three main appealing properties. Firstly, similar to categorical cross entropy (CCE), ICE has clear probabilistic interpretation and exploits structured semantic similarity information for learning supervision. Secondly, ICE is scalable to infinite training data as it learns on mini-batches iteratively and is independent of the training set size. Thirdly, motivated by our relative weight analysis, seamless sample reweighting is incorporated. It rescales samples gradients to control the differentiation degree over training examples instead of truncating them by sample mining. In addition to its simplicity and intuitiveness, extensive experiments on three real-world benchmarks demonstrate the superiority of ICE.""","""The paper proposes a new objective function called ICE for metric learning. There was a substantial discussion with the authors about this paper. The two reviewers most experienced in the field found the novelty compared to the vast existing literature lacking, and remained unconvinced after the discussion. Some reviewers also found the technical presentation and interpretations to need improvement, and this was partially addressed by a new revision. Based on this discussion, I recommend a rejection at this time, but encourage the authors to incorporate the feedback and in particular place the work in context more fully, and resubmit to another venue."""
212,"""A closer look at the approximation capabilities of neural networks""","['deep learning', 'approximation', 'universal approximation theorem']","""The universal approximation theorem, in one of its most general versions, says that if we consider only continuous activation functions , then a standard feedforward neural network with one hidden layer is able to approximate any continuous multivariate function f to any given approximation threshold , if and only if is non-polynomial. In this paper, we give a direct algebraic proof of the theorem. Furthermore we shall explicitly quantify the number of hidden units required for approximation. Specifically, if X in R^n is compact, then a neural network with n input units, m output units, and a single hidden layer with {n+d choose d} hidden units (independent of m and ), can uniformly approximate any polynomial function f:X -> R^m whose total degree is at most d for each of its m coordinate functions. In the general case that f is any continuous function, we show there exists some N in O(^{-n}) (independent of m), such that N hidden units would suffice to approximate f. We also show that this uniform approximation property (UAP) still holds even under seemingly strong conditions imposed on the weights. We highlight several consequences: (i) For any > 0, the UAP still holds if we restrict all non-bias weights w in the last layer to satisfy |w| < . (ii) There exists some >0 (depending only on f and ), such that the UAP still holds if we restrict all non-bias weights w in the first layer to satisfy |w|>. (iii) If the non-bias weights in the first layer are *fixed* and randomly chosen from a suitable range, then the UAP holds with probability 1.""","""This is a nice paper on the classical problem of universal approximation, but giving a direct proof with good approximation rates, and providing many refinements and ties to the literature. If possible, I urge the authors to revise the paper further for camera ready; there are various technical oversights (e.g., 1/lambda should appear in the approximation rates in theorem 3.1), and the proof of theorem 3.1 is an uninterrupted 2.5 page block (splitting it into lemmas would make it cleaner, and also those lemmas could be useful to other authors)."""
213,"""Knowledge Hypergraphs: Prediction Beyond Binary Relations""","['knowledge graphs', 'knowledge hypergraphs', 'knowledge hypergraph completion']","""A Knowledge Hypergraph is a knowledge base where relations are defined on two or more entities. In this work, we introduce two embedding-based models that perform link prediction in knowledge hypergraphs: (1) HSimplE is a shift-based method that is inspired by an existing model operating on knowledge graphs, in which the representation of an entity is a function of its position in the relation, and (2) HypE is a convolution-based method which disentangles the representation of an entity from its position in the relation. We test our models on two new knowledge hypergraph datasets that we obtain from Freebase, and show that both HSimplE and HypE are more effective in predicting links in knowledge hypergraphs than the proposed baselines and existing methods. Our experiments show that HypE outperforms HSimplE when trained with fewer parameters and when tested on samples that contain at least one entity in a position never encountered during training.""","""The paper proposes two methods for link prediction in knowledge hypergraphs. The first method concatenates the embedding of all entities and relations in a hyperedge. The second method combines an entity embedding, a relation embedding, and a weighted convolution of positions. The authors demonstrate on two datasets (derived by the authors from Freebase), that the proposed methods work well compared to baselines. The paper proposes direct generalizations of knowledge graph approaches, and unfortunately does not yet provide a comprehensive coverage of the possible design space of the two proposed extensions. The authors should be commended for providing the source code for reproducibility. One of the reviewers (who was unfortunately also the most negative), was time pressed. Unfortunately, the discussion period was not used by the reviewers to respond to the authors' rebuttal of their concerns. Even discounting the most negative review, this paper is on the borderline, and given the large number of submissions to ICLR, it unfortunately falls below the acceptance threshold in its current form. """
214,"""Evaluating The Search Phase of Neural Architecture Search""","['Neural architecture search', 'parameter sharing', 'random search', 'evaluation framework']",""" Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we propose to evaluate the NAS search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. We find that: (i) On average, the state-of-the-art NAS algorithms perform similarly to the random policy; (ii) the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process. We believe that our evaluation framework will be key to designing NAS strategies that consistently discover architectures superior to random ones.""","""This is one of several recent parallel papers that pointed out issues with neural architecture search (NAS). It shows that several NAS algorithms do not perform better than random search and finds that their weight sharing mechanism leads to low correlations of the search performance and final evaluation performance. Code is available to ensure reproducibility of the work. After the discussion period, all reviewers are mildly in favour of accepting the paper. My recommendation is therefore to accept the paper. The paper's results may in part appear to be old news by now, but they were not when the paper first appeared on arXiv (in parallel to Li & Talwalkar, so similarities to that work should not be held against this paper)."""
215,"""Learning to Learn by Zeroth-Order Oracle""","['learning to learn', 'zeroth-order optimization', 'black-box adversarial attack']","""In the learning to learn (L2L) framework, we cast the design of optimization algorithms as a machine learning problem and use deep neural networks to learn the update rules. In this paper, we extend the L2L framework to zeroth-order (ZO) optimization setting, where no explicit gradient information is available. Our learned optimizer, modeled as recurrent neural network (RNN), first approximates gradient by ZO gradient estimator and then produces parameter update utilizing the knowledge of previous iterations. To reduce high variance effect due to ZO gradient estimator, we further introduce another RNN to learn the Gaussian sampling rule and dynamically guide the query direction sampling. Our learned optimizer outperforms hand-designed algorithms in terms of convergence rate and final solution on both synthetic and practical ZO optimization tasks (in particular, the black-box adversarial attack task, which is one of the most widely used tasks of ZO optimization). We finally conduct extensive analytical experiments to demonstrate the effectiveness of our proposed optimizer.""","""This paper proposes to extend learning to learn framework based on zeroth-order optimization. Generally, the paper is well presented and easy to follow. The core idea is to incorporate another RNN to adaptively to learn the Gaussian sampling rule. Although the method does not seem to have a strong theorical support, its effectiveness is evaluated in the well-organized experiments including realistic tasks like black-box adversarial attack. All reviewers including two experts in this field admit the novelty of the methods and are positive to the acceptance. Id like to support their opinions and recommend accepting the paper. As R#1 still finds some details unclear, please try to clarify these points in the final version of the paper."""
216,"""High-Frequency guided Curriculum Learning for Class-specific Object Boundary Detection""","['Computer Vision', 'Object Contour Detection', 'Curriculum Learning', 'Wavelets', 'Aerial Imagery']","""This work addresses class-specific object boundary extraction, i.e., retrieving boundary pixels that belong to a class of objects in the given image. Although recent ConvNet-based approaches demonstrate impressive results, we notice that they produce several false-alarms and misdetections when used in real-world applications. We hypothesize that although boundary detection is simple at some pixels that are rooted in identifiable high-frequency locations, other pixels pose a higher level of difficulties, for instance, region pixels with an appearance similar to the boundaries; or boundary pixels with insignificant edge strengths. Therefore, the training process needs to account for different levels of learning complexity in different regions to overcome false alarms. In this work, we devise a curriculum-learning-based training process for object boundary detection. This multi-stage training process first trains the network at simpler pixels (with sufficient edge strengths) and then at harder pixels in the later stages of the curriculum. We also propose a novel system for object boundary detection that relies on a fully convolutional neural network (FCN) and wavelet decomposition of image frequencies. This system uses high-frequency bands from the wavelet pyramid and augments them to conv features from different layers of FCN. Our ablation studies with contourMNIST dataset, a simulated digit contours from MNIST, demonstrate that this explicit high-frequency augmentation helps the model to converge faster. Our model trained by the proposed curriculum scheme outperforms a state-of-the-art object boundary detection method by a significant margin on a challenging aerial image dataset. ""","""This paper received all negative reviewers, and the scores were kept after the rebuttal. The authors are encouraged to submit their work to a computer vision conference where this kind of work may be more appreciated. Furthermore, including stronger baselines such as Acuna et al is recommended."""
217,"""Exploring Model-based Planning with Policy Networks""","['reinforcement learning', 'model-based reinforcement learning', 'planning']","""Model-based reinforcement learning (MBRL) with model-predictive control or online planning has shown great potential for locomotion control tasks in both sample efficiency and asymptotic performance. Despite the successes, the existing planning methods search from candidate sequences randomly generated in the action space, which is inefficient in complex high-dimensional environments. In this paper, we propose a novel MBRL algorithm, model-based policy planning (POPLIN), that combines policy networks with online planning. More specifically, we formulate action planning at each time-step as an optimization problem using neural networks. We experiment with both optimization w.r.t. the action sequences initialized from the policy network, and also online optimization directly w.r.t. the parameters of the policy network. We show that POPLIN obtains state-of-the-art performance in the MuJoCo benchmarking environments, being about 3x more sample efficient than the state-of-the-art algorithms, such as PETS, TD3 and SAC. To explain the effectiveness of our algorithm, we show that the optimization surface in parameter space is smoother than in action space. Further more, we found the distilled policy network can be effectively applied without the expansive model predictive control during test time for some environments such as Cheetah. Code is released.""","""This paper proposes a model-based policy optimization approach that uses both a policy and model to plan online at test time. The paper includes significant contributions and strong results in comparison to a number of prior works, and is quite relevant to the ICLR community. There are a couple of related works that are missing [1,2] that combine learned policies and learned models, but generally the discussion of prior work is thorough. Overall, the paper is clearly above the bar for acceptance. [1] pseudo-url [2] pseudo-url"""
218,"""Variational Diffusion Autoencoders with Random Walk Sampling""","['generative models', 'variational inference', 'manifold learning', 'diffusion maps']","""Variational inference (VI) methods and especially variational autoencoders (VAEs) specify scalable generative models that enjoy an intuitive connection to manifold learning --- with many default priors the posterior/likelihood pair pseudo-formula / pseudo-formula can be viewed as an approximate homeomorphism (and its inverse) between the data manifold and a latent Euclidean space. However, these approximations are well-documented to become degenerate in training. Unless the subjective prior is carefully chosen, the topologies of the prior and data distributions often will not match. Conversely, diffusion maps (DM) automatically \textit{infer} the data topology and enjoy a rigorous connection to manifold learning, but do not scale easily or provide the inverse homeomorphism. In this paper, we propose \textbf{a)} a principled measure for recognizing the mismatch between data and latent distributions and \textbf{b)} a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model. The measure, the \textit{locally bi-Lipschitz property}, is a sufficient condition for a homeomorphism and easy to compute and interpret. The method, the \textit{variational diffusion autoencoder} (VDAE), is a novel generative algorithm that first infers the topology of the data distribution, then models a diffusion random walk over the data. To achieve efficient computation in VDAEs, we use stochastic versions of both variational inference and manifold learning optimization. We prove approximation theoretic results for the dimension dependence of VDAEs, and that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold. Finally, we demonstrate the utility of our method on various real and synthetic datasets, and show that it exhibits performance superior to other generative models.""","""This paper proposes to train latent-variable models (VAEs) based on diffusion maps on the data-manifold. While this is an interesting idea, there are substantial problems with the current draft regarding clarity, novelty and scalability. In its current form, it is unlikely that the proposed model will have a substantial impact on the community."""
219,"""What Can Learned Intrinsic Rewards Capture?""","['reinforcement learning', 'deep reinforcement learning', 'intrinsic movitation']","""Reinforcement learning agents can include different components, such as policies, value functions, state representations, and environment models. Any or all of these can be the loci of knowledge, i.e., structures where knowledge, whether given or learned, can be deposited and reused. Regardless of its composition, the objective of an agent is behave so as to maximise the sum of suitable scalar functions of state: the rewards. As far as the learning algorithm is concerned, these rewards are typically given and immutable. In this paper we instead consider the proposition that the reward function itself may be a good locus of knowledge. This is consistent with a common use, in the literature, of hand-designed intrinsic rewards to improve the learning dynamics of an agent. We adopt a multi-lifetime setting of the Optimal Rewards Framework, and investigate how meta-learning can be used to find good reward functions in a data-driven way. To this end, we propose to meta-learn an intrinsic reward function that allows agents to maximise their extrinsic rewards accumulated until the end of their lifetimes. This long-term lifetime objective allows our learned intrinsic reward to generate systematic multi-episode exploratory behaviour. Through proof-of-concept experiments, we elucidate interesting forms of knowledge that may be captured by a suitably trained intrinsic reward such as the usefulness of exploring uncertain states and rewards.""","""The authors present a metalearning-based approach to learning intrinsic rewards that improve RL performance across distributions of problems. This is essentially a more computationally efficient approach to approaches suggested by Singh (2009/10). The reviewers agreed that the core idea was good, if a bit incremental, but were also concerned about the similarity to the Singh et al. work, the simplicity of the toy domains tested, and comparison to relevant methods. The reviewers felt that the authors addressed their main concerns and significantly improved the paper; however the similarity to Singh et al. remains, and thus the concerns about incrementalism. Thus, I recommend this paper for rejection at this time."""
220,"""Multi-step Greedy Policies in Model-Free Deep Reinforcement Learning""","['Reinforcement Learning', 'Multi-step greedy policies', 'Model free Reinforcement Learning']","""Multi-step greedy policies have been extensively used in model-based Reinforcement Learning (RL) and in the case when a model of the environment is available (e.g., in the game of Go). In this work, we explore the benefits of multi-step greedy policies in model-free RL when employed in the framework of multi-step Dynamic Programming (DP): multi-step Policy and Value Iteration. These algorithms iteratively solve short-horizon decision problems and converge to the optimal solution of the original one. By using model-free algorithms as solvers of the short-horizon problems we derive fully model-free algorithms which are instances of the multi-step DP framework. As model-free algorithms are prone to instabilities w.r.t. the decision problem horizon, this simple approach can help in mitigating these instabilities and results in an improved model-free algorithms. We test this approach and show results on both discrete and continuous control problems.""","""This paper extends recent multi-step dynamic programming algorithms to reinforcement learning with function approximation. In particular, the paper extends h-step optimal Bellman operators (and associated k-PI and k-VI algorithms) to deep reinforcement learning. The paper describes new extensions to DQN and TRPO algorithms. This approach is claimed to reduce the instability of model-free algorithms, and the approach is tested on Atari and Mujoco domains. The reviewers noticed several limitations of the work. The reviewers found little theoretical contribution in this work and they were unsatisfied with the empirical contributions. The reviewers were unconvinced of the strength and clarity of the empirical results with the Atari and Mujoco domains along with the deep learning network architectures. The reviewers suggested that simpler domains with a simpler function approximation scheme could enable more through experiments and more conclusive results. The claim in the abstract of addressing the instabilities was also not adequately studied in the paper. This paper is not ready for publication. The primary contribution of this work is the empirical evaluation, and the evaluation is not sufficiently clear for the reviewers."""
221,"""XLDA: Cross-Lingual Data Augmentation for Natural Language Inference and Question Answering""","['cross-lingual', 'transfer learning', 'BERT']","""While natural language processing systems often focus on a single language, multilingual transfer learning has the potential to improve performance, especially for low-resource languages. We introduce XLDA, cross-lingual data augmentation, a method that replaces a segment of the input text with its translation in another language. XLDA enhances performance of all 14 tested languages of the cross-lingual natural language inference (XNLI) benchmark. With improvements of up to 4.8, training with XLDA achieves state-of-the-art performance for Greek, Turkish, and Urdu. XLDA is in contrast to, and performs markedly better than, a more naive approach that aggregates examples in various languages in a way that each example is solely in one language. On the SQuAD question answering task, we see that XLDA provides a 1.0 performance increase on the English evaluation set. Comprehensive experiments suggest that most languages are effective as cross-lingual augmentors, that XLDA is robust to a wide range of translation quality, and that XLDA is even more effective for randomly initialized models than for pretrained models.""","""The authors provide an analysis of a cross-lingual data augmentation technique which they call XLDA. This consists of replacing a segment of an input text with its translation in another language. They show that when fine-tuning, it is more beneficial to train on the cross-lingual hypotheses than on the in-language pairs, especially for low resource languages such as Greek, Turkish and Urdu. The paper explores an interesting idea however they lack comparison with other techniques such as backtranslation and XLM models, and would benefit from a wider range of tasks. I feel like this paper is more suitable for an NLP-focussed venue. """
222,"""Coloring graph neural networks for node disambiguation""","['Graph neural networks', 'separability', 'node disambiguation', 'universal approximation', 'representation learning']","""In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs). More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and show that this representation is a universal approximator of continuous functions on graphs with node attributes. Our method relies on separability, a key topological characteristic that allows to extend well-chosen neural networks into universal representations. Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while being state-of-the-art on benchmark graph classification datasets.""","""This paper presents an extension of MPNN which leverages the random color augmentation to improve the representation power of MPNN. The experimental results shows the effectiveness of colorization. A majority of the reviewers were particularly concerned about lacking permutation invariance in the approach as well as the large variance issue in practice, and their opinion stays the same after the rebuttal. The reviewers unanimously expressed their concerns on the large variance issue during the discussion period. Overall, the reviewers believe that the authors has not addressed their concerns sufficiently."""
223,"""Neural Arithmetic Units""",[],"""Neural networks can approximate complex functions, but they struggle to perform exact arithmetic operations over real numbers. The lack of inductive bias for arithmetic operations leaves neural networks without the underlying logic necessary to extrapolate on tasks such as addition, subtraction, and multiplication. We present two new neural network components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction; and the Neural Multiplication Unit (NMU) that can multiply subsets of a vector. The NMU is, to our knowledge, the first arithmetic neural network component that can learn to multiply elements from a vector, when the hidden size is large. The two new components draw inspiration from a theoretical analysis of recently proposed arithmetic components. We find that careful initialization, restricting parameter space, and regularizing for sparsity is important when optimizing the NAU and NMU. Our proposed units NAU and NMU, compared with previous neural units, converge more consistently, have fewer parameters, learn faster, can converge for larger hidden sizes, obtain sparse and meaningful weights, and can extrapolate to negative and small values.""","""This paper extends work on NALUs, providing a pair of units which, in tandem, outperform NALUs. The reviewers were broadly in favour of the paper given the presentation and results. The one dissenting reviewer appears to not have had time to reconsider their score despite the main points of clarification being addressed in the revision. I am happy to err on the side of optimism here and assume they would be satisfied with the changes that came as an outcome of the discussion, and recommend acceptance."""
224,"""AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty""","['robustness', 'uncertainty']","""Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half. ""","""This paper tackles the problem of learning under data shift, i.e. when the training and testing distributions are different. The authors propose an approach to improve robustness and uncertainty of image classifiers in this situation. The technique uses synthetic samples created by mixing multiple augmented images, in addition to a Jensen-Shannon Divergence consistency loss. Its evaluation is entirely based on experimental evidence. The method is simple, easy to implement, and effective. Though this is a purely empirical paper, the experiments are extensive and convincing. In the end, the reviewers didn't show any objections against this paper. I therefore recommend acceptance."""
225,"""Variational Template Machine for Data-to-Text Generation""",[],"""How to generate descriptions from structured data organized in tables? Existing approaches using neural encoder-decoder models often suffer from lacking diversity. We claim that an open set of templates is crucial for enriching the phrase constructions and realizing varied generations.Learning such templates is prohibitive since it often requires a large paired
, which is seldom available. This paper explores the problem of automatically learning reusable ""templates"" from paired and non-paired data. We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables. Our contributions include: a) we carefully devise a specific model architecture and losses to explicitly disentangle text template and semantic content information, in the latent spaces, and b) we utilize both small parallel data and large raw text without aligned tables to enrich the template learning. Experiments on datasets from a variety of different domains show that VTM is able to generate more diversely while keeping a good fluency and quality. ""","""The paper addresses the problem of generating descriptions from structured data. In particular a Variational Template Machine which explicitly disentangles templates from semantic content. They empirically demonstrate that their model performs better than existing methods on different methods. This paper has received a strong acceptance from two reviewers. In particular, the reviewers have appreciated the novelty and empirical evaluation of the proposed approach. R3 has raised quite a few concerns but I feel they were adequately addressed by the reviewers. Hence, I recommend that the paper be accepted. """
226,"""Potential Flow Generator with pseudo-formula Optimal Transport Regularity for Generative Models""","['generative models', 'optimal transport', 'GANs', 'flow-based models']","""We propose a potential flow generator with pseudo-formula optimal transport regularity, which can be easily integrated into a wide range of generative models including different versions of GANs and flow-based models. With up to a slight augmentation of the original generator loss functions, our generator is not only a transport map from the input distribution to the target one, but also the one with minimum pseudo-formula transport cost. We show the correctness and robustness of the potential flow generator in several 2D problems, and illustrate the concept of ``proximity'' due to the pseudo-formula optimal transport regularity. Subsequently, we demonstrate the effectiveness of the potential flow generator in image translation tasks with unpaired training data from the MNIST dataset and the CelebA dataset. ""","""This paper proposes applying potential flow generators in conjunction with L2 optimal transport regularity to favor solutions that ""move"" input points as little as possible to output points drawn from the target distribution. The resulting pipeline can be effective in dealing with, among other things, image-to-image translation tasks with unpaired data. Overall, one of the appeals of this methodology is that it can be integrated within a number of existing generative modeling paradigms (e.g., GANs, etc.). After the rebuttal and discussion period, two reviewers maintained weak reject scores while one favored strong acceptance. With these borderline/mixed scores, this paper was discussed at the meta-review level and the final decision was to side with the majority, noting that a revision which fully addresses reviewer comments could likely be successful at a future venue. As one important lingering issue, R1 pointed out that the optimality conditions of the proposed approach are only enforced on sampled trajectories, not actually on the entire space. The rebuttal concedes this point, but suggests that the method still seems to work. But as an improvement, the suggestion is made that randomly perturbed trajectories could help to mitigate this issue. However, no experiments were conducted using this modification, which could be helpful in building confidence in the reliability of the overall methodology. Additionally, from my perspective the empirical validation could also be improved to help solidify the contribution in a revision. For example, the image-to-image translation experiments with CelebA were based on a linear (PCA) embedding and feedforward networks. It would have been nice to have seen a more sophisticated setup for this purpose (as discussed in Section 5), especially for a non-theoretical paper with an ostensibly practically-relevant algorithmic proposal. And consistent with reviewer comments, the paper definitely needs another pass to clean up a number of small grammatical mistakes."""
227,"""CLEVRER: Collision Events for Video Representation and Reasoning""","['Neuro-symbolic', 'Reasoning']","""The ability to reason about temporal and causal events from videos lies at the core of human intelligence. Most video reasoning benchmarks, however, focus on pattern recognition from complex visual and language input, instead of on causal structure. We study the complementary problem, exploring the temporal and causal structures behind videos of objects with simple visual appearance. To this end, we introduce the CoLlision Events for Video REpresentation and Reasoning (CLEVRER) dataset, a diagnostic video dataset for systematic evaluation of computational models on a wide range of reasoning tasks. Motivated by the theory of human casual judgment, CLEVRER includes four types of question: descriptive (e.g., what color), explanatory (whats responsible for), predictive (what will happen next), and counterfactual (what if). We evaluate various state-of-the-art models for visual reasoning on our benchmark. While these models thrive on the perception-based task (descriptive), they perform poorly on the causal tasks (explanatory, predictive and counterfactual), suggesting that a principled approach for causal reasoning should incorporate the capability of both perceiving complex visual and language inputs, and understanding the underlying dynamics and causal relations. We also study an oracle model that explicitly combines these components via symbolic representations. ""","""The reviewers are unanimous in their opinion that this paper offers a novel approach to causal learning. I concur."""
228,"""On the Weaknesses of Reinforcement Learning for Neural Machine Translation""","['Reinforcement learning', 'MRT', 'minimum risk training', 'reinforce', 'machine translation', 'peakkiness', 'generation']","""Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GAN). However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, as well as show that other methods take an infeasibly long time to converge. In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve.""","""In my opinion, the main strength of this work is the theoretical analysis and some observations that may be of great interest to the NLP community in terms of better analyzing the performance of RL (and ""RL-like"") methods as optimizers. The main weakness, as pointed out by R3, the limited empirical analysis. I would urge the authors to take R3's advice and attempt insofar as possible to broaden the scope of the empirical analysis in the final. I believe that this is important for the paper to be able to make its case convincingly. Nonetheless, I do think that the paper makes a significant contribution that will be of interest to the community, and should be presented at ICLR. Therefore, I would recommend for it to be accepted."""
229,"""Parallel Scheduled Sampling""","['deep learning', 'generative models', 'teacher forcing', 'scheduled sampling']","""Auto-regressive models are widely used in sequence generation problems. The output sequence is typically generated in a predetermined order, one discrete unit(pixel or word or character) at a time. The models are trained by teacher-forcing where ground-truth history is fed to the model as input, which at test time is replaced by the model prediction. Scheduled Sampling (Bengio et al., 2015) aimsto mitigate this discrepancy between train and test time by randomly replacing some discrete units in the history with the models prediction. While teacher-forced training works well with ML accelerators as the computation can be parallelized across time, Scheduled Sampling involves undesirable sequential processing. In this paper, we introduce a simple technique to parallelize Scheduled Sampling across time. Experimentally, we find the proposed technique leads to equivalent or better performance on image generation, summarization, dialog generation, and translation compared to teacher-forced training. n dialog response generation task,Parallel Scheduled Sampling achieves 1.6 BLEU score (11.5%) improvement over teacher-forcing while in image generation it achieves 20% and 13.8% improvement in Frechet Inception Distance (FID) and Inception Score (IS) respectively. Further, we discuss the effects of different hyper-parameters associated with Scheduled Sampling on the model performance.""","""The paper proposes a parallelization approach for speeding up scheduled sampling, and show significant improvement over the original. The approach is simple and a clear improvement over vanilla schedule sampling. However, the reviewers point out that there are more recent methods to compare against or combine with, and that the paper is a bit thin on content and could have addressed this. The proposed approach may well combine well with newer techniques, but I tend to agree that this should be tested."""
230,"""Fully Polynomial-Time Randomized Approximation Schemes for Global Optimization of High-Dimensional Folded Concave Penalized Generalized Linear Models""","['statistical learning', 'FPRAS', 'global optimization', 'folded concave penalty', 'GLM', 'high dimensional learning']","""Global solutions to high-dimensional sparse estimation problems with a folded concave penalty (FCP) have been shown to be statistically desirable but are strongly NP-hard to compute, which implies the non-existence of a pseudo-polynomial time global optimization schemes in the worst case. This paper shows that, with high probability, a global solution to the formulation for a FCP-based high-dimensional generalized linear model coincides with a stationary point characterized by the significant subspace second order necessary conditions (S pseudo-formula ONC). Since the desired S pseudo-formula ONC solution admits a fully polynomial-time approximation schemes (FPTAS), we thus have shown the existence of fully polynomial-time randomized approximation scheme (FPRAS) for a strongly NP-hard problem. We further demonstrate two versions of the FPRAS for generating the desired S pseudo-formula ONC solutions. One follows the paradigm of an interior point trust region algorithm and the other is the well-studied local linear approximation (LLA). Our analysis thus provides new techniques for global optimization of certain NP-Hard problems and new insights on the effectiveness of LLA.""","""Thanks for your detailed feedback to the reviewers, which clarified us a lot in many respects. However, the novelty of this paper is rather marginal and given the high competition at ICLR2020, this paper is unfortunately below the bar. We hope that the reviewers' comments are useful for improving the paper for potential future publication. """
231,"""Stochastic Conditional Generative Networks with Basis Decomposition""",[],"""While generative adversarial networks (GANs) have revolutionized machine learning, a number of open questions remain to fully understand them and exploit their power. One of these questions is how to efficiently achieve proper diversity and sampling of the multi-mode data space. To address this, we introduce BasisGAN, a stochastic conditional multi-mode image generator. By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of basis elements, we learn a plug-and-played basis generator to stochastically generate basis elements, with just a few hundred of parameters, to fully embed stochasticity into convolutional filters. By sampling basis elements instead of filters, we dramatically reduce the cost of modeling the parameter space with no sacrifice on either image diversity or fidelity. To illustrate this proposed plug-and-play framework, we construct variants of BasisGAN based on state-of-the-art conditional image generation networks, and train the networks by simply plugging in a basis generator, without additional auxiliary components, hyperparameters, or training objectives. The experimental success is complemented with theoretical results indicating how the perturbations introduced by the proposed sampling of basis elements can propagate to the appearance of generated images.""","""Main content: BasiGAN, a novel method for introducing stochasticity in conditional GANs Summary of discussion: reviewer1: interesting work and results on GANs. Reviewer had a question on pre-defned basis but i think it was answered by the authors. reviewer3: interesting and novel work on GANS, wel-written paper and improves on SOTA. The main uestion is around bases again like reviewer 1, but it seems the authors have addressed this. reviewer4: Novel interesting work. Main comments are around making Theorem 1 more theoretically correct, which it sounds like the authors addressed. Recommendation: Poster. Well written and novel paper and authors addressed a lot of concerns. """
232,"""Graph Neural Networks for Soft Semi-Supervised Learning on Hypergraphs""","['Graph Neural Networks', 'Soft Semi-supervised Learning', 'Hypergraphs']","""Graph-based semi-supervised learning (SSL) assigns labels to initially unlabelled vertices in a graph. Graph neural networks (GNNs), esp. graph convolutional networks (GCNs), inspired the current-state-of-the art models for graph-based SSL problems. GCNs inherently assume that the labels of interest are numerical or categorical variables. However, in many real-world applications such as co-authorship networks, recommendation networks, etc., vertex labels can be naturally represented by probability distributions or histograms. Moreover, real-world network datasets have complex relationships going beyond pairwise associations. These relationships can be modelled naturally and flexibly by hypergraphs. In this paper, we explore GNNs for graph-based SSL of histograms. Motivated by complex relationships (those going beyond pairwise) in real-world networks, we propose a novel method for directed hypergraphs. Our work builds upon existing works on graph-based SSL of histograms derived from the theory of optimal transportation. A key contribution of this paper is to establish generalisation error bounds for a one-layer GNN within the framework of algorithmic stability. We also demonstrate our proposed methods' effectiveness through detailed experimentation on real-world data. We have made the code available.""","""This paper proposes and evaluates using graph convolutional networks for semi-supervised learning of probability distributions (histograms). The paper was reviewed by three experts, all of whom gave a Weak Reject rating. The reviewers acknowledged the strengths of the paper, but also had several important concerns including quality of writing and significance of the contribution, in addition to several more specific technical questions. The authors submitted a response that addressed these concerns to some extent. However, in post-rebuttal discussions, the reviewers chose not to change their ratings, feeling that quality of writing still needed to be improved and that overall a significant revision and another round of peer review would be needed. In light of these reviews, we are not able to recommend accepting the paper, but hope the authors will find the suggestions of the reviewers helpful in preparing a revision for another venue. """
233,"""Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation""","['adversarial training', 'generative models', 'unpaired video translation', 'video super-resolution', 'temporal coherence', 'self-supervision', 'cycle-consistency']","""We focus on temporal self-supervision for GAN-based video generation tasks. While adversarial training successfully yields generative models for a variety of areas, temporal relationship in the generated data is much less explored. This is crucial for sequential generation tasks, e.g. video super-resolution and unpaired video translation. For the former, state-of-the-art methods often favor simpler norm losses such as L2 over adversarial training. However, their averaging nature easily leads to temporally smooth results with an undesirable lack of spatial detail. For unpaired video translation, existing approaches modify the generator networks to form spatio-temporal cycle consistencies. In contrast, we focus on improving the learning objectives and propose a temporally self-supervised algorithm. For both tasks, we show that temporal adversarial learning is key to achieving temporally coherent solutions without sacrificing spatial detail. We also propose a novel Ping-Pong loss to improve the long-term temporal consistency. It effectively prevents recurrent networks from accumulating artifacts temporally without depressing detailed features. We also propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution. A series of user studies confirms the rankings computed with these metrics.""","""The paper presents an architecture for conditional video generation tasks with temporal self-supervision and temporal adversarial learning. The proposed architecture is reasonable but looks somewhat complicated. In terms of technical novelty, the so-called ""ping-pong"" loss looks interesting and novel, but other parts are more-or-less some combinations of existing techniques. Experimental results show promise of the proposed method against selected baselines for video super-resolution (VSR) and unpaired video-to-video translation tasks (UVT). In terms of weakness, (1) the technical novelty is not very high; (2) the final loss is a combination of many losses with many hyperparameters; (3) experimentally the proposed method is not compared against recent SOTA methods on VSR and UVT. The proposed method should be compared against more recent SOTA baselines for VSR tasks (see examples of references below): EDVR: Video Restoration with Enhanced Deformable Convolutional Networks pseudo-url Progressive Fusion Video Super-Resolution Network via Exploiting Non-Local Spatio-Temporal Correlations ICCV 2019 Recurrent Back-Projection Network for Video Super-Resolution CVPR 2019 The same comment would apply for baselines for UVT tasks: Mocycle-GAN: Unpaired Video-to-Video Translation pseudo-url Preserving Semantic and Temporal Consistency for Unpaired Video-to-Video Translation pseudo-url Particularly for UVT, the evaluated dataset seems limited in terms of scope as well (i.e., evaluations on more popular benchmarks, such as Viper would be needed for further validation). Overall, given that the contribution of this work is an empirical performance with a rather complex architecture/loss, more comprehensive empirical evaluations on SOTA baselines are warranted. """
234,"""Training Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators""","['Adversarial Learning', 'Semi-supervised Learning', 'Image generation', 'Image segmentation', 'Missing Data']","""Generative adversarial networks (GANs) have shown great success in applications such as image generation and inpainting. However, they typically require large datasets, which are often not available, especially in the context of prediction tasks such as image segmentation that require labels. Therefore, methods such as the CycleGAN use more easily available unlabelled data, but do not offer a way to leverage additional labelled data for improved performance. To address this shortcoming, we show how to factorise the joint data distribution into a set of lower-dimensional distributions along with their dependencies. This allows splitting the discriminator in a GAN into multiple ""sub-discriminators"" that can be independently trained from incomplete observations. Their outputs can be combined to estimate the density ratio between the joint real and the generator distribution, which enables training generators as in the original GAN framework. We apply our method to image generation, image segmentation and audio source separation, and obtain improved performance over a standard GAN when additional incomplete training examples are available. For the Cityscapes segmentation task in particular, our method also improves accuracy by an absolute 14.9% over CycleGAN while using only 25 additional paired examples.""","""All three reviewers appreciate the new method (FactorGAN) for training generative networks from incomplete observations. At the same time, the quality of the experimental results can still be improved. On balance, the paper will make a good poster."""
235,"""Rnyi Fair Inference""",[],"""Machine learning algorithms have been increasingly deployed in critical automated decision-making systems that directly affect human lives. When these algorithms are solely trained to minimize the training/test error, they could suffer from systematic discrimination against individuals based on their sensitive attributes, such as gender or race. Recently, there has been a surge in machine learning society to develop algorithms for fair machine learning. In particular, several adversarial learning procedures have been proposed to impose fairness. Unfortunately, these algorithms either can only impose fairness up to linear dependence between the variables, or they lack computational convergence guarantees. In this paper, we use Rnyi correlation as a measure of fairness of machine learning models and develop a general training framework to impose fairness. In particular, we propose a min-max formulation which balances the accuracy and fairness when solved to optimality. For the case of discrete sensitive attributes, we suggest an iterative algorithm with theoretical convergence guarantee for solving the proposed min-max problem. Our algorithm and analysis are then specialized to fair classification and fair clustering problems. To demonstrate the performance of the proposed Rnyi fair inference framework in practice, we compare it with well-known existing methods on several benchmark datasets. Experiments indicate that the proposed method has favorable empirical performance against state-of-the-art approaches.""","""The paper addresses the problem of fair representation learning. The authors propose to use Rnyi correlation as a measure of (in)dependence between the predictor and the sensitive attribute and developed a general training framework to impose fairness with theoretical properties. The empirical evaluations have been performed using standard benchmarks for fairness methods and the SOTA baselines -- all this supports the main claims of this work's contributions. All the reviewers and AC agree that this work has made a valuable contribution and recommend acceptance. Congratulations to the authors! """
236,"""Homogeneous Linear Inequality Constraints for Neural Network Activations""","['deep learning', 'constrained optimization']","""We propose a method to impose homogeneous linear inequality constraints of the form 0$ on neural network activations. The proposed method allows a data-driven training approach to be combined with modeling prior knowledge about the task. One way to achieve this task is by means of a projection step at test time after unconstrained training. However, this is an expensive operation. By directly incorporating the constraints into the architecture, we can significantly speed-up inference at test time; for instance, our experiments show a speed-up of up to two orders of magnitude over a projection method. Our algorithm computes a suitable parameterization of the feasible set at initialization and uses standard variants of stochastic gradient descent to find solutions to the constrained network. Thus, the modeling constraints are always satisfied during training. Crucially, our approach avoids to solve an optimization problem at each training step or to manually trade-off data and constraint fidelity with additional hyperparameters. We consider constrained generative modeling as an important application domain and experimentally demonstrate the proposed method by constraining a variational autoencoder.""","""The authors propose a framework for incorporating homogeneous linear inequality constraints on neural network activations into neural network architectures. The authors show that this enables training neural networks that are guaranteed to satisfy non-trivial constraints on the neurons in a manner that is significantly more scalable than prior work, and demonstrate this experimentally on a generative modelling task. The problem considered in the paper is certainly significant (training neural networks that are guaranteed to satisfy constraints arises in many applications) and the authors make some interesting contributions. However, the reviewers found the following issues that make it difficult to accept the paper in its present form: 1) The setting of homogeneous linear equality constraints is not well-motivated and the significance of being able to impose such constraints is not clearly articulated in the paper. The authors would do well to prepare a future revision documenting use-cases motivated by practical applications and add these to the paper. 2) The experimental evaluation is not sufficiently thorough: the authors evaluate their method on an artificial constraint involving a ""checkerboard pattern"" on MNIST. Even in this case, the training method proposed by the authors seems to suffer from some issues, and more thorough experiments need to be conducted to confirm that the training method can perform well across a variety of datasets and constraints. Given these issues, I recommend rejection. However, I encourage the authors to revise their work on this important topic and prepare a future version including practical examples of the constraints and experiments on a variety of prediction tasks. """
237,"""Graph Warp Module: an Auxiliary Module for Boosting the Power of Graph Neural Networks in Molecular Graph Analysis""","['Graph Neural Networks', 'molecular graph analysis', 'supernode', 'auxiliary module']","""Graph Neural Network (GNN) is a popular architecture for the analysis of chemical molecules, and it has numerous applications in material and medicinal science. Current lines of GNNs developed for molecular analysis, however, do not fit well on the training set, and their performance does not scale well with the complexity of the network. In this paper, we propose an auxiliary module to be attached to a GNN that can boost the representation power of the model without hindering the original GNN architecture. Our auxiliary module can improve the representation power and the generalization ability of a wide variety of GNNs, including those that are used commonly in biochemical applications. ""","""This paper presents an auxiliary module to boost the representation power of GNNs. The new module consists of virtual supernode, attention unit, and warp gate unit. The usefulness of each component is shown in well-organized experiments. This is the very borderline paper with split scores. While all reviewers basically agree that the empirical findings in the paper are interesting and could be valuable to the community, one reviewer raised concern regarding the incremental novelty of the method, which is also understood by other reviewers. The impression was not changed through authors response and reviewer discussion, and there is no strong opinion to champion the paper. Therefore, Id like to recommend rejection this time. """
238,"""Depth creates no more spurious local minima in linear networks""","['local minimum', 'deep linear network']","""We show that for any convex differentiable loss, a deep linear network has no spurious local minima as long as it is true for the two layer case. This reduction greatly simplifies the study on the existence of spurious local minima in deep linear networks. When applied to the quadratic loss, our result immediately implies the powerful result by Kawaguchi (2016). Further, with the recent work by Zhou& Liang (2018), we can remove all the assumptions in (Kawaguchi, 2016). This property holds for more general multi-tower linear networks too. Our proof builds on the work in (Laurent & von Brecht, 2018) and develops a new perturbation argument to show that any spurious local minimum must have full rank, a structural property which can be useful more generally""","""Paper shows that the question of linear deep networks having spurious local minima under benign conditions on the loss function can be reduced to the two layer case. This paper is motivated by and builds upon works that are proven for specific cases. Reviewers found the techniques used to prove the result not very novel in light of existing techniques. Novelty of technique is of particular importance to this area because these results have little practical value in linear networks on their own; the goal is to extend these techniques to the more interesting non-linear case. """
239,"""Iterative Target Augmentation for Effective Conditional Generation""","['data augmentation', 'generative models', 'self-training', 'molecular optimization', 'program synthesis']","""Many challenging prediction problems, from molecular optimization to program synthesis, involve creating complex structured objects as outputs. However, available training data may not be sufficient for a generative model to learn all possible complex transformations. By leveraging the idea that evaluation is easier than generation, we show how a simple, broadly applicable, iterative target augmentation scheme can be surprisingly effective in guiding the training and use of such models. Our scheme views the generative model as a prior distribution, and employs a separately trained filter as the likelihood. In each augmentation step, we filter the model's outputs to obtain additional prediction targets for the next training epoch. Our method is applicable in the supervised as well as semi-supervised settings. We demonstrate that our approach yields significant gains over strong baselines both in molecular optimization and program synthesis. In particular, our augmented model outperforms the previous state-of-the-art in molecular optimization by over 10% in absolute gain. ""","""This paper proposes a training scheme to enhance the optimization process where the outputs are required to meet certain constraints. The authors propose to insert an additional target augmentation phase after the regular training. For each datapoint, the algorithm samples candidate outputs until it find a valid output according the an external filter. The model is further fine-tuned on the augmented dataset. The authors provided detailed answers and responses to the reviews, which the reviewers appreciated. However, some significant concerns remained, and due to a large number of stronger papers, this paper was not accepted at this time."""
240,"""Composition-based Multi-Relational Graph Convolutional Networks""","['Graph Convolutional Networks', 'Multi-relational Graphs', 'Knowledge Graph Embeddings', 'Link Prediction']","""Graph Convolutional Networks (GCNs) have recently been shown to be quite successful in modeling graph-structured data. However, the primary focus has been on handling simple undirected graphs. Multi-relational graphs are a more general and prevalent form of graphs where each edge has a label and direction associated with it. Most of the existing approaches to handle such graphs suffer from over-parameterization and are restricted to learning representations of nodes only. In this paper, we propose CompGCN, a novel Graph Convolutional framework which jointly embeds both nodes and relations in a relational graph. CompGCN leverages a variety of entity-relation composition operations from Knowledge Graph Embedding techniques and scales with the number of relations. It also generalizes several of the existing multi-relational GCN methods. We evaluate our proposed method on multiple tasks such as node classification, link prediction, and graph classification, and achieve demonstrably superior results. We make the source code of CompGCN available to foster reproducible research.""","""This paper proposes and evaluates a formulation of graph convolutional networks for multi-relation graphs. The paper was reviewed by three experts working in this area and received three Weak Accept decisions. The reviewers identified some concerns, including novelty with respect to existing work and specific details of the experimental setup and results that were not clear. The authors have addressed most of these concerns in their response, including adding a table that explicitly explains the contribution with respect to existing work and clarifying the missing details. Given the unanimous Weak Accept decision, the ACs also recommend Accept as a poster."""
241,"""Neural Stored-program Memory""","['Memory Augmented Neural Networks', 'Universal Turing Machine', 'fast-weight']","""Neural networks powered with external memory simulate computer behaviors. These models, which use the memory to store data for a neural controller, can learn algorithms and other complex tasks. In this paper, we introduce a new memory to store weights for the controller, analogous to the stored-program memory in modern computer architectures. The proposed model, dubbed Neural Stored-program Memory, augments current memory-augmented neural networks, creating differentiable machines that can switch programs through time, adapt to variable contexts and thus fully resemble the Universal Turing Machine. A wide range of experiments demonstrate that the resulting machines not only excel in classical algorithmic problems, but also have potential for compositional, continual, few-shot learning and question-answering tasks. ""","""This paper presents the neural stored-program memory, which is a key-value memory that is used to store weights for another neural network, analogous to having programs in computers. They provide an extensive set of experiments in various domains to show the benefit of the proposed method, including synthetic tasks and few-shot learning experiments. This is an interesting paper proposing a new idea. We discuss this submission extensively and based on our discussion I recommend accepting this submission. A few final comments from reviewers for the authors: - Please try to make the paper a bit more self-contained so that it is more useful to a general audience. This can be done by either making more space in the main text (e.g., reducing the size of Figure 1, reducing space between sections, table captions and text, etc.) or adding more details in the Appendix. Importantly, your formatting is a bit off. Please use the correct style file, it will give you more space. All reviewers agree that the paper are missing some important details that would improve the paper. - Please cite the original fast weight paper by Malsburg (1981). - Regarding fast-weights using outer products, this was actually first done in the 1993 paper instead of the 2016 and 2017 papers."""
242,"""Can gradient clipping mitigate label noise?""",[],"""Gradient clipping is a widely-used technique in the training of deep networks, and is generally motivated from an optimisation lens: informally, it controls the dynamics of iterates, thus enhancing the rate of convergence to a local minimum. This intuition has been made precise in a line of recent works, which show that suitable clipping can yield significantly faster convergence than vanilla gradient descent. In this paper, we propose a new lens for studying gradient clipping, namely, robustness: informally, one expects clipping to provide robustness to noise, since one does not overly trust any single sample. Surprisingly, we prove that for the common problem of label noise in classification, standard gradient clipping does not in general provide robustness. On the other hand, we show that a simple variant of gradient clipping is provably robust, and corresponds to suitably modifying the underlying loss function. This yields a simple, noise-robust alternative to the standard cross-entropy loss which performs well empirically.""","""This paper studies the effect of clipping on mitigating label noise. The authors demonstrate that standard gradient clipping does not suffice for achieving robustness to label noise. The authors suggest a noise-robust alternative. In the discussion the reviewers raised some interesting questions and technical detailed but mostly agreed that the paper is well-written with nice contributions. I concur with the reviewers that this is a nicely written paper with good contributions. I recommend acceptance but recommend the authors continue to improve their paper based on the reviewers' suggestions."""
243,"""A Theoretical Analysis of Deep Q-Learning""","['reinforcement learning', 'deep Q network', 'minimax-Q learning', 'zero-sum Markov Game']","""Despite the great empirical success of deep reinforcement learning, its theoretical foundation is less well understood. In this work, we make the first attempt to theoretically understand the deep Q-network (DQN) algorithm (Mnih et al., 2015) from both algorithmic and statistical perspectives. In specific, we focus on a slight simplification of DQN that fully captures its key features. Under mild assumptions, we establish the algorithmic and statistical rates of convergence for the action-value functions of the iterative policy sequence obtained by DQN. In particular, the statistical error characterizes the bias and variance that arise from approximating the action-value function using deep neural network, while the algorithmic error converges to zero at a geometric rate. As a byproduct, our analysis provides justifications for the techniques of experience replay and target network, which are crucial to the empirical success of DQN. Furthermore, as a simple extension of DQN, we propose the Minimax-DQN algorithm for zero-sum Markov game with two players, which is deferred to the appendix due to space limitations.""","""The authors offer theoretical guarantees for a simplified version of the deep Q-learning algorithm. However, the majority of the reviewers agree that the simplifying assumptions are so many that the results do not capture major important aspects of deep Q-Learning (e.g. understanding good exploration strategies, understanding why deep nets are better approximators and not using neural net classes that are so large that can capture all non-parametric functions). For justifying the paper to be called a theoretical analysis of deep Q-Learning some of these aspects need to be addressed, or the motivation/title of the paper needs to be re-defined. """
244,"""Learning to Reason: Distilling Hierarchy via Self-Supervision and Reinforcement Learning""","['Reinforcement learning', 'Self-supervised learning', 'unsupervised learning', 'representation learning']","""We present a hierarchical planning and control framework that enables an agent to perform various tasks and adapt to a new task flexibly. Rather than learning an individual policy for each particular task, the proposed framework, DISH, distills a hierarchical policy from a set of tasks by self-supervision and reinforcement learning. The framework is based on the idea of latent variable models that represent high-dimensional observations using low-dimensional latent variables. The resulting policy consists of two levels of hierarchy: (i) a planning module that reasons a sequence of latent intentions that would lead to optimistic future and (ii) a feedback control policy, shared across the tasks, that executes the inferred intention. Because the reasoning is performed in low-dimensional latent space, the learned policy can immediately be used to solve or adapt to new tasks without additional training. We demonstrate the proposed framework can learn compact representations (3-dimensional latent states for a 90-dimensional humanoid system) while solving a small number of imitation tasks, and the resulting policy is directly applicable to other types of tasks, i.e., navigation in cluttered environments.""","""The authors present a self-supervised framework for learning a hierarchical policy in reinforcement learning tasks that combines a high-level planner over learned latent goals with a shared low-level goal-completing control policy. The reviewers had significant concerns about both problem positioning (w.r.t. existing work) and writing clarity, as well as the fact that all comparative experiments were ablations, rather than comparisons to prior work. While the reviewers agreed that the authors reasonably resolved issues of clarity, there was not agreement that concerns about positioning w.r.t. prior work and experimental comparisons were sufficiently resolved. Thus, I recommend to reject this paper at this time."""
245,"""Poisoning Attacks with Generative Adversarial Nets""","['data poisoning', 'adversarial machine learning', 'generative adversarial nets']","""Machine learning algorithms are vulnerable to poisoning attacks: An adversary can inject malicious points in the training dataset to influence the learning process and degrade the algorithm's performance. Optimal poisoning attacks have already been proposed to evaluate worst-case scenarios, modelling attacks as a bi-level optimization problem. Solving these problems is computationally demanding and has limited applicability for some models such as deep networks. In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers generating adversarial training examples, i.e. samples that look like genuine data points but that degrade the classifier's accuracy when used for training. We propose a Generative Adversarial Net with three components: generator, discriminator, and the target classifier. This approach allows us to model naturally the detectability constrains that can be expected in realistic attacks and to identify the regions of the underlying data distribution that can be more vulnerable to data poisoning. Our experimental evaluation shows the effectiveness of our attack to compromise machine learning classifiers, including deep networks.""","""This paper proposes a GAN-based approach to producing poisons for neural networks. While the approach is interesting and appreciated by the reviewers, it is a legitimate and recurring criticism that the method is only demonstrated on very toy problems (MNIST and Fashion MNIST). During the rebuttal stage, the authors added results on CIFAR, although the results on CIFAR were not convincing enough to change the reviewer scores; the SOTA in GANs is sufficient to generate realistic images of cars and trucks (even at the ImageNet scale), while the demonstrated images are sufficiently far from the natural image distribution on CIFAR-10 that it is not clear whether the method benefits from using a GAN. It should be noted that a range of poisoning methods exist that can effectively target CIFAR, and SOTA methods (e.g., poison polytope attacks and backdoor attacks) can even target datasets like ImageNet and CelebA."""
246,"""Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base""","['question-answering', 'knowledge base completion', 'neuro-symbolic reasoning', 'multihop reasoning']","""We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations.""","""This paper proposes an approach to representing a symbolic knowledge base as a sparse matrix, which enables the use of differentiable neural modules for inference. This approach scales to large knowledge bases and is demonstrated on several tasks. Post-discussion and rebuttal, all three reviewers are in agreement that this is an interesting and useful paper. There was intiially some concern about clarity and polish, but these have been resolved upon rebuttal and discussion. Therefore I recommend acceptance. """
247,"""Towards understanding the true loss surface of deep neural networks using random matrix theory and iterative spectral methods""","['Random Matrix theory', 'deep learning', 'deep learning theory', 'hessian eigenvalues', 'true risk']","""The geometric properties of loss surfaces, such as the local flatness of a solution, are associated with generalization in deep learning. The Hessian is often used to understand these geometric properties. We investigate the differences between the eigenvalues of the neural network Hessian evaluated over the empirical dataset, the Empirical Hessian, and the eigenvalues of the Hessian under the data generating distribution, which we term the True Hessian. Under mild assumptions, we use random matrix theory to show that the True Hessian has eigenvalues of smaller absolute value than the Empirical Hessian. We support these results for different SGD schedules on both a 110-Layer ResNet and VGG-16. To perform these experiments we propose a framework for spectral visualization, based on GPU accelerated stochastic Lanczos quadrature. This approach is an order of magnitude faster than state-of-the-art methods for spectral visualization, and can be generically used to investigate the spectral properties of matrices in deep learning.""","""The reviewers all appreciated the importance of the topic: understanding the local geometry of loss surfaces of large models is viewed as critical to understand generalization and design better optimization methods. However, reviewers also pointed out the strength of the assumptions and the limitations of the empirical study. Despite the claim that these assumptions are weaker than those made in prior work, this did not convince the reviewers that the conclusion could be applied to common loss landscapes. I encourage the authors to address the points made by the reviewers and submit an updated version to a later conference."""
248,"""RNNs Incrementally Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?""","['novel recurrent neural architectures', 'learning representations of outputs or states']","""Recurrent neural networks (RNNs) are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate. While a number of works attempt to mitigate this effect through gated recurrent units, skip-connections, parametric constraints and design choices, we propose a novel incremental RNN (iRNN), where hidden state vectors keep track of incremental changes, and as such approximate state-vector increments of Rosenblatt's (1962) continuous-time RNNs. iRNN exhibits identity gradients and is able to account for long-term dependencies (LTD). We show that our method is computationally efficient overcoming overheads of many existing methods that attempt to improve RNN training, while suffering no performance degradation. We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks. ""","""In this paper, the authors propose the incremental RNN, a novel recurrent neural network architecture that resolves the exploding/vanishing gradient problem. While the reviewers initially had various concerns, the paper has been substantially improved during the discussion period and all questions by the reviewers have been resolved. The main idea of the paper is elegant, the theoretical results interesting, and the empirical evaluation extensive. The reviewers and the AC recommend acceptance of this paper to ICLR-2020."""
249,"""Meta-Learning Deep Energy-Based Memory Models""","['associative memory', 'energy-based memory', 'meta-learning', 'compressive memory']","""We study the problem of learning an associative memory model -- a system which is able to retrieve a remembered pattern based on its distorted or incomplete version. Attractor networks provide a sound model of associative memory: patterns are stored as attractors of the network dynamics and associative retrieval is performed by running the dynamics starting from a query pattern until it converges to an attractor. In such models the dynamics are often implemented as an optimization procedure that minimizes an energy function, such as in the classical Hopfield network. In general it is difficult to derive a writing rule for a given dynamics and energy that is both compressive and fast. Thus, most research in energy-based memory has been limited either to tractable energy models not expressive enough to handle complex high-dimensional objects such as natural images, or to models that do not offer fast writing. We present a novel meta-learning approach to energy-based memory models (EBMM) that allows one to use an arbitrary neural architecture as an energy model and quickly store patterns in its weights. We demonstrate experimentally that our EBMM approach can build compressed memories for synthetic and natural data, and is capable of associative retrieval that outperforms existing memory systems in terms of the reconstruction error and compression rate.""","""Four knowledgable reviewers recommend accept. Good job!"""
250,"""Individualised Dose-Response Estimation using Generative Adversarial Nets""","['individualised dose-response estimation', 'treatment effects', 'causal inference', 'generative adversarial networks']","""The problem of estimating treatment responses from observational data is by now a well-studied one. Less well studied, though, is the problem of treatment response estimation when the treatments are accompanied by a continuous dosage parameter. In this paper, we tackle this lesser studied problem by building on a modification of the generative adversarial networks (GANs) framework that has already demonstrated effectiveness in the former problem. Our model, DRGAN, is flexible, capable of handling multiple treatments each accompanied by a dosage parameter. The key idea is to use a significantly modified GAN model to generate entire dose-response curves for each sample in the training data which will then allow us to use standard supervised methods to learn an inference model capable of estimating these curves for a new sample. Our model consists of 3 blocks: (1) a generator, (2) a discriminator, (3) an inference block. In order to address the challenge presented by the introduction of dosages, we propose novel architectures for both our generator and discriminator. We model the generator as a multi-task deep neural network. In order to address the increased complexity of the treatment space (because of the addition of dosages), we develop a hierarchical discriminator consisting of several networks: (a) a treatment discriminator, (b) a dosage discriminator for each treatment. In the experiments section, we introduce a new semi-synthetic data simulation for use in the dose-response setting and demonstrate improvements over the existing benchmark models.""","""This paper addresses the problem of estimating treatment responses involving a continuous dosage parameter. The basic idea is to learn a GAN model capable of generating synthetic dose-response curves for each training sample, which then facilitates the supervised training of an inference model that estimates these curves for new cases. For this purpose, specialized architectures are also proposed for the GAN, which involves a multi-task generator network and a hierarchical discriminator network. Empirical results demonstrate improvement over existing methods. While there is always a chance that reviewers may underappreciate certain aspects of a submission, the fact that there was a unanimous decision to reject this work indicates that the contribution must be better marketed to the ML community. For example, after the rebuttal one reviewer remained unconvinced regarding explanations for why the proposed method is likely to learn the full potential outcome distribution. Among other things, another reviewer felt that both the proposed DRGAN model, and the GANITE framework upon which it is based, were not necessarily working as advertised in the present context."""
251,"""GENERALIZATION GUARANTEES FOR NEURAL NETS VIA HARNESSING THE LOW-RANKNESS OF JACOBIAN""","['Theory of neural nets', 'low-rank structure of Jacobian', 'optimization and generalization theory']","""Modern neural network architectures often generalize well despite containing many more parameters than the size of the training dataset. This paper explores the generalization capabilities of neural networks trained via gradient descent. We develop a data-dependent optimization and generalization theory which leverages the low-rank structure of the Jacobian matrix associated with the network. Our results help demystify why training and generalization is easier on clean and structured datasets and harder on noisy and unstructured datasets as well as how the network size affects the evolution of the train and test errors during training. Specifically, we use a control knob to split the Jacobian spectum into ``information"" and ``nuisance"" spaces associated with the large and small singular values. We show that over the information space learning is fast and one can quickly train a model with zero training loss that can also generalize well. Over the nuisance space training is slower and early stopping can help with generalization at the expense of some bias. We also show that the overall generalization capability of the network is controlled by how well the labels are aligned with the information space. A key feature of our results is that even constant width neural nets can provably generalize for sufficiently nice datasets. We conduct various numerical experiments on deep networks that corroborate our theoretical findings and demonstrate that: (i) the Jacobian of typical neural networks exhibit low-rank structure with a few large singular values and many small ones leading to a low-dimensional information space, (ii) over the information space learning is fast and most of the labels falls on this space, and (iii) label noise falls on the nuisance space and impedes optimization/generalization.""","""This submission investigates the properties of the Jacobian matrix in deep learning setup. Specifically, it splits the spectrum of the matrix into information (large singulars) and ``nuisance (small singulars) spaces. The paper shows that over the information space learning is fast and achieves zero loss. It also shows that generalization relates to how well labels are aligned with the information space. While the submission certainly has encouraging analysis/results, reviewers find these contributions limited and it is not clear how some of the claims in the paper can be extended to more general settings. For example, while the authors claim that low-rank structure is suggested by theory, the support of this claim is limited to a case study on mixture of Gaussians. In addition, the provided analysis only studies two-layer networks. As elaborated by R4, extending these arguments to more than two layers does not seem straighforward using the tools used in the submission. While all reviewers appreciated author's response, they were not convinced and maintained their original ratings. """
252,"""Invariance vs Robustness of Neural Networks""","['Invariance', 'Adversarial', 'Robustness']","""Neural networks achieve human-level accuracy on many standard datasets used in image classification. The next step is to achieve better generalization to natural (or non-adversarial) perturbations as well as known pixel-wise adversarial perturbations of inputs. Previous work has studied generalization to natural geometric transformations (e.g., rotations) as invariance, and generalization to adversarial perturbations as robustness. In this paper, we examine the interplay between invariance and robustness. We empirically study the following two cases:(a) change in adversarial robustness as we improve only the invariance using equivariant models and training augmentation, (b) change in invariance as we improve only the adversarial robustness using adversarial training. We observe that the rotation invariance of equivariant models (StdCNNs and GCNNs) improves by training augmentation with progressively larger rotations but while doing so, their adversarial robustness does not improve, or worse, it can even drop significantly on datasets such as MNIST. As a plausible explanation for this phenomenon we observe that the average perturbation distance of the test points to the decision boundary decreases as the model learns larger and larger rotations. On the other hand, we take adversarially trained LeNet and ResNet models which have good \ell_\infty adversarial robustness on MNIST and CIFAR-10, and observe that adversarially training them with progressively larger norms keeps their rotation invariance essentially unchanged. In fact, the difference between test accuracy on unrotated test data and on randomly rotated test data upto \theta , for all \theta in [0, 180], remains essentially unchanged after adversarial training . As a plausible explanation for the observed phenomenon we show empirically that the principal components of adversarial perturbations and perturbations given by small rotations are nearly orthogonal""","""This paper examines the interplay between the related ideas of invariance and robustness in deep neural network models. Invariance is the notion that small perturbations to an input image (such as rotations or translations) should not change the classification of that image. Robustness is usually taken to be the idea that small perturbations to input images (e.g. noise, whether white or adversarial) should not significantly affect the model's performance. In the context of this paper, robustness is mostly considered in terms of adversarial perturbations that are imperceptible to humans and created to intentionally disrupt a model's accuracy. The results of this investigation suggests that these ideas are mostly unrelated: equivariant models (with architectures designed to encourage the learning of invariances) that are trained with data augmentation whereby input images are given random rotations do not seem to offer any additional adversarial robustness, and similarly using adversarial training to combat adversarial noise does not seem to confer any additional help for learning rotational invariance. (In some cases, these types of training on the one hand seem to make invariance to the other type of perturbations even worse.) Unfortunately, the reviewers do not believe the technical results are of sufficient interest to warrant publication at this time. """
253,"""Implicit Rugosity Regularization via Data Augmentation""","['deep networks', 'implicit regularization', 'Hessian', 'rugosity', 'curviness', 'complexity']","""Deep (neural) networks have been applied productively in a wide range of supervised and unsupervised learning tasks. Unlike classical machine learning algorithms, deep networks typically operate in the overparameterized regime, where the number of parameters is larger than the number of training data points. Consequently, understanding the generalization properties and the role of (explicit or implicit) regularization in these networks is of great importance. In this work, we explore how the oft-used heuristic of data augmentation imposes an implicit regularization penalty of a novel measure of the rugosity or roughness based on the tangent Hessian of the function fit to the training data.""","""This paper aims to study the effect of data augmentation of generalization performance. The authors put forth a measure of rugosity or ""roughness"" based on the tangent Hessian of the function reminiscent of a classic result by Donoho et. al. The authors show that this measure changes in tandem with how much data augmentation helps. The reviewers and I concur that the rugosity measure is interesting. However, as the reviewer mention the main draw back of this paper is that this measure of rugosity when made explicit does not improve generalization. This is the main draw back of the paper. I agree with the authors that this measure is interesting in itself. However, I think in its current form the paper is not ready for prime time and recommend rejection. That said, I believe this paper has a lot of potential and recommend the authors to rewrite and carry out more careful experiments for a future submission."""
254,"""Localizing and Amortizing: Efficient Inference for Gaussian Processes""","['Gaussian Processes', 'Variational Inference', 'Amortized Inference', 'Nearest Neighbors']","""The inference of Gaussian Processes concerns the distribution of the underlying function given observed data points. GP inference based on local ranges of data points is able to capture fine-scale correlations and allow fine-grained decomposition of the computation. Following this direction, we propose a new inference model that considers the correlations and observations of the K nearest neighbors for the inference at a data point. Compared with previous works, we also eliminate the data ordering prerequisite to simplify the inference process. Additionally, the inference task is decomposed to small subtasks with several technique innovations, making our model well suits the stochastic optimization. Since the decomposed small subtasks have the same structure, we further speed up the inference procedure with amortized inference. Our model runs efficiently and achieves good performances on several benchmark tasks.""","""This paper presents a method for speeding up Gaussian process inference by leveraging locality information through k-nearest neighbours. The key idea is well-motivated intuitively, however the way in which it is implemented seems to introduce new complications. One such issue is KNN overhead in high dimensions, but R1 outlines other potential issues too. Moreover, the method's merit is not demonstrated in a convincing way through the experiments. The authors have provided a rebuttal for those issues, but it does not seem to solve the concerns entirely. """
255,"""A critical analysis of self-supervision, or what we can learn from a single image""","['self-supervision', 'feature representation learning', 'CNN']","""We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can learn the first few layers of a convolutional network from a single image as well as using millions of images and manual labels, provided that strong data augmentation is used. However, for deeper layers the gap with manual supervision cannot be closed even if millions of unlabelled images are used for training. We conclude that: (1) the weights of the early layers of deep networks contain limited information about the statistics of natural images, that (2) such low-level statistics can be learned through self-supervision just as well as through strong supervision, and that (3) the low-level statistics can be captured via synthetic transformations instead of using a large image dataset.""","""This paper studies the effectiveness of self-supervised approaches by characterising how much information they can extract from a given dataset of images on a per-layer basis. Based on an empirical evaluation of RotNet, BiGAN, and DeepCluster, the authors argue that the early layers of CNNs can be effectively learned from a single image coupled with strong data augmentation. Secondly, the authors also provide some empirical evidence that supervision might still necessary to learn the deeper layers (even in the presence of millions of images for self-supervision). Overall, the reviews agree that the paper is well written and timely given the growing popularity of self-supervised methods. Given that most of the issues raised by the reviewers were adequately addressed in the rebuttal, I will recommend acceptance. We ask the authors to include additional experiments requested by the reviewers (they are valuable even if the conclusions are not perfectly aligned with the main message). """
256,"""Improved memory in recurrent neural networks with sequential non-normal dynamics""","['recurrent neural networks', 'memory', 'non-normal dynamics']","""Training recurrent neural networks (RNNs) is a hard problem due to degeneracies in the optimization landscape, a problem also known as vanishing/exploding gradients. Short of designing new RNN architectures, previous methods for dealing with this problem usually boil down to orthogonalization of the recurrent dynamics, either at initialization or during the entire training period. The basic motivation behind these methods is that orthogonal transformations are isometries of the Euclidean space, hence they preserve (Euclidean) norms and effectively deal with vanishing/exploding gradients. However, this ignores the crucial effects of non-linearity and noise. In the presence of a non-linearity, orthogonal transformations no longer preserve norms, suggesting that alternative transformations might be better suited to non-linear networks. Moreover, in the presence of noise, norm preservation itself ceases to be the ideal objective. A more sensible objective is maximizing the signal-to-noise ratio (SNR) of the propagated signal instead. Previous work has shown that in the linear case, recurrent networks that maximize the SNR display strongly non-normal, sequential dynamics and orthogonal networks are highly suboptimal by this measure. Motivated by this finding, here we investigate the potential of non-normal RNNs, i.e. RNNs with a non-normal recurrent connectivity matrix, in sequential processing tasks. Our experimental results show that non-normal RNNs outperform their orthogonal counterparts in a diverse range of benchmarks. We also find evidence for increased non-normality and hidden chain-like feedforward motifs in trained RNNs initialized with orthogonal recurrent connectivity matrices. ""","""This paper proposes to explore nonnormal matrix initialization in RNNs. Two reviewers recommended acceptance and one recommended rejection. The reviewers recommending acceptance highlighted the utility of the approach, its potential to inspire future work, and the clarity and quality of writing and accompanying experiments. One reviewer recommending weak acceptance expressed appreciation of the quality of the rebuttal and that their concerns were largely addressed. The reviewer recommending rejection was primarily concerned with the novelty of the method. Their review suggested the inclusion of an additional citation, which was included in a revised version for the rebuttal but not with a direct comparison of results. On the balance, the paper has a relatively high degree of support from the reviewers, and presents an interesting and potentially useful initialization in a clear and well-motivated way."""
257,"""Boosting Network: Learn by Growing Filters and Layers via SplitLBI""",[],"""Network structures are important to learning good representations of many tasks in computer vision and machine learning communities. These structures are either manually designed, or searched by Neural Architecture Search (NAS) in previous works, which however requires either expert-level efforts, or prohibitive computational cost. In practice, it is desirable to efficiently and simultaneously learn both the structures and parameters of a network from arbitrary classes with budgeted computational cost. We identify it as a new learning paradigm -- Boosting Network, where one starts from simple models, delving into complex trained models progressively. In this paper, by virtue of an iterative sparse regularization path -- Split Linearized Bregman Iteration (SplitLBI), we propose a simple yet effective boosting network method that can simultaneously grow and train a network by progressively adding both convolutional filters and layers. Extensive experiments with VGG and ResNets validate the effectiveness of our proposed algorithms.""","""This paper considers how to learn the structure of deep network by beginning with a simple network and then progressively adding layers and filters as needed. The paper received three reviews by expert working in this area. R1 recommends Weak Reject due to concerns about novelty, degree of contribution, clarity of technical exposition, and experiments. R2 recommends Weak Accept and has some specific suggestions and questions. R3 recommends Weak Reject, also citing concerns with experiments and writing. The authors submitted a response that addressed many of these comments, but R1 and R3 continue to have concerns about contribution and the experiments, while R2 maintains their Weak Accept rating. Given the split decision, the AC also read the paper. While we believe the paper has significant merit, we agree with R1 and R3 on the need for additional experimentation, and believe another round of peer review would help clarify the writing and contribution. We hope the reviewer comments will hep authors prepare a revision for a future venue."""
258,"""Improving Federated Learning Personalization via Model Agnostic Meta Learning""","['Federated Learning', 'Model Agnostic Meta Learning', 'Personalization']","""Federated Learning (FL) refers to learning a high quality global model based on decentralized data storage, without ever copying the raw data. A natural scenario arises with data created on mobile phones by the activity of their users. Given the typical data heterogeneity in such situations, it is natural to ask how can the global model be personalized for every such device, individually. In this work, we point out that the setting of Model Agnostic Meta Learning (MAML), where one optimizes for a fast, gradient-based, few-shot adaptation to a heterogeneous distribution of tasks, has a number of similarities with the objective of personalization for FL. We present FL as a natural source of practical applications for MAML algorithms, and make the following observations. 1) The popular FL algorithm, Federated Averaging, can be interpreted as a meta learning algorithm. 2) Careful fine-tuning can yield a global model with higher accuracy, which is at the same time easier to personalize. However, solely optimizing for the global model accuracy yields a weaker personalization result. 3) A model trained using a standard datacenter optimization method is much harder to personalize, compared to one trained using Federated Averaging, supporting the first claim. These results raise new questions for FL, MAML, and broader ML research.""","""The reviewers have reached consensus that while the paper is interesting, it could use more time. We urge the authors to continue their investigations."""
259,"""Contrastive Representation Distillation""","['Knowledge Distillation', 'Representation Learning', 'Contrastive Learning', 'Mutual Information']",""" Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. When combined with knowledge distillation, our method sets a state of the art in many transfer tasks, sometimes even outperforming the teacher network.""","""This paper presents a new distillation method with theoretical and empirical supports. Given reviewers' comments and AC's reading, the novelty/significance and application-scope shown in the paper can be arguably limited. However, the authors extensively verified and compared the proposed methods and existing ones by showing significant improvements under comprehensive experiments. As the distillation method can enjoy a broader usage, I think the propose method in this paper can be influential in the future works. Hence, I think this is a borderlines paper toward acceptance."""
260,"""Weakly-Supervised Trajectory Segmentation for Learning Reusable Skills""","['skills', 'demonstration', 'agent', 'sub-task', 'primitives', 'robot learning', 'manipulation']","""Learning useful and reusable skill, or sub-task primitives, is a long-standing problem in sensorimotor control. This is challenging because it's hard to define what constitutes a useful skill. Instead of direct manual supervision which is tedious and prone to bias, in this work, our goal is to extract reusable skills from a collection of human demonstrations collected directly for several end-tasks. We propose a weakly-supervised approach for trajectory segmentation following the classic work on multiple instance learning. Our approach is end-to-end trainable, works directly from high-dimensional input (e.g., images) and only requires the knowledge of what skill primitives are present at training, without any need of segmentation or ordering of primitives. We evaluate our approach via rigorous experimentation across four environments ranging from simulation to real world robots, procedurally generated to human collected demonstrations and discrete to continuous action space. Finally, we leverage the generated skill segmentation to demonstrate preliminary evidence of zero-shot transfer to new combinations of skills. Result videos at pseudo-url""","""The authors present a multiple instance learning-based approach that uses weak supervison (of which skills appear in any given trajectory) to automatically segment a set of skills from demonstrations. The reviewers had significant concerns about the significance and performance of the method, as well as the metrics used for analysis. Most notably, neither the original paper nor the rebuttal provided a sufficient justification or fix for the lack of analysis beyond accuracy scores (as opposed to confusion matrices, precision/recall, etc), which leaves the contribution and claims of the paper unclear. Thus, I recommend rejection at this time."""
261,"""Language-independent Cross-lingual Contextual Representations""","['contextual representation', 'cross-lingual', 'transfer learning']","""Contextual representation models like BERT have achieved state-of-the-art performance on a diverse range of NLP tasks. We propose a cross-lingual contextual representation model that generates language-independent contextual representations. This helps to enable zero-shot cross-lingual transfer of a wide range of NLP models, on top of contextual representation models like BERT. We provide a formulation of language-independent cross-lingual contextual representation based on mono-lingual representations. Our formulation takes three steps to align sequences of vectors: transform, extract, and reorder. We present a detailed discussion about the process of learning cross-lingual contextual representations, also about the performance in cross-lingual transfer learning and its implications.""","""The paper proposes a method to learn cross-lingual representations by aligning monolingual models with the help of a parallel corpus using a three-step process: transform, extract, and reorder. Experiments on XNLI show that the proposed method is able to perform zero-shot cross-lingual transfer, although its overall performance is still below state-of-the-art jointly trained method XLM. All three reviewers suggested that the proposed method needs to be evaluated more thoroughly (more datasets and languages). R2 and R4 raise some concerns around the complexity of the proposed method (possibly could be simplified further). R3 suggests a more thorough investigation on why the model saturates at 250,000 parallel sentences, among others. The authors acknowledged reviewers' concerns in their response and will incorporate them in future work. I recommend rejecting this paper for ICLR."""
262,"""Improved Training Techniques for Online Neural Machine Translation""","['Deep learning', 'natural language processing', 'Machine translation']","""Neural sequence-to-sequence models are at the basis of state-of-the-art solutions for sequential prediction problems such as machine translation and speech recognition. The models typically assume that the entire input is available when starting target generation. In some applications, however, it is desirable to start the decoding process before the entire input is available, e.g. to reduce the latency in automatic speech recognition. We consider state-of-the-art wait-k decoders, that first read k tokens from the source and then alternate between reading tokens from the input and writing to the output. We investigate the sensitivity of such models to the value of k that is used during training and when deploying the model, and the effect of updating the hidden states in transformer models as new source tokens are read. We experiment with German-English translation on the IWSLT14 dataset and the larger WMT15 dataset. Our results significantly improve over earlier state-of-the-art results for German-English translation on the WMT15 dataset across different latency levels.""","""The paper proposes a method of training latency-limited (wait-k) decoders for online machine translation. The authors investigate the impact of the value of k, and of recalculating the transformer's decoder hidden states when a new source token arrives. They significantly improve over state-of-the-art results for German-English translation on the WMT15 dataset, however there is limited novelty wrt previous approaches. The authors responded in depth to reviews and updated the paper with improvements, for which there was no reviewer response. The paper presents interesting results but IMO the approach is not novel enough to justify acceptance at ICLR. """
263,"""Near-Zero-Cost Differentially Private Deep Learning with Teacher Ensembles""",[],"""Ensuring the privacy of sensitive data used to train modern machine learning models is of paramount importance in many areas of practice. One approach to study these concerns is through the lens of differential privacy. In this framework, privacy guarantees are generally obtained by perturbing models in such a way that specifics of data used to train the model are made ambiguous. A particular instance of this approach is through a ``teacher-student'' model, wherein the teacher, who owns the sensitive data, provides the student with useful, but noisy, information, hopefully allowing the student model to perform well on a given task without access to particular features of the sensitive data. Because stronger privacy guarantees generally involve more significant noising on the part of the teacher, deploying existing frameworks fundamentally involves a trade-off between utility and privacy guarantee. One of the most important techniques used in previous work involves an ensemble of teacher models, which return information to a student based on a noisy voting procedure. In this work, we propose a novel voting mechanism, which we call an Immutable Noisy ArgMax, that, under certain conditions, can bear very large random noising from the teacher without affecting the useful information transferred to the student. Our mechanisms improve over the state-of-the-art methods on all measures, and scale to larger tasks with both higher utility and stronger privacy ( \approx 0$).""","""This paper presents a differentially private mechanism, called Noisy ArgMax, for privately aggregating predictions from several teacher models. There is a consensus in the discussion that the technique of adding a large constant to the largest vote breaks differential privacy. Given this technical flaw, the paper cannot be accepted."""
264,"""On Variational Learning of Controllable Representations for Text without Supervision""","['sequence variational autoencoders', 'unsupervised learning', 'controllable text generation', 'text style transfer']","""The variational autoencoder (VAE) has found success in modelling the manifold of natural images on certain datasets, allowing meaningful images to be generated while interpolating or extrapolating in the latent code space, but it is unclear whether similar capabilities are feasible for text considering its discrete nature. In this work, we investigate the reason why unsupervised learning of controllable representations fails for text. We find that traditional sequence VAEs can learn disentangled representations through their latent codes to some extent, but they often fail to properly decode when the latent factor is being manipulated, because the manipulated codes often land in holes or vacant regions in the aggregated posterior latent space, which the decoding network is not trained to process. Both as a validation of the explanation and as a fix to the problem, we propose to constrain the posterior mean to a learned probability simplex, and performs manipulation within this simplex. Our proposed method mitigates the latent vacancy problem and achieves the first success in unsupervised learning of controllable representations for text. Empirically, our method significantly outperforms unsupervised baselines and is competitive with strong supervised approaches on text style transfer. Furthermore, when switching the latent factor (e.g., topic) during a long sentence generation, our proposed framework can often complete the sentence in a seemingly natural way -- a capability that has never been attempted by previous methods. ""","""This paper analyzes the behavior of VAE for learning controllable text representations and uses this insight to introduce a method to constrain the posterior space by introducing a regularization term and a structured reconstruction term to the standard VAE loss. Experiments show the proposed method improves over unsupervised baselines, although it still underperforms supervised approaches in text style transfer. The paper had some issues with presentation, as pointed out by R1 and R3. In addition, it missed citations to many prior work. Some of these issues had been addressed after the rebuttal, but I still think it needs to be more self contained (e.g., include details of evaluation protocols in the appendix, instead of citing another paper). In an internal discussion, R1 still has some concerns regarding whether the negative log likelihood is less affected by manipulations in the constrained space compared to beta-VAE. In particular, the concern is about whether the magnitude of the manipulation is comparable across models, which is also shared by R3. R1 also think some of the generated samples are not very convincing. This is a borderline paper with some interesting insights that tackles an important problem. However, due to its shortcoming in the current state, I recommend to reject the paper."""
265,"""HOPPITY: LEARNING GRAPH TRANSFORMATIONS TO DETECT AND FIX BUGS IN PROGRAMS""","['Bug Detection', 'Program Repair', 'Graph Neural Network', 'Graph Transformation']","""We present a learning-based approach to detect and fix a broad range of bugs in Javascript programs. We frame the problem in terms of learning a sequence of graph transformations: given a buggy program modeled by a graph structure, our model makes a sequence of predictions including the position of bug nodes and corresponding graph edits to produce a fix. Unlike previous works that use deep neural networks, our approach targets bugs that are more complex and semantic in nature (i.e.~bugs that require adding or deleting statements to fix). We have realized our approach in a tool called HOPPITY. By training on 290,715 Javascript code change commits on Github, HOPPITY correctly detects and fixes bugs in 9,490 out of 36,361 programs in an end-to-end fashion. Given the bug location and type of the fix, HOPPITY also outperforms the baseline approach by a wide margin.""","""This paper presents a learning-based approach to detect and fix bugs in JavaScript programs. By modeling the bug detection and fix as a sequence of graph transformations, the proposed method achieved promising experimental results on a large JavaScript dataset crawled from GitHub. All the reviews agree to accept the paper for its reasonable and interesting approach to solve the bug problems. The main concerns are about the experimental design, which has been addressed by the authors in the revision. Based on the novelty and solid experiments of the proposed method, I agreed to accept the paper as other revises. """
266,"""Deep Coordination Graphs""","['multi-agent reinforcement learning', 'coordination graph', 'deep Q-learning', 'value factorization', 'relative overgeneralization']","""This paper introduces the deep coordination graph (DCG) for collaborative multi-agent reinforcement learning. DCG strikes a flexible trade-off between representational capacity and generalization by factorizing the joint value function of all agents according to a coordination graph into payoffs between pairs of agents. The value can be maximized by local message passing along the graph, which allows training of the value function end-to-end with Q-learning. Payoff functions are approximated with deep neural networks and parameter sharing improves generalization over the state-action space. We show that DCG can solve challenging predator-prey tasks that are vulnerable to the relative overgeneralization pathology and in which all other known value factorization approaches fail.""","""This work extends previous work (Castellini et al) with parameter sharing and low-rank approximations, for pairwise communication between agents. However the work as presented here is still considered too incremental, in particular when compared to Castellini et al. The advances such as parameter sharing and low-rank approximation are good but not enough of a contribution. Authors' efforts to address this concern did not change reviewers' judgment. Therefore, we recommend rejection."""
267,"""Active Learning Graph Neural Networks via Node Feature Propagation""","['Graph Learning', 'Active Learning']","""Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning from graphically structured data. However, a large quantity of labeled graphs is difficult to obtain, which significantly limit the true success of GNNs. Although active learning has been widely studied for addressing label-sparse issues with other data types like text, images, etc., how to make it effective over graphs is an open question for research. In this paper, we present the investigation on active learning with GNNs for node classification tasks. Specifically, we propose a new method, which uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning. With a theoretical bound analysis we justify the design choice of our approach. In our experiments on four benchmark dataset, the proposed method outperforms other representative baseline methods consistently and significantly.""","""The authors propose a method of selecting nodes to label in a graph neural network setting to reduce the loss as efficiently as possible. Building atop Sener & Savarese 2017 the authors propose an alternative distance metric and clustering algorithm. In comparison to the just mentioned work, they show that their upper bound is smaller than the previous art's upper bound. While one cannot conclude from this that their algorithm is better, at least empirically the method appears to have a advantage over state of the art. However, reviewers were concerned about the assumptions necessary to prove the theorem, despite the modifications made by the authors after the initial round. The work proposes a simple estimator and shows promising results but reviewers felt improvements like reducing the number of assumptions and potentially a lower bound may greatly strengthen the paper."""
268,"""Towards Interpretable Evaluations: A Case Study of Named Entity Recognition""","['interpretable evaluation', 'dataset biases', 'model biases', 'NER']",""" With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits. Simply looking at differences between holistic metrics such as accuracy, BLEU, or F1 do not tell us \emph{why} or \emph{how} a particular method is better and how dataset biases influence the choices of model design. In this paper, we present a general methodology for {\emph{interpretable}} evaluation of NLP systems and choose the task of named entity recognition (NER) as a case study, which is a core task of identifying people, places, or organizations in text. The proposed evaluation method enables us to interpret the \textit{model biases}, \textit{dataset biases}, and how the \emph{differences in the datasets} affect the design of the models, identifying the strengths and weaknesses of current approaches. By making our {analysis} tool available, we make it easy for future researchers to run similar analyses and drive the progress in this area.""","""The paper diligently setup and conducted multiple experiments to validate their approach - bucketizating attributions of data and analyze them accordingly to discover deeper insights eg biases. However, reviewers pointed out that such bucketing is tailored to tasks where attributions are easily observed, such as the one of the focus in this paper -NER. While manuscript proposes this approach as general, reviewers failed to seem this point. Another reviewer recommended this manuscript to become a journal item rather than conference, due to the length of the page in appendix (17). There were some confusions around writings as well, pointed out by some reviewers. We highly recommend authors to carefully reflect on reviewers both pros and cons of the paper to improve the paper for your future submission. """
269,"""Counterfactuals uncover the modular structure of deep generative models""","['generative models', 'causality', 'counterfactuals', 'representation learning', 'disentanglement', 'generalization', 'unsupervised learning']","""Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data. However, manipulating such representation to perform meaningful and controllable transformations in the data space remains challenging without some form of supervision. While previous work has focused on exploiting statistical independence to \textit{disentangle} latent factors, we argue that such requirement can be advantageously relaxed and propose instead a non-statistical framework that relies on identifying a modular organization of the network, based on counterfactual manipulations. Our experiments support that modularity between groups of channels is achieved to a certain degree on a variety of generative models. This allowed the design of targeted interventions on complex image datasets, opening the way to applications such as computationally efficient style transfer and the automated assessment of robustness to contextual changes in pattern recognition systems.""","""This paper provides a fresh application of tools from causality theory to investigate modularity and disentanglement in learned deep generative models. It also goes one step further towards making these models more transparent by studying their internal components. While there is still margin for improving the experiments, I believe this paper is a timely contribution to the ICLR/ML community. This paper has high-variance in the reviewer scores. But I believe the authors did a good job with the revision and rebuttal. I recommend acceptance."""
270,"""Overlearning Reveals Sensitive Attributes""","['privacy', 'censoring representation', 'transfer learning']","""``""Overlearning'' means that a model trained for a seemingly simple objective implicitly learns to recognize attributes and concepts that are (1) not part of the learning objective, and (2) sensitive from a privacy or bias perspective. For example, a binary gender classifier of facial images also learns to recognize races, even races that are not represented in the training data, and identities. We demonstrate overlearning in several vision and NLP models and analyze its harmful consequences. First, inference-time representations of an overlearned model reveal sensitive attributes of the input, breaking privacy protections such as model partitioning. Second, an overlearned model can be ""`re-purposed'' for a different, privacy-violating task even in the absence of the original training data. We show that overlearning is intrinsic for some tasks and cannot be prevented by censoring unwanted attributes. Finally, we investigate where, when, and why overlearning happens during model training.""","""This paper introduces the problem of overlearning, which can be thought of as unintended transfer learning from a (victim) source model to a target task that the source models creator had not intended its model to be used for. The paper raises good points about privacy legislation limitations due to the fact that overlearning makes it impossible to foresee future uses of a given dataset. Please incorporate the revisions suggested in the reviews to add clarity to the overlearning versus censoring confusion addressed by the reviewers."""
271,"""Unsupervised Learning of Efficient and Robust Speech Representations""",[],"""We present an unsupervised method for learning speech representations based on a bidirectional contrastive predictive coding that implicitly discovers phonetic structure from large-scale corpora of unlabelled raw audio signals. The representations, which we learn from up to 8000 hours of publicly accessible speech data, are evaluated by looking at their impact on the behaviour of supervised speech recognition systems. First, across a variety of datasets, we find that the features learned from the largest and most diverse pretraining dataset result in significant improvements over standard audio features as well as over features learned from smaller amounts of pretraining data. Second, they significantly improve sample efficiency in low-data scenarios. Finally, the features confer significant robustness advantages to the resulting recognition systems: we see significant improvements in out-of-domain transfer relative to baseline feature sets, and the features likewise provide improvements in four different low-resource African language datasets.""","""The paper focuses on learning speech representations with contrastive predictive coding (CPC). As noted by reviewers, (i) novelty is too low (mostly making the model bidirectional) for ICLR (ii) comparison with existing work is missing."""
272,"""BOOSTING ENCODER-DECODER CNN FOR INVERSE PROBLEMS""","['Prediction error', 'Boosting', 'Encoder-decoder convolutional neural network', 'Inverse problem']","""Encoder-decoder convolutional neural networks (CNN) have been extensively used for various inverse problems. However, their prediction error for unseen test data is difficult to estimate a priori, since the neural networks are trained using only selected data and their architectures are largely considered blackboxes. This poses a fundamental challenge in improving the performance of neural networks. Recently, it was shown that Steins unbiased risk estimator (SURE) can be used as an unbiased estimator of the prediction error for denoising problems. However, the computation of the divergence term in SURE is difficult to implement in a neural network framework, and the condition to avoid trivial identity mapping is not well defined. In this paper, inspired by the finding that an encoder-decoder CNN can be expressed as a piecewise linear representation, we provide a close form expression of the unbiased estimator for the prediction error. The close form representation leads to a novel boosting scheme to prevent a neural network from converging to an identity mapping so that it can enhance the performance. Experimental results show that the proposed algorithm provides consistent improvement in various inverse problems.""","""This paper introduces a closed-form expression for the Steins unbiased estimator for the prediction error, and a boosting approach based on this, with empirical evaluation. While this paper is interesting, all reviewers seem to agree that more work is required before this paper can be published at ICLR. """
273,"""Bounds on Over-Parameterization for Guaranteed Existence of Descent Paths in Shallow ReLU Networks""","['Spurious local minima', 'Loss landscape', 'Over-parameterization', 'Theory of deep learning', 'Optimization', 'Descent path']","""We study the landscape of squared loss in neural networks with one-hidden layer and ReLU activation functions. Let pseudo-formula and pseudo-formula be the widths of hidden and input layers, respectively. We show that there exist poor local minima with positive curvature for some training sets of size m+2d-2 By positive curvature of a local minimum, we mean that within a small neighborhood the loss function is strictly increasing in all directions. Consequently, for such training sets, there are initialization of weights from which there is no descent path to global optima. It is known that for m$, there always exist descent paths to global optima from all initial weights. In this perspective, our results provide a somewhat sharp characterization of the over-parameterization required for ""existence of descent paths"" in the loss landscape. ""","""This article investigates the optimization landscape of shallow ReLU networks, showing that for sufficiently narrow networks there are data sets for which there is no descent paths to the global minimiser. The topic and the nature of the results is very interesting. The reviewers found that this article makes important contributions in a relevant line of investigation and had generally positive ratings. The authors' responses addressed questions from the initial reviews, and the discussion helped identifying questions for future study departing from the present contribution. """
274,"""Meta-Learning with Warped Gradient Descent""","['meta-learning', 'transfer learning']","""Learning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning. Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule. Both of these approaches pose challenges. On one hand, directly producing an update forgoes a useful inductive bias and can easily lead to non-converging behaviour. On the other hand, approaches that try to control a gradient-based update rule typically resort to computing gradients through the learning process to obtain their meta-gradients, leading to methods that can not scale beyond few-shot task adaptation. In this work, we propose Warped Gradient Descent (WarpGrad), a method that intersects these approaches to mitigate their limitations. WarpGrad meta-learns an efficiently parameterised preconditioning matrix that facilitates gradient descent across the task distribution. Preconditioning arises by interleaving non-linear layers, referred to as warp-layers, between the layers of a task-learner. Warp-layers are meta-learned without backpropagating through the task training process in a manner similar to methods that learn to directly produce updates. WarpGrad is computationally efficient, easy to implement, and can scale to arbitrarily large meta-learning problems. We provide a geometrical interpretation of the approach and evaluate its effectiveness in a variety of settings, including few-shot, standard supervised, continual and reinforcement learning.""","""A strong paper reporting improved approaches to meta-learning."""
275,"""Multi-scale Attributed Node Embedding""","['network embedding', 'graph embedding', 'node embedding', 'network science', 'graph representation learning']","""We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram. Observations from neighborhoods of different sizes are either pooled (AE) or encoded distinctly in a multi-scale approach (MUSAE). Capturing attribute-neighborhood relationships over multiple scales is useful for a diverse range of applications, including latent feature identification across disconnected networks with similar attributes. We prove theoretically that matrices of node-feature pointwise mutual information are implicitly factorized by the embeddings. Experiments show that our algorithms are robust, computationally efficient and outperform comparable models on social, web and citation network datasets.""","""This paper constitutes interesting progress on an important topic; the reviewers identify certain improvements and directions for future work, and I urge the authors to continue to develop refinements and extensions."""
276,"""Neural Non-additive Utility Aggregation""",[],"""Neural architectures for set regression problems aim at learning representations such that good predictions can be made based on the learned representations. This strategy, however, ignores the fact that meaningful intermediate results might be helpful to perform well. We study two new architectures that explicitly model latent intermediate utilities and use non-additive utility aggregation to estimate the set utility based on the latent utilities. We evaluate the new architectures with visual and textual datasets, which have non-additive set utilities due to redundancy and synergy effects. We find that the new architectures perform substantially better in this setup.""","""This paper presents two new architectures that model latent intermediate utilities and use non-additive utility aggregation to estimate the set utility based on the computed latent utilities. These two extensions are easy to understand and seem like a simple extension to the existing RNN model architectures, so that they can be implemented easily. However, the connection to Choquet integral is not clear and no theory has been provided to make that connection. Hence, it is hard for the reader to understand why the integral is useful here. The reviewers have also raised objection about the evaluation which does not seem to be fair to existing methods. These comments can be incorporated to make the paper more accessible and the results more appreciable. """
277,"""Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients""",[],"""Adversarial examples have been well known as a serious threat to deep neural networks (DNNs). To ensure successful and safe operations of DNNs on realworld tasks, it is urgent to equip DNNs with effective defense strategies. In this work, we study the detection of adversarial examples, based on the assumption that the output and internal responses of one DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family to cover many popular distributions (e.g., Laplacian, Gaussian, or uniform). It is more likely to approximate the intrinsic distributions of internal responses than any specific distribution. Besides, since the shape factor is more robust to different databases rather than the other two parameters, we propose to construct discriminative features via the shape factor for adversarial detection, employing the magnitude of Benford-Fourier coefficients (MBF), which can be easily estimated using responses. Finally, a support vector machine is trained as the adversarial detector through leveraging the MBF features. Through the Kolmogorov-Smirnov (KS) test, we empirically verify that: 1) the posterior vectors of both adversarial and benign examples follow GGD; 2) the extracted MBF features of adversarial and benign examples follow different distributions. Extensive experiments in terms of image classification demonstrate that the proposed detector is much more effective and robust on detecting adversarial examples of different crafting methods and different sources, in contrast to state-of-the-art adversarial detection methods.""","""This paper presents a new metric for adversarial attack's detection. The reviewers find the idea interesting, but the some part has not been clearly explained, and there are questions on the reproducibility issue of the experiments. """
278,"""V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control""","['reinforcement learning', 'policy iteration', 'multi-task learning', 'continuous control']","""Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported. V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.""","""This paper proposes an extension of MPO for on-policy reinforcement learning. The proposed method achieved promising results in a relatively hyper-parameter insensitive manner. One concern of the reviewers is the lack of comparison with previous works, such as original MPO, which has been partially addressed by the authors in rebuttal. In addition, Blind Review #3 has some concerns with the fairness of the experimental comparison, though other reviews accept the comparison using standardized benchmark. Overall, the paper proposes a promising extension of MPO; thus, I recommend it for acceptance. """
279,"""Empirical Studies on the Properties of Linear Regions in Deep Neural Networks""","['deep learning', 'linear region', 'optimization']","""A deep neural networks (DNN) with piecewise linear activations can partition the input space into numerous small linear regions, where different linear functions are fitted. It is believed that the number of these regions represents the expressivity of a DNN. This paper provides a novel and meticulous perspective to look into DNNs: Instead of just counting the number of the linear regions, we study their local properties, such as the inspheres, the directions of the corresponding hyperplanes, the decision boundaries, and the relevance of the surrounding regions. We empirically observed that different optimization techniques lead to completely different linear regions, even though they result in similar classification accuracies. We hope our study can inspire the design of novel optimization techniques, and help discover and analyze the behaviors of DNNs.""","""This paper studies the properties of regions where a DNN with piecewise linear activations behaves linearly. They develop a variety of techniques to chracterize properties and show how these properties correlate with various parameters of the network architecture and training method. The reviewers were in consensus on the quality of the paper: The paper is well written and contains a number of insights that would be of broad interest to the deep learning community. I therefore recommend acceptance."""
280,"""Unified recurrent network for many feature types""","['sparse', 'recurrent', 'asynchronous', 'time', 'series']","""There are time series that are amenable to recurrent neural network (RNN) solutions when treated as sequences, but some series, e.g. asynchronous time series, provide a richer variation of feature types than current RNN cells take into account. In order to address such situations, we introduce a unified RNN that handles five different feature types, each in a different manner. Our RNN framework separates sequential features into two groups dependent on their frequency, which we call sparse and dense features, and which affect cell updates differently. Further, we also incorporate time features at the sequential level that relate to the time between specified events in the sequence and are used to modify the cell's memory state. We also include two types of static (whole sequence level) features, one related to time and one not, which are combined with the encoder output. The experiments show that the proposed modeling framework does increase performance compared to standard cells.""","""main summary: sparse time LSTM discussions; reviewer 4: technical description of the proposed method insufficient, reviewer 2, 3: same paper sent to ICLR 2019 and rejected recommendation: rejected, based on all reviewers comments"""
281,"""S2VG: Soft Stochastic Value Gradient method""","['Model-based reinforcement learning', 'soft stochastic value gradient']","""Model-based reinforcement learning (MBRL) has shown its advantages in sample-efficiency over model-free reinforcement learning (MFRL). Despite the impressive results it achieves, it still faces a trade-off between the ease of data generation and model bias. In this paper, we propose a simple and elegant model-based reinforcement learning algorithm called soft stochastic value gradient method (S2VG). S2VG combines the merits of the maximum-entropy reinforcement learning and MBRL, and exploits both real and imaginary data. In particular, we embed the model in the policy training and learn pseudo-formula and pseudo-formula functions from the real (or imaginary) data set. Such embedding enables us to compute an analytic policy gradient through the back-propagation rather than the likelihood-ratio estimation, which can reduce the variance of the gradient estimation. We name our algorithm Soft Stochastic Value Gradient method to indicate its connection with the well-known stochastic value gradient method in \citep{heess2015Learning}.""","""The authors consider improvements to model-based reinforcement learning to improve sample efficiency and computational speed. They propose a method which they claim is simple and elegant and embeds the model in the policy learning step, this allows them to compute analytic gradients through the model which can have lower variance than likelihood ratio gradients. They evaluate their method on Mujoco with limited data. All of the reviewers found the presentation confusing and below the bar for an acceptable submission. Although the authors tried to explain the algorithm better to the reviewers, they did not find the presentation sufficiently improved. I agree that the paper has substantial room for improvement around clarity. Reviewers also asked that experiments be run for more time steps. I agree that this would be an important addition as many model-based reinforcement learning approaches perform worse asymptotically model free approaches and it would be interesting to see how the proposed approach does. A reviewer pointed out that equation 2 is missing a term, and indeed I believe that is true. The authors response is not correct, they likely refer to an equation in SVG where the state is integrated out. Finally, the method does not compare to state-of-the-art model-based approaches, claiming that they use ensembles or uncertainty to improve performance. The authors would need to show that adding either of these to their approach attains similar performance to state-of-the-art approaches. At this time, this paper is below the bar for acceptance."""
282,"""Representation Learning Through Latent Canonicalizations""","['representation learning', 'latent canonicalization', 'sim2real', 'few shot', 'disentanglement']","""We seek to learn a representation on a large annotated data source that generalizes to a target domain using limited new supervision. Many prior approaches to this problem have focused on learning disentangled representations so that as individual factors vary in a new domain, only a portion of the representation need be updated. In this work, we seek the generalization power of disentangled representations, but relax the requirement of explicit latent disentanglement and instead encourage linearity of individual factors of variation by requiring them to be manipulable by learned linear transformations. We dub these transformations latent canonicalizers, as they aim to modify the value of a factor to a pre-determined (but arbitrary) canonical value (e.g., recoloring the image foreground to black). Assuming a source domain with access to meta-labels specifying the factors of variation within an image, we demonstrate experimentally that our method helps reduce the number of observations needed to generalize to a similar target domain when compared to a number of supervised baselines. ""","""This paper proposes a method to allow models to generalize more effectively through the use of latent linear transforms. Overall, I think this method is interesting, but both R2 and R4 were concerned with the experimental evaluation being too simplistic, and the method not being applicable to areas where a good simulator is not available. This seems like a very valid concern to me, and given the high bar for acceptance to ICLR, I would suggest that the paper is not accepted at this time. I would encourage the authors to continue with follow-up experiments that better showcase the generality of the method, and re-submit a more polished draft to a conference in the near future."""
283,"""Improved Generalization Bound of Permutation Invariant Deep Neural Networks""","['Deep Neural Network', 'Invariance', 'Symmetry', 'Group', 'Generalization']","""We theoretically prove that a permutation invariant property of deep neural networks largely improves its generalization performance. Learning problems with data that are invariant to permutations are frequently observed in various applications, for example, point cloud data and graph neural networks. Numerous methodologies have been developed and they achieve great performances, however, understanding a mechanism of the performance is still a developing problem. In this paper, we derive a theoretical generalization bound for invariant deep neural networks with a ReLU activation to clarify their mechanism. Consequently, our bound shows that the main term of their generalization gap is improved by pseudo-formula where pseudo-formula is a number of permuting coordinates of data. Moreover, we prove that an approximation power of invariant deep neural networks can achieve an optimal rate, though the networks are restricted to be invariant. To achieve the results, we develop several new proof techniques such as correspondence with a fundamental domain and a scale-sensitive metric entropy.""","""This work proves a generalization bound for permutation invariant neural networks (with ReLU activations). While it appears the proof is technically sound and the exact result is novel, reviewers did not feel that the proof significantly improves our understanding of model generalization relative to prior work. Because of this, the work is too incremental in its current form."""
284,"""Robust saliency maps with distribution-preserving decoys""","['explainable machine learning', 'explainable AI', 'deep learning interpretability', 'saliency maps', 'perturbation', 'convolutional neural network']","""Saliency methods help to make deep neural network predictions more interpretable by identifying particular features, such as pixels in an image, that contribute most strongly to the network's prediction. Unfortunately, recent evidence suggests that many saliency methods perform poorly when gradients are saturated or in the presence of strong inter-feature dependence or noise injected by an adversarial attack. In this work, we propose a data-driven technique that uses the distribution-preserving decoys to infer robust saliency scores in conjunction with a pre-trained convolutional neural network classifier and any off-the-shelf saliency method. We formulate the generation of decoys as an optimization problem, potentially applicable to any convolutional network architecture. We also propose a novel decoy-enhanced saliency score, which provably compensates for gradient saturation and considers joint activation patterns of pixels in a single-layer convolutional neural network. Empirical results on the ImageNet data set using three different deep neural network architectures---VGGNet, AlexNet and ResNet---show both qualitatively and quantitatively that decoy-enhanced saliency scores outperform raw scores produced by three existing saliency methods.""","""This submission proposes a method to explain deep vision models using saliency maps that are robust to certain input perturbations. Strengths: -The paper is clear and well-written. -The approach is interesting. Weaknesses: -The motivation and formulation of the approach (e.g. coherence vs explanation and the use of decoys) was not convincing. -The validation needs additional experiments and comparisons to recent works. These weaknesses were not sufficiently addressed in the discussion phase. AC agrees with the majority recommendation to reject."""
285,"""Graph inference learning for semi-supervised classification""","['semi-supervised classification', 'graph inference learning']","""In this work, we address the semi-supervised classification of graph data, where the categories of those unlabeled nodes are inferred from labeled nodes as well as graph structures. Recent works often solve this problem with the advanced graph convolution in a conventional supervised manner, but the performance could be heavily affected when labeled data is scarce. Here we propose a Graph Inference Learning (GIL) framework to boost the performance of node classification by learning the inference of node labels on graph topology. To bridge the connection of two nodes, we formally define a structure relation by encapsulating node attributes, between-node paths and local topological structures together, which can make inference conveniently deduced from one node to another node. For learning the inference process, we further introduce meta-optimization on structure relations from training nodes to validation nodes, such that the learnt graph inference capability can be better self-adapted into test nodes. Comprehensive evaluations on four benchmark datasets (including Cora, Citeseer, Pubmed and NELL) demonstrate the superiority of our GIL when compared with other state-of-the-art methods in the semi-supervised node classification task.""","""The authors propose a graph inference learning framework to address the issues of sparse labeled data in graphs. The authors use structural information and node attributes to define a structure relation which is then use to infer unknown labels from known labels. The authors demonstrate the effectiveness of their approach on four benchmark datasets. The approach presented in the paper is sound and the empirical results are convincing. All reviewers have given a positive rating for this paper. Two reviewers had some initial concerns about the paper but after the rebuttal they have acknowledged the answers given by the authors and adjusted their scores. R1 still has concerns about the motivation of the paper and I request the authors to adequately address this in their final version."""
286,"""A Coordinate-Free Construction of Scalable Natural Gradient""","['Natural gradient', 'second-order optimization', 'K-FAC', 'parameterization invariance', 'deep learning']","""Most neural networks are trained using first-order optimization methods, which are sensitive to the parameterization of the model. Natural gradient descent is invariant to smooth reparameterizations because it is defined in a coordinate-free way, but tractable approximations are typically defined in terms of coordinate systems, and hence may lose the invariance properties. We analyze the invariance properties of the Kronecker-Factored Approximate Curvature (K-FAC) algorithm by constructing the algorithm in a coordinate-free way. We explicitly construct a Riemannian metric under which the natural gradient matches the K-FAC update; invariance to affine transformations of the activations follows immediately. We extend our framework to analyze the invariance properties of K-FAC appied to convolutional networks and recurrent neural networks, as well as metrics other than the usual Fisher metric.""","""The authors analyze the natural gradient algorithm for training a neural net from a theoretical perspective and prove connections to the K-FAC algorithm. The paper is poorly written and contains no experimental evaluation or well established implications wrt practical significance of the results."""
287,"""Unsupervised Disentanglement of Pose, Appearance and Background from Images and Videos""",['unsupervised landmark discovery'],"""Unsupervised landmark learning is the task of learning semantic keypoint-like representations without the use of expensive keypoint-level annotations. A popular approach is to factorize an image into a pose and appearance data stream, then to reconstruct the image from the factorized components. The pose representation should capture a set of consistent and tightly localized landmarks in order to facilitate reconstruction of the input image. Ultimately, we wish for our learned landmarks to focus on the foreground object of interest. However, the reconstruction task of the entire image forces the model to allocate landmarks to model the background. This work explores the effects of factorizing the reconstruction task into separate foreground and background reconstructions, conditioning only the foreground reconstruction on the unsupervised landmarks. Our experiments demonstrate that the proposed factorization results in landmarks that are focused on the foreground object of interest. Furthermore, the rendered background quality is also improved, as the background rendering pipeline no longer requires the ill-suited landmarks to model its pose and appearance. We demonstrate this improvement in the context of the video-prediction.""","""The paper proposes an approach for unsupervised learning of keypoint landmarks from images and videos by decomposing them into the foreground and static background. The technical approach builds upon related prior works such as Lorenz et al. 2019 and Jakab et al. 2018 by extending them with foreground/background separation. The proposed method works well for static background achieving strong pose prediction results. The weaknesses of the paper are that (1) the proposed method is a fairly reasonable but incremental extension of existing techniques; (2) it relies on a strong assumption on the property of static backgrounds; (3) video prediction results are of limited significance and scope. In particular, the proposed method may work for simple data like KTH but is very limited for modeling videos as it is not well-suited to handle moving backgrounds, interactions between objects (e.g., robot arm in the foreground and objects in the background), and stochasticity. """
288,"""Inductive representation learning on temporal graphs""","['temporal graph', 'inductive representation learning', 'functional time encoding', 'self-attention']","""Inductive representation learning on temporal graphs is an important step toward salable machine learning on real-world dynamic networks. The evolving nature of temporal dynamic graphs requires handling new nodes as well as capturing temporal patterns. The node embeddings, which are now functions of time, should represent both the static node features and the evolving topological structures. Moreover, node and topological features can be temporal as well, whose patterns the node embeddings should also capture. We propose the temporal graph attention (TGAT) layer to efficiently aggregate temporal-topological neighborhood features to learn the time-feature interactions. For TGAT, we use the self-attention mechanism as building block and develop a novel functional time encoding technique based on the classical Bochner's theorem from harmonic analysis. By stacking TGAT layers, the network recognizes the node embeddings as functions of time and is able to inductively infer embeddings for both new and observed nodes as the graph evolves. The proposed approach handles both node classification and link prediction task, and can be naturally extended to include the temporal edge features. We evaluate our method with transductive and inductive tasks under temporal settings with two benchmark and one industrial dataset. Our TGAT model compares favorably to state-of-the-art baselines as well as the previous temporal graph embedding approaches.""","""The major contribution of this paper is the use of random Fourier features as temporal (positional) encoding for dynamic graphs. The reviewers all find the proposed method interesting, and believes that this is a paper with reasonable contributions. One comment pointed out that the connection between Time2Vec and harmonic analysis has been discussed in the previous work, and we suggest the authors to include this discussion/comparison in the paper."""
289,"""The Effect of Neural Net Architecture on Gradient Confusion & Training Performance""","['neural network architecture', 'speed of training', 'layer width', 'network depth']","""The goal of this paper is to study why typical neural networks train so fast, and how neural network architecture affects the speed of training. We introduce a simple concept called gradient confusion to help formally analyze this. When confusion is high, stochastic gradients produced by different data samples may be negatively correlated, slowing down convergence. But when gradient confusion is low, data samples interact harmoniously, and training proceeds quickly. Through novel theoretical and experimental results, we show how the neural net architecture affects gradient confusion, and thus the efficiency of training. We show that increasing the width of neural networks leads to lower gradient confusion, and thus easier model training. On the other hand, increasing the depth of neural networks has the opposite effect. Finally, we observe empirically that techniques like batch normalization and skip connections reduce gradient confusion, which helps reduce the training burden of very deep networks.""","""This paper introduces the concept of gradient confusion to show how the neural network architecture affects the speed of training. The reviewers' opinion on this paper varies widely, also after the discussion phase. The main disagreement is on the significance of this work, and whether the concept of gradient confusion adds something meaningful to the existing literature with respect to understanding deep networks. The strong disagreement on this paper suggest that the paper is not quite ready yet for ICLR, but that the authors should make another iteration on the paper to strengthen the case for its significance. """
290,"""R-TRANSFORMER: RECURRENT NEURAL NETWORK ENHANCED TRANSFORMER""","['Sequence Modeling', 'Multi-head Attention', 'RNNs']","""Recurrent Neural Networks have long been the dominating choice for sequence modeling. However, it severely suffers from two issues: impotent in capturing very long-term dependencies and unable to parallelize the sequential computation procedure. Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks. Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design efforts. In this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoids their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings. We evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks.""","""The submission proposes a variant of a Transformer architecture that does not use positional embeddings to model local structural patterns but instead adds a recurrent layer before each attention layer to maintain local context. The approach is empirically verified on a number of domains. The reviewers had concerns with the paper, most notably that the architectural modification is not sufficiently novel or significant to warrant publication, that appropriate ablations and baselines were not done to convincingly show the benefit of the approach, that the speed tradeoff was not adequately discussed, and that the results were not compared to actual SOTA results. For these reasons, the recommendation is to reject the paper."""
291,"""BatchEnsemble: an Alternative Approach to Efficient Ensemble and Lifelong Learning""","['deep learning', 'ensembles']",""" Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks. However, an ensembles cost for both training and testing increases linearly with the number of networks, which quickly becomes untenable. In this paper, we propose BatchEnsemble, an ensemble method whose computational and memory costs are significantly lower than typical ensembles. BatchEnsemble achieves this by defining each weight matrix to be the Hadamard product of a shared weight among all ensemble members and a rank-one matrix per member. Unlike ensembles, BatchEnsemble is not only parallelizable across devices, where one device trains one member, but also parallelizable within a device, where multiple ensemble members are updated simultaneously for a given mini-batch. Across CIFAR-10, CIFAR-100, WMT14 EN-DE/EN-FR translation, and out-of-distribution tasks, BatchEnsemble yields competitive accuracy and uncertainties as typical ensembles; the speedup at test time is 3X and memory reduction is 3X at an ensemble of size 4. We also apply BatchEnsemble to lifelong learning, where on Split-CIFAR-100, BatchEnsemble yields comparable performance to progressive neural networks while having a much lower computational and memory costs. We further show that BatchEnsemble can easily scale up to lifelong learning on Split-ImageNet which involves 100 sequential learning tasks""","""This paper proposed an improved ensemble method called BatchEnsemble, where the weight matrix is decomposed as the element-wise product of a shared weigth metrix and a rank-one matrix for each member. The effectiveness of the proposed methods has been verified by experiments on a list of various tasks including image classification, machine translation, lifelong learning and uncertainty modeling. The idea is simple and easy to follow. Although some reviewers thought it lacks of in-deep analysis, I would like to see it being accepted so the community can benefit from it."""
292,"""Mixed Precision DNNs: All you need is a good parametrization""","['Deep Neural Network Compression', 'Quantization', 'Straight through gradients']","""Efficient deep neural network (DNN) inference on mobile or embedded devices typically involves quantization of the network parameters and activations. In particular, mixed precision networks achieve better performance than networks with homogeneous bitwidth for the same size constraint. Since choosing the optimal bitwidths is not straight forward, training methods, which can learn them, are desirable. Differentiable quantization with straight-through gradients allows to learn the quantizer's parameters using gradient methods. We show that a suited parametrization of the quantizer is the key to achieve a stable training and a good final performance. Specifically, we propose to parametrize the quantizer with the step size and dynamic range. The bitwidth can then be inferred from them. Other parametrizations, which explicitly use the bitwidth, consistently perform worse. We confirm our findings with experiments on CIFAR-10 and ImageNet and we obtain mixed precision DNNs with learned quantization parameters, achieving state-of-the-art performance.""","""The reviewers uniformly vote to accept this paper. Please take comments into account when revising for the camera ready. I was also very impressed by the authors' responsiveness to reviewer comments, putting in additional work after submission."""
293,"""Role-Wise Data Augmentation for Knowledge Distillation""","['Data Augmentation', 'Knowledge Distillation']","""Knowledge Distillation (KD) is a common method for transferring the ``knowledge'' learned by one machine learning model (the teacher) into another model (the student), where typically, the teacher has a greater capacity (e.g., more parameters or higher bit-widths). To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated. Due to the difference in model capacities, the student may not benefit fully from the same data points on which the teacher is trained. On the other hand, a human teacher may demonstrate a piece of knowledge with individualized examples adapted to a particular student, for instance, in terms of her cultural background and interests. Inspired by this behavior, we design data augmentation agents with distinct roles to facilitate knowledge distillation. Our data augmentation agents generate distinct training data for the teacher and student, respectively. We focus specifically on KD when the teacher network has greater precision (bit-width) than the student network. We find empirically that specially tailored data points enable the teacher's knowledge to be demonstrated more effectively to the student. We compare our approach with existing KD methods on training popular neural architectures and demonstrate that role-wise data augmentation improves the effectiveness of KD over strong prior approaches. The code for reproducing our results will be made publicly available.""","""This paper studies Population-Based Augmentation in the context of knowledge distillation (KD) and proposes a role-wise data augmentation schemes for improved KD. While the reviewers believe that there is some merit in the proposed approach, its incremental nature and inherent complexity require a cleaner exposition and a stronger empirical evaluation on additional data sets. I will hence recommend the rejection of this manuscript in the current state. Nevertheless, applying PBA to KD seems to be an interesting direction and we encourage the authors to add the missing experiments and to carefully incorporate the reviewer feedback to improve the manuscript."""
294,"""A Theoretical Analysis of the Number of Shots in Few-Shot Learning""","['Few shot learning', 'Meta Learning', 'Performance Bounds']","""Few-shot classification is the task of predicting the category of an example from a set of few labeled examples. The number of labeled examples per category is called the number of shots (or shot number). Recent works tackle this task through meta-learning, where a meta-learner extracts information from observed tasks during meta-training to quickly adapt to new tasks during meta-testing. In this formulation, the number of shots exploited during meta-training has an impact on the recognition performance at meta-test time. Generally, the shot number used in meta-training should match the one used in meta-testing to obtain the best performance. We introduce a theoretical analysis of the impact of the shot number on Prototypical Networks, a state-of-the-art few-shot classification method. From our analysis, we propose a simple method that is robust to the choice of shot number used during meta-training, which is a crucial hyperparameter. The performance of our model trained for an arbitrary meta-training shot number shows great performance for different values of meta-testing shot numbers. We experimentally demonstrate our approach on different few-shot classification benchmarks.""","""The reviewers generally found the paper's contribution to be valuable and informative, and I believe that this paper should be accepted for publication and a poster presentation. I would strongly recommend to the authors to carefully read over the reviews and address any comments or concerns that were not yet addressed in the rebuttal."""
295,"""CaptainGAN: Navigate Through Embedding Space For Better Text Generation""","['Generative Adversarial Network', 'Text Generation', 'Straight-Through Estimator']","""Score-function-based text generation approaches such as REINFORCE, in general, suffer from high computational complexity and training instability problems. This is mainly due to the non-differentiable nature of the discrete space sampling and thus these methods have to treat the discriminator as a reward function and ignore the gradient information. In this paper, we propose a novel approach, CaptainGAN, which adopts the straight-through gradient estimator and introduces a re-centered gradient estimation technique to steer the generator toward better text tokens through the embedding space. Our method is stable to train and converges quickly without maximum likelihood pre-training. On multiple metrics of text quality and diversity, our method outperforms existing GAN-based methods on natural language generation.""","""This paper proposes a method to train generative adversarial nets for text generation. The paper proposes to address the challenge of discrete sequences using straight-through and gradient centering. The reviewers found that the results on COCO Image Captions and EMNLP 2017 News were interesting. However, this paper is borderline because it does not sufficiently motivate one of its key contributions: the gradient centering. The paper establishes that it provides an improvement in ablation, but more in-depth analysis would significantly improve the paper. I strongly encourage the authors to resubmit the paper once this has been addressed."""
296,"""Independence-aware Advantage Estimation""","['Reinforcement Learning', 'Advantage Estimation']","""Most of existing advantage function estimation methods in reinforcement learning suffer from the problem of high variance, which scales unfavorably with the time horizon. To address this challenge, we propose to identify the independence property between current action and future states in environments, which can be further leveraged to effectively reduce the variance of the advantage estimation. In particular, the recognized independence property can be naturally utilized to construct a novel importance sampling advantage estimator with close-to-zero variance even when the Monte-Carlo return signal yields a large variance. To further remove the risk of the high variance introduced by the new estimator, we combine it with existing Monte-Carlo estimator via a reward decomposition model learned by minimizing the estimation variance. Experiments demonstrate that our method achieves higher sample efficiency compared with existing advantage estimation methods in complex environments. ""","""Policy gradient methods typically suffer from high variance in the advantage function estimator. The authors point out independence property between the current action and future states which implies that certain terms from the advantage estimator can be omitted when this property holds. Based on this fact, they construct a novel important sampling based advantage estimator. They evaluate their approach on simple discrete action environments and demonstrate reduced variance and improved performance. Reviewers were generally concerned about the clarity of the technical exposition and the positioning of this work with respect to other estimators of the advantage function which use control variates. The authors clarified differences between their approach and previous approaches using control variance and clarified many of the technical questions that reviewers asked about. I am not convinced by the merits of this approach. While, I think the fundamental idea is interesting, the experiments are limited to simple discrete environments and no comparison is made to other control variate based approaches for reducing variance. Furthermore, due to the function approximation which introduces bias, the method should be compared to actor critic methods which directly estimate the advantage function. Finally, one of the advantages of on policy policy gradient methods is its simplicity. This method introduces many additional steps and parameters to be learned. The authors would need to demonstrate large improvements in sample efficiency on more complex tasks to justify this added complexity. At this time, I do not recommend this paper for acceptance."""
297,"""Conditional Learning of Fair Representations""","['algorithmic fairness', 'representation learning']","""We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting. Two key components underpinning the design of our algorithm are balanced error rate and conditional alignment of representations. We show how these two components contribute to ensuring accuracy parity and equalized false-positive and false-negative rates across groups without impacting demographic parity. Furthermore, we also demonstrate both in theory and on two real-world experiments that the proposed algorithm leads to a better utility-fairness trade-off on balanced datasets compared with existing algorithms on learning fair representations for classification. ""","""This paper provides a new algorithm for learning fair representation for two different fairness criteria--accuracy parity and equalized odds. The reviewers agree that the paper provides novel techniques, although the experiments may appear to be a bit weak. Overall, this paper gives new contributions to the fair representation learning literature. The authors should consider citing and discussing the relationship with the following work: A Reductions Approach to Fair Classification., ICML 2018"""
298,"""Sparse Coding with Gated Learned ISTA""","['Sparse coding', 'deep learning', 'learned ISTA', 'convergence analysis']","""In this paper, we study the learned iterative shrinkage thresholding algorithm (LISTA) for solving sparse coding problems. Following assumptions made by prior works, we first discover that the code components in its estimations may be lower than expected, i.e., require gains, and to address this problem, a gated mechanism amenable to theoretical analysis is then introduced. Specific design of the gates is inspired by convergence analyses of the mechanism and hence its effectiveness can be formally guaranteed. In addition to the gain gates, we further introduce overshoot gates for compensating insufficient step size in LISTA. Extensive empirical results confirm our theoretical findings and verify the effectiveness of our method.""","""The paper extends LISTA by introducing gain gates and overshoot gates, which respectively address underestimation of code components and compensation of small step size of LISTA. The authors theoretically analyze these extensions and backup the effectiveness of their proposed algorithm with encouraging empirical results. All reviewers are highly positive on the contributions of this paper, and appreciate the rigorous theory which is further supported by convincing experiments. All three reviewers recommended accept. """
299,"""RaCT: Toward Amortized Ranking-Critical Training For Collaborative Filtering ""","['Collaborative Filtering', 'Recommender Systems', 'Actor-Critic', 'Learned Metrics']","""We investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to more directly maximize ranking-based objective functions. Specifically, we train a critic network to approximate ranking-based metrics, and then update the actor network to directly optimize against the learned metrics. In contrast to traditional learning-to-rank methods that require re-running the optimization procedure for new lists, our critic-based method amortizes the scoring process with a neural network, and can directly provide the (approximate) ranking scores for new lists. We demonstrate the actor-critic's ability to significantly improve the performance of a variety of prediction models, and achieve better or comparable performance to a variety of strong baselines on three large-scale datasets. ""","""The reviewers generally agreed that the application and method are interesting and relevant, and the paper should be accepted. I would encourage the authors to carefully go through the reviewers' suggestions and address them in the final."""
300,"""Provable robustness against all adversarial pseudo-formula -perturbations for 1$""","['adversarial robustness', 'provable guarantees']","""In recent years several adversarial attacks and defenses have been proposed. Often seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma are provable robustness guarantees. While provably robust models for specific pseudo-formula -perturbation models have been developed, we show that they do not come with any guarantee against other pseudo-formula -perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt pseudo-formula - \textit{and} pseudo-formula -perturbations and show how that leads to the first provably robust models wrt any pseudo-formula -norm for 1$.""","""This paper extends the degree to which ReLU networks can be provably resistant to a broader class of adversarial attacks using a MMR-Universal regularization scheme. In particular, the first provably robust model in terms of lp norm perturbations is developed, where robustness holds with respect to *any* p greater than or equal to one (as opposed to prior work that may only apply to specific lp-norm perturbations). While I support accepting this paper based on the strong reviews and significant technical contribution, one potential drawback is the lack of empirical tests with a broader cohort of representative CNN architectures (as pointed out by R1). In this regard, the rebuttal promises that additional experiments with larger models will be added in the future to the final version, but obviously such results cannot be used to evaluate performance at this time."""
301,"""Variational Autoencoders with Normalizing Flow Decoders""",[],"""Recently proposed normalizing flow models such as Glow (Kingma & Dhariwal, 2018) have been shown to be able to generate high quality, high dimensional images with relatively fast sampling speed. Due to the inherently restrictive design of architecture , however, it is necessary that their model are excessively deep in order to achieve effective training. In this paper we propose to combine Glow model with an underlying variational autoencoder in order to counteract this issue. We demonstrate that such our proposed model is competitive with Glow in terms of image quality while requiring far less time for training. Additionally, our model achieves state-of-the-art FID score on CIFAR-10 for a likelihood-based model.""","""The paper received mixed reviews: WR (R1,R3) and WA (R2). AC has carefully read reviews and rebuttal and examined the paper. Unfortunately, the AC sides with R1 & R3, who are more experienced in this field than R2, and feels that paper does not quite meet the acceptance threshold. The authors should incorporate the comments of the reviewers and resubmit to another venue. """
302,"""MULTI-STAGE INFLUENCE FUNCTION""","['influence function', 'multistage training', 'pretrained model']","""Multi-stage training and knowledge transfer from a large-scale pretrain task to various fine-tune end tasks have revolutionized natural language processing (NLP) and computer vision (CV), with state-of-the-art performances constantly being improved. In this paper, we develop a multi-stage influence function score to track predictions from a finetune model all the way back to the pretrain data. With this score, we can identify the pretrain examples in the pretrain task that contribute most to a prediction in the fine-tune task. The proposed multi-stage influence function generalizes the original influence function for a single model in Koh et al 2017, thereby enabling influence computation through both pretrain and fine-tune models. We test our proposed method in various experiments to show its effectiveness and potential applications.""","""This paper extends the idea of influence functions (aka the implicit function theorem) to multi-stage training pipelines, and also adds an L2 penalty to approximate the effect of training for a limited number of iterations. I think this paper is borderline. I also think that R3 had the best take and questions on this paper. Pros: - The main idea makes sense, and could be used to understand real training pipelines better. - The experiments, while mostly small-scale, answer most of the immediate questions about this model. Cons: - The paper still isn't all that polished. E.g. on page 4: ""Algorithm 1 shows how to compute the influence score in (11). The pseudocode for computing the influence function in (11) is shown in Algorithm 1"" - I wish the image dataset experiments had been done with larger images and models. Ultimately, the straightforwardness of the extension and the relative niche applications mean that although the main idea is sound, the quality and the overall impact of this paper don't quite meet the bar."""
303,"""THE EFFECT OF ADVERSARIAL TRAINING: A THEORETICAL CHARACTERIZATION""","['adversarial training', 'robustness', 'separable data']","""It has widely shown that adversarial training (Madry et al., 2018) is effective in defending adversarial attack empirically. However, the theoretical understanding of the difference between the solution of adversarial training and that of standard training is limited. In this paper, we characterize the solution of adversarial training for linear classification problem for a full range of adversarial radius "". Specifically, we show that if the data themselves are -strongly linearly-separable, adversarial training with radius smaller than "" converges to the hard margin solution of SVM with a faster rate than standard training. If the data themselves are not -strongly linearly-separable, we show that adversarial training with radius "" is stable to outliers while standard training is not. Moreover, we prove that the classifier returned by adversarial training with a large radius "" has low confidence in each data point. Experiments corroborate our theoretical finding well.""","""This paper studies adversarial training in the linear classification setting, and shows a rate of convergence for adversarial training of o(1/log T) to the hard margin SVM solution under a set of assumptions. While 2 reviewers agree that the problem and the central result is somewhat interesting (though R3 is uncertain of the applicability to deep learning, I agree that useful insights can often be gleaned from studying the linear case), reviewers were critical of the degree of clarity and rigour in the writing, including notation, symbol reuse, repetitions/redundancies, and clarity surrounding the assumptions made. No updates to the paper were made and reviewers did not feel their concerns were addressed by the rebuttals. I therefore recommend rejection, but would encourage the authors to continue refining their paper in order to showcase their results more clearly and didactically."""
304,"""Policy Message Passing: A New Algorithm for Probabilistic Graph Inference""","['graph inference algorithm', 'graph reasoning', 'variational inference']","""A general graph-structured neural network architecture operates on graphs through two core components: (1) complex enough message functions; (2) a fixed information aggregation process. In this paper, we present the Policy Message Passing algorithm, which takes a probabilistic perspective and reformulates the whole information aggregation as stochastic sequential processes. The algorithm works on a much larger search space, utilizes reasoning history to perform inference, and is robust to noisy edges. We apply our algorithm to multiple complex graph reasoning and prediction tasks and show that our algorithm consistently outperforms state-of-the-art graph-structured models by a significant margin.""","""This paper was reviewed by 3 experts, who recommend Weak Reject, Weak Reject, and Reject. The reviewers were overall supportive of the work presented in the paper and felt it would have merit for eventual publication. However, the reviewers identified a number of serious concerns about writing quality, missing technical details, experiments, and missing connections to related work. In light of these reviews, and the fact that the authors have not submitted a response to reviews, we are not able to accept the paper. However given the supportive nature of the reviews, we hope the authors will work to polish the paper and submit to another venue."""
305,"""Learning Expensive Coordination: An Event-Based Deep RL Approach""","['Multi-Agent Deep Reinforcement Learning', 'Deep Reinforcement Learning', 'Leader–Follower Markov Game', 'Expensive Coordination']","""Existing works in deep Multi-Agent Reinforcement Learning (MARL) mainly focus on coordinating cooperative agents to complete certain tasks jointly. However, in many cases of the real world, agents are self-interested such as employees in a company and clubs in a league. Therefore, the leader, i.e., the manager of the company or the league, needs to provide bonuses to followers for efficient coordination, which we call expensive coordination. The main difficulties of expensive coordination are that i) the leader has to consider the long-term effect and predict the followers' behaviors when assigning bonuses and ii) the complex interactions between followers make the training process hard to converge, especially when the leader's policy changes with time. In this work, we address this problem through an event-based deep RL approach. Our main contributions are threefold. (1) We model the leader's decision-making process as a semi-Markov Decision Process and propose a novel multi-agent event-based policy gradient to learn the leader's long-term policy. (2) We exploit the leader-follower consistency scheme to design a follower-aware module and a follower-specific attention module to predict the followers' behaviors and make accurate response to their behaviors. (3) We propose an action abstraction-based policy gradient algorithm to reduce the followers' decision space and thus accelerate the training process of followers. Experiments in resource collections, navigation, and the predator-prey game reveal that our approach outperforms the state-of-the-art methods dramatically.""","""This paper tackles the challenge of incentivising selfish agents towards a collaborative goal. In doing so, the authors propose several new modules. The reviewers commented on experiments being extremely thorough. One reviewer commented on a lack of ablation study of the 3 contributions, which was promptly provided by the authors. The proposed method is also supported by theoretical derivations. The contributions appear to be quite novel, significantly improving performance of the studied SMGs. One reviewer mentioned the clarity being compromised by too much material being in the appendix, which has been addressed by the authors moving some main pieces of content to the main text. Two reviewer commented on the relevance being lower because of the problem not being widely studied in RL. I would disagree with the reviewers on this aspect, it is great to have new problem brought to light and have fresh and novel results, rather than having yet another paper work on Atari. I also think that the authors in their rebuttal made the practical relevance of their problem setting sufficiently clear with several practical examples. """
306,"""FEW-SHOT LEARNING ON GRAPHS VIA SUPER-CLASSES BASED ON GRAPH SPECTRAL MEASURES""","['Few shot graph classification', 'graph spectral measures', 'super-classes']","""We propose to study the problem of few-shot graph classification in graph neural networks (GNNs) to recognize unseen classes, given limited labeled graph examples. Despite several interesting GNN variants being proposed recently for node and graph classification tasks, when faced with scarce labeled examples in the few-shot setting, these GNNs exhibit significant loss in classification performance. Here, we present an approach where a probability measure is assigned to each graph based on the spectrum of the graphs normalized Laplacian. This enables us to accordingly cluster the graph base-labels associated with each graph into super-classes, where the L^p Wasserstein distance serves as our underlying distance metric. Subsequently, a super-graph constructed based on the super-classes is then fed to our proposed GNN framework which exploits the latent inter-class relationships made explicit by the super-graph to achieve better class label separation among the graphs. We conduct exhaustive empirical evaluations of our proposed method and show that it outperforms both the adaptation of state-of-the-art graph classification methods to few-shot scenario and our naive baseline GNNs. Additionally, we also extend and study the behavior of our method to semi-supervised and active learning scenarios.""","""The authors propose a method for few-shot learning for graph classification. The majority of reviewers agree on the novelty of the proposed method and that the problem is interesting. The authors have addressed all major concerns."""
307,"""Understanding and Stabilizing GANs' Training Dynamics with Control Theory""","['Generative Adversarial Nets', 'Stability Analysis', 'Control Theory']","""Generative adversarial networks~(GANs) have made significant progress on realistic image generation but often suffer from instability during the training process. Most previous analyses mainly focus on the equilibrium that GANs achieve, whereas a gap exists between such theoretical analyses and practical implementations, where it is the training dynamics that plays a vital role in the convergence and stability of GANs. In this paper, we directly model the dynamics of GANs and adopt the control theory to understand and stabilize it. Specifically, we interpret the training process of various GANs as certain types of dynamics in a unified perspective of control theory which enables us to model the stability and convergence easily. Borrowed from control theory, we adopt the widely-used negative feedback control to stabilize the training dynamics, which can be considered as an pseudo-formula regularization on the output of the discriminator. We empirically verify our method on both synthetic data and natural image datasets. The results demonstrate that our method can stabilize the training dynamics as well as converge better than baselines.""","""This paper suggests stabilizing the training of GANs using ideas from control theory. The reviewers all noted that the approach was well-motivated and seemed convinced that that the problem was a worthwhile one. However, there were universal concerns about the comparisons with baselines and performance over previous works on Stabilizing GAN training and the authors were not able to properly address them."""
308,"""Learning from Unlabelled Videos Using Contrastive Predictive Neural 3D Mapping""","['3D feature learning', 'unsupervised learning', 'inverse graphics', 'object discovery']","""Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction. One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint? Humans excel at this task. Our ability to imagine and fill in missing information is tightly coupled with perception: we feel as if we see the world in 3 dimensions, while in fact, information from only the front surface of the world hits our retinas. This paper explores the role of view prediction in the development of 3D visual recognition. We propose neural 3D mapping networks, which take as input 2.5D (color and depth) video streams captured by a moving camera, and lift them to stable 3D feature maps of the scene, by disentangling the scene content from the motion of the camera. The model also projects its 3D feature maps to novel viewpoints, to predict and match against target views. We propose contrastive prediction losses to replace the standard color regression loss, and show that this leads to better performance on complex photorealistic data. We show that the proposed model learns visual representations useful for (1) semi-supervised learning of 3D object detectors, and (2) unsupervised learning of 3D moving object detectors, by estimating the motion of the inferred 3D feature maps in videos of dynamic scenes. To the best of our knowledge, this is the first work that empirically shows view prediction to be a scalable self-supervised task beneficial to 3D object detection. ""","""The authors propose to learn space-aware 3D feature abstractions of the world given 2.5D input, by minimizing 3D and 2D view contrastive prediction objectives. The work builds upon Tung et al. (2019) but extends it by removing some of the limitations, making it thus more general. To do so, they learn an inverse graphics network which takes as input 2.5D video and maps to a 3D feature maps of the scene. The authors present experiments on both real and simulation datasets and their proposed approach is tested on feature learning, 3D moving object detection, and 3D motion estimation with good performance. All reviewers agree that this is an important problem in computer vision and the papers provides a working solution. The authors have done a good job with comparisons and make a clear case about their superiority of their model (large datasets, multiple tasks). Moreover, the rebuttal period has been quite productive, with the authors incorporating reviewers' comments in the manuscript, resulting thus in a stronger submission. Based in reviewer's comment and my own assessment, I think this paper should get accepted, as the experiments are solid with good results that the CV audience of ICLR would find relevant. """
309,"""Star-Convexity in Non-Negative Matrix Factorization""","['nmf', 'convexity', 'nonconvex optimization', 'average-case-analysis']","""Non-negative matrix factorization (NMF) is a highly celebrated algorithm for matrix decomposition that guarantees strictly non-negative factors. The underlying optimization problem is computationally intractable, yet in practice gradient descent based solvers often find good solutions. This gap between computational hardness and practical success mirrors recent observations in deep learning, where it has been the focus of extensive discussion and analysis. In this paper we revisit the NMF optimization problem and analyze its loss landscape in non-worst-case settings. It has recently been observed that gradients in deep networks tend to point towards the final minimizer throughout the optimization. We show that a similar property holds (with high probability) for NMF, provably in a non-worst case model with a planted solution, and empirically across an extensive suite of real-world NMF problems. Our analysis predicts that this property becomes more likely with growing number of parameters, and experiments suggest that a similar trend might also hold for deep neural networks --- turning increasing data sets and models into a blessing from an optimization perspective. ""","""The paper derives results for nonnegative-matrix factorization along the lines of recent results on SGD for DNNs, showing that the loss is star-convex towards randomized planted solutions. Overall, the paper is relatively well written and fairly clear. The reviewers agree that the theoretical contribution of the paper could be improved (tighten bounds) and that the experiments can be improved as well. In the context of other papers submitted to ICLR I therefore recommend to reject the paper. """
310,"""BayesOpt Adversarial Attack""","['Black-box Adversarial Attack', 'Bayesian Optimisation', 'Gaussian Process']","""Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input. Current approaches relying on substitute model training, gradient estimation or genetic algorithms often require an excessive number of queries. Therefore, they are not suitable for real-world systems where the maximum query number is limited due to cost. We propose a query-efficient black-box attack which uses Bayesian optimisation in combination with Bayesian model selection to optimise over the adversarial perturbation and the optimal degree of search space dimension reduction. We demonstrate empirically that our method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks.""","""This paper proposes a query-efficient black-box attack that uses Bayesian optimization in combination with Bayesian model selection to optimize over the adversarial perturbation and the optimal degree of search space dimension reduction. The method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks. The paper should be further improved in the final version (e.g., including more results on ImageNet data)."""
311,"""Learning Algorithmic Solutions to Symbolic Planning Tasks with a Neural Computer""",[],"""A key feature of intelligent behavior is the ability to learn abstract strategies that transfer to unfamiliar problems. Therefore, we present a novel architecture, based on memory-augmented networks, that is inspired by the von Neumann and Harvard architectures of modern computers. This architecture enables the learning of abstract algorithmic solutions via Evolution Strategies in a reinforcement learning setting. Applied to Sokoban, sliding block puzzle and robotic manipulation tasks, we show that the architecture can learn algorithmic solutions with strong generalization and abstraction: scaling to arbitrary task configurations and complexities, and being independent of both the data representation and the task domain.""","""The authors present a method that optimizes a differentiable neural computer with evolutionary search, and which can transfer abstract strategies to novel problems. The reviewers all agreed that the approach is interesting, though were concerned about the magnitude of the contribution / novelty compared to existing work, clarity of contributions, impact of pretraining, and simplicity of examples. While the reviewers felt that the authors resolved the many of their concerns in the rebuttal, there was remaining concern about the significance of the contribution. Thus, I recommend this paper for rejection at this time."""
312,"""Blockwise Adaptivity: Faster Training and Better Generalization in Deep Learning""","['optimization', 'deep learning', 'blockwise adaptivity']","""Stochastic methods with coordinate-wise adaptive stepsize (such as RMSprop and Adam) have been widely used in training deep neural networks. Despite their fast convergence, they can generalize worse than stochastic gradient descent. In this paper, by revisiting the design of Adagrad, we propose to split the network parameters into blocks, and use a blockwise adaptive stepsize. Intuitively, blockwise adaptivity is less aggressive than adaptivity to individual coordinates, and can have a better balance between adaptivity and generalization. We show theoretically that the proposed blockwise adaptive gradient descent has comparable regret in online convex learning and convergence rate for optimizing nonconvex objective as its counterpart with coordinate-wise adaptive stepsize, but is better up to some constant. We also study its uniform stability and show that blockwise adaptivity can lead to lower generalization error than coordinate-wise adaptivity. Experimental results show that blockwise adaptive gradient descent converges faster and improves generalization performance over Nesterov's accelerated gradient and Adam.""","""The authors propose an adaptive block-wise coordinate descent method and claim faster convergence and lower generalization error. While the reviewers agreed that this method may work well in practice, they had several concerns about the relevance of the theory and strength of the empirical results. After considering the author responses, the reviewers have agreed that this paper is not yet ready for publication. """
313,"""The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget""","['Variational Information Bottleneck', 'Reinforcement learning']","""In many applications, it is desirable to extract only the relevant information from complex input data, which involves making a decision about which input features are relevant. The information bottleneck method formalizes this as an information-theoretic optimization problem by maintaining an optimal tradeoff between compression (throwing away irrelevant input information), and predicting the target. In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input. This is typically the case when we have a standard conditioning input, such as a state observation, and a ``privileged'' input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent. In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication). Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, this compression operation itself needs unrestricted, lossless access. In this work, we propose the variational bandwidth bottleneck, which decides for each example on the estimated value of the privileged information before seeing it, i.e., only based on the standard input, and then accordingly chooses stochastically, whether to access the privileged input or not. We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.""","""Existing implementation of information bottleneck need access to privileged information which goes against the idea of compression. The authors propose variational bandwidth bottleneck which estimates the value of the privileged information and then stochastically decided whether to access this information or not. They provide a suitable approximation and show that their method improves generalisation in RL while reducing access to expensive information. These paper received only two reviews. However, both the reviews were favourable. During discussions with the AC the reviewers acknowledged that most of their concerns were addressed. R2 is still concerned that VBB does not result in improvement in terms of sample efficiency. I request the authors to adequately address this in the final version. Having said that, the paper does make other interesting contributions, hence I recommend that this paper should be accepted."""
314,"""Imitation Learning of Robot Policies using Language, Vision and Motion""","['robot learning', 'imitation learning', 'natural language processing']","""In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn can be used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end-user to influence a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate over a variety of conditions while remaining amenable to probabilistic interpretability.""","""The present paper addresses the problem of imitation learning in multi-modal settings, combining vision, language and motion. The proposed approach learns an abstract task representation, and the goal is to use this as a basis for generalization. This paper was subject to considerable discussion, and the authors clarified several issues that reviewers raised during the rebuttal phase. Overall, the empirical study presented in the paper remains limited, for example in terms of ablations (which components of the proposed model have what effect on performance) and placement in the context of prior work. As a result, the depth of insights is not yet sufficient for publication."""
315,"""Regularizing Deep Multi-Task Networks using Orthogonal Gradients""","['multi-task learning', 'gradient regularization', 'orthogonal gradients']","""Deep neural networks are a promising approach towards multi-task learning because of their capability to leverage knowledge across domains and learn general purpose representations. Nevertheless, they can fail to live up to these promises as tasks often compete for a model's limited resources, potentially leading to lower overall performance. In this work we tackle the issue of interfering tasks through a comprehensive analysis of their training, derived from looking at the interaction between gradients within their shared parameters. Our empirical results show that well-performing models have low variance in the angles between task gradients and that popular regularization methods implicitly reduce this measure. Based on this observation, we propose a novel gradient regularization term that minimizes task interference by enforcing near orthogonal gradients. Updating the shared parameters using this property encourages task specific decoders to optimize different parts of the feature extractor, thus reducing competition. We evaluate our method with classification and regression tasks on the multiDigitMNIST and NYUv2 dataset where we obtain competitive results. This work is a first step towards non-interfering multi-task optimization.""","""This paper proposes a training approach that orthogonalizes gradients to enable better learning across multiple tasks. The idea is simple and intuitive. Given that there is past work following the same kind of ideas, it would be need to further: (a) expand the experimental evaluation section with comparisons to prior work and, ideally, demonstrate stronger results. (b) study in more depth the assumptions behind gradient orthogonality for transfer. This would increase impact on top of past literature by explaining, besides intuitions, why gradient orthogonality helps for transfer in the first place. """
316,"""SDGM: Sparse Bayesian Classifier Based on a Discriminative Gaussian Mixture Model""","['classification', 'sparse Bayesian learning', 'Gaussian mixture model']","""In probabilistic classification, a discriminative model based on Gaussian mixture exhibits flexible fitting capability. Nevertheless, it is difficult to determine the number of components. We propose a sparse classifier based on a discriminative Gaussian mixture model (GMM), which is named sparse discriminative Gaussian mixture (SDGM). In the SDGM, a GMM-based discriminative model is trained by sparse Bayesian learning. This learning algorithm improves the generalization capability by obtaining a sparse solution and automatically determines the number of components by removing redundant components. The SDGM can be embedded into neural networks (NNs) such as convolutional NNs and can be trained in an end-to-end manner. Experimental results indicated that the proposed method prevented overfitting by obtaining sparsity. Furthermore, we demonstrated that the proposed method outperformed a fully connected layer with the softmax function in certain cases when it was used as the last layer of a deep NN.""","""This paper presents a method for merging a discriminative GMM with an ARD sparsity-promoting prior. This is accomplished by nesting the ARD prior update within a larger EM-based routine for handling the GMM, allowing the model to automatically remove redundant components and improve generalization. The resulting algorithm was deployed on standard benchmark data sets and compared against existing baselines such as logistic regression, RVMs, and SVMs. Overall, one potential weakness of this paper, which is admittedly somewhat subjective, is that the exhibited novelty of the proposed approach is modest. Indeed ARD approaches are now widely used in various capacities, and even if some hurdles must be overcome to implement the specific marriage with a discriminative GMM as reported here, at least one reviewer did not feel that this was sufficient to warrant publication. Other concerns related to the experiments and comparison with existing work. For example, one reviewer mentioned comparisons with Panousis et al., ""Nonparametric Bayesian Deep Networks with Local Competition,"" ICML 2019 and requested a discussion of differences. However, the rebuttal merely deferred this consideration to future work and provided no feedback regarding similarities or differences. In the end, all reviewers recommended rejecting this paper and I did not find any sufficient reason to overrule this consensus."""
317,"""Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery""","['reinforcement learning', 'semi-supervised learning', 'unsupervised learning', 'robotics', 'deep learning']","""Reinforcement learning requires manual specification of a reward function to learn a task. While in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. This shaping is difficult to specify by hand, particularly when the task is learned from raw observations, such as images. In this paper, we study how we can automatically learn dynamical distances: a measure of the expected number of time steps to reach a given goal state from any other state. These dynamical distances can be used to provide well-shaped reward functions for reaching new goals, making it possible to learn complex tasks efficiently. We show that dynamical distances can be used in a semi-supervised regime, where unsupervised interaction with the environment is used to learn the dynamical distances, while a small amount of preference supervision is used to determine the task goal, without any manually engineered reward function or goal examples. We evaluate our method both on a real-world robot and in simulation. We show that our method can learn to turn a valve with a real-world 9-DoF hand, using raw image observations and just ten preference labels, without any other supervision. Videos of the learned skills can be found on the project website: pseudo-url""","""The authors present a method to learn the expected number of time steps to reach any given state from any other state in a reinforcement learning setting. They show that these so-called dynamical distances can be used to increase learning efficiency by helping to shape reward. After some initial discussion, the reviewers had concerns about the applicability of this method to continuing problems without a clear goal state, learning issues due to the dependence of distance estimates on policy (and vice versa), experimental thoroughness, and a variety of smaller technical issues. While some of these were resolved, the largest outstanding issue is whether the proper comparisons were made to existing work other than DIAYN. The authors appear to agree that additional baselines would benefit the paper, but are uncertain whether this can occur in time. Nonetheless, after discussion the reviewers all appeared to agree on the merit of the core idea, though I strongly encourage the authors to address as many technical and baseline issues as possible before the camera ready deadline. In summary, I recommend this paper for acceptance."""
318,"""Behavior Regularized Offline Reinforcement Learning""","['reinforcement learning', 'offline RL', 'batch RL']","""In reinforcement learning (RL) research, it is common to assume access to direct online interactions with the environment. However in many real-world applications, access to the environment is limited to a fixed offline dataset of logged experience. In such settings, standard RL algorithms have been shown to diverge or otherwise yield poor performance. Accordingly, much recent work has suggested a number of remedies to these issues. In this work, we introduce a general framework, behavior regularized actor critic (BRAC), to empirically evaluate recently proposed methods as well as a number of simple baselines across a variety of offline continuous control tasks. Surprisingly, we find that many of the technical complexities introduced in recent methods are unnecessary to achieve strong performance. Additional ablations provide insights into which design choices matter most in the offline RL setting.""","""This paper is an empirical studies of methods to stabilize offline (ie, batch) RL methods where the dataset is available up front and not collected during learning. This can be an important setting in e.g. safety critical or production systems, where learned policies should not be applied on the real system until their performance and safety is verified. Since policies leave the area where training data is present, in such settings poor performance or divergence might result, unless divergence from the reference policy is regularized. This paper studies various methods to perform such regularization. The reviewers are all very happy about the thoroughness of the empirical work. The work only studies existing methods (and combination thereof), so the novelty is limited by design. The paper was also considered well written and easy to follow. The results were very similar between the considered regularizers, which somehow limits the usefulness of the paper as practical guideline (although at least now we know that perhaps we do not need to spend a lot of time choosing the best between these). Bigger differences were observed between ""value penalties"" versus ""policy regularization"". This seems to correspond to theoretical observations by Neu et al (pseudo-url, 2017), which is not cited in the manuscript. Although unpublished, I think that work is highly relevant for the current manuscript, and I'd strongly recommend the authors to consider its content. Some minor comments about the paper are given below. On the balance, the strong point of the paper is the empirical thoroughness and clarity, whereas novelty, significance, and theoretical analysis are weaker points. Due to the high selectivity of ICLR, I unfortunately have to recommend rejection for this manuscript. I have some minor comments about the contents of the paper: - The manuscript contains the line: ""Under this definition, such a behavior policy b is always well-defined even if the dataset was collected by multiple, distinct behavior policies"". Wouldn't simply defining the behavior as a mixture of the underlying behavior policies (when known) work equally well? - The paper mentions several earlier works that regularize policies update using the KL from a reference policy (or to a reference policy). The paper of Peters is cited in this context, although there the constraint is actually on the KL divergence between state-action distributions, resulting in a different type of regularization."""
319,"""Resizable Neural Networks""",[],""" In this paper, we present a deep convolutional neural network (CNN) which performs arbitrary resize operation on intermediate feature map resolution at stage-level. Motivated by weight sharing mechanism in neural architecture search, where a super-network is trained and sub-networks inherit the weights from the super-network, we present a novel CNN approach. We construct a spatial super-network which consists of multiple sub-networks, where each sub-network is a single scale network that obtain a unique spatial configuration, the convolutional layers are shared across all sub-networks. Such network, named as Resizable Neural Networks, are equivalent to training infinite single scale networks, but has no extra computational cost. Moreover, we present a training algorithm such that all sub-networks achieve better performance than individually trained counterparts. On large-scale ImageNet classification, we demonstrate its effectiveness on various modern network architectures such as MobileNet, ShuffleNet, and ResNet. To go even further, we present three variants of resizable networks: 1) Resizable as Architecture Search (Resizable-NAS). On ImageNet, Resizable-NAS ResNet-50 attain 0.4% higher on accuracy and 44% smaller than the baseline model. 2) Resizable as Data Augmentation (Resizable-Aug). While we use resizable networks as a data augmentation technique, it obtains superior performance on ImageNet classification, outperform AutoAugment by 1.2% with ResNet-50. 3) Adaptive Resizable Network (Resizable-Adapt). We introduce the adaptive resizable networks as dynamic networks, which further improve the performance with less computational cost via data-dependent inference.""","""This paper offers likely novel schemes for image resizing. The performance improvement is clear. Unfortunately two reviewers find substantial clarity issues in the manuscript after revision, and the AC concurs that this is still an issue. The paper is borderline but given the number of higher ranked papers in the pool is unable to be accepted unfortunately. """
320,"""The Local Elasticity of Neural Networks""",[],"""This paper presents a phenomenon in neural networks that we refer to as local elasticity. Roughly speaking, a classifier is said to be locally elastic if its prediction at a feature vector x' is not significantly perturbed, after the classifier is updated via stochastic gradient descent at a (labeled) feature vector x that is dissimilar to x' in a certain sense. This phenomenon is shown to persist for neural networks with nonlinear activation functions through extensive simulations on real-life and synthetic datasets, whereas this is not observed in linear classifiers. In addition, we offer a geometric interpretation of local elasticity using the neural tangent kernel (Jacot et al., 2018). Building on top of local elasticity, we obtain pairwise similarity measures between feature vectors, which can be used for clustering in conjunction with K-means. The effectiveness of the clustering algorithm on the MNIST and CIFAR-10 datasets in turn corroborates the hypothesis of local elasticity of neural networks on real-life data. Finally, we discuss some implications of local elasticity to shed light on several intriguing aspects of deep neural networks.""","""This paper presents a new phenomenon referred to as the ""local elasticity of neural networks"". The main argument is that the SGD update for nonlinear network at a local input x does not change the predictions at a different input x' (see Fig. 2). This is then connected to similarity using nearest-neighbor and kernel methods. An algorithm is also presented. The reviewers find the paper intriguing and believe that this could be interesting for the community. After the rebuttal period, one of the reviewers increased their score. I do agree with the view of the reviewers, although I found that the paper's presentation can be improved. For example, Fig. 1 is not clear at all, and the related work section basically talks about many existing works but does not discuss why they are related to this work and how this work add value to this existing works. I found Fig. 2 very clear and informative. I hope that the authors could further improve the presentation. This should help in improving the impact of the paper. With the reviewers score, I recommend to accept this paper, and encourage the authors to improve the presentation of the paper."""
321,"""Deep Evidential Uncertainty""","['Evidential deep learning', 'Uncertainty estimation', 'Epistemic uncertainty']","""Deterministic neural networks (NNs) are increasingly being deployed in safety critical domains, where calibrated, robust and efficient measures of uncertainty are crucial. While it is possible to train regression networks to output the parameters of a probability distribution by maximizing a Gaussian likelihood function, the resulting model remains oblivious to the underlying confidence of its predictions. In this paper, we propose a novel method for training deterministic NNs to not only estimate the desired target but also the associated evidence in support of that target. We accomplish this by placing evidential priors over our original Gaussian likelihood function and training our NN to infer the hyperparameters of our evidential distribution. We impose priors during training such that the model is penalized when its predicted evidence is not aligned with the correct output. Thus the model estimates not only the probabilistic mean and variance of our target but also the underlying uncertainty associated with each of those parameters. We observe that our evidential regression method learns well-calibrated measures of uncertainty on various benchmarks, scales to complex computer vision tasks, and is robust to adversarial input perturbations. ""","""This paper presents a method for providing uncertainty for deep learning regressors through assigning a notion of evidence to the predictions. This is done by putting priors on the parameters of the Gaussian outputs of the model and estimating these via an empirical Bayes-like optimization. The reviewers in general found the methodology sensible although incremental in light of Sensoy et al. and Malinin & Gales but found the experiments thorough. A comment on the paper pointed out that the approach was very similar to something presented in the thesis of Malinin (it seems unfair to expect the authors to have been aware of this, but the thesis should be cited and not just the paper which is a different contribution). In discussion, one reviewer raised their score from weak reject to weak accept but the highest scoring reviewer explicitly was not willing to champion the paper and raise their score to accept. Thus the recommendation here is to reject. Taking the reviewer feedback into account, incorporating the proposed changes and adding more careful treatment of related work would make this a much stronger submission to a future conference."""
322,"""Lossless Data Compression with Transformer""","['data compression', 'transformer']","""Transformers have replaced long-short term memory and other recurrent neural networks variants in sequence modeling. It achieves state-of-the-art performance on a wide range of tasks related to natural language processing, including language modeling, machine translation, and sentence representation. Lossless compression is another problem that can benefit from better sequence models. It is closely related to the problem of online learning of language models. But, despite this ressemblance, it is an area where purely neural network based methods have not yet reached the compression ratio of state-of-the-art algorithms. In this paper, we propose a Transformer based lossless compression method that match the best compression ratio for text. Our approach is purely based on neural networks and does not rely on hand-crafted features as other lossless compression algorithms. We also provide a thorough study of the impact of the different components of the Transformer and its training on the compression ratio.""","""The paper proposes to use transformers to do lossless data compression. The idea is simple and straightforward (with adding n-gram inputs). The initial submission considered one dataset, a new dataset was added in the rebuttal. Still, there is no runtime in the experiments (and Transformers can take a lot of time to train). Since this is more an experimental paper, this is crucial (and the improvements reports are very small and it is difficult to judge if there are significant). Overall, there was a positive discussion between the authors and the reviewers. The reviewers commented that concerns have been addressed, but did not change the evaluation which is unanimous reject. """
323,"""Refining the variational posterior through iterative optimization""","['uncertainty estimation', 'variational inference', 'auxiliary variables', 'Bayesian neural networks']","""Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks. A key challenge of variational inference is to approximate the posterior over model parameters with a distribution that is simpler and tractable yet sufficiently expressive. In this work, we propose a method for training highly flexible variational distributions by starting with a coarse approximation and iteratively refining it. Each refinement step makes cheap, local adjustments and only requires optimization of simple variational families. We demonstrate theoretically that our method always improves a bound on the approximation (the Evidence Lower BOund) and observe this empirically across a variety of benchmark tasks. In experiments, our method consistently outperforms recent variational inference methods for deep learning in terms of log-likelihood and the ELBO. We see that the gains are further amplified on larger scale models, significantly outperforming standard VI and deep ensembles on residual networks on CIFAR10.""","""In this paper a method for refining the variational approximation is proposed. The reviewers liked the contribution but a number reservations such as missing reference made the paper drop below the acceptance threshold. The authors are encouraged to modify paper and send to next conference. Reject. """
324,"""Adaptive Adversarial Imitation Learning""","['Imitation Learning', 'Reinforcement Learning']","""We present the ADaptive Adversarial Imitation Learning (ADAIL) algorithm for learning adaptive policies that can be transferred between environments of varying dynamics, by imitating a small number of demonstrations collected from a single source domain. This problem is important in robotic learning because in real world scenarios 1) reward functions are hard to obtain, 2) learned policies from one domain are difficult to deploy in another due to varying source to target domain statistics, 3) collecting expert demonstrations in multiple environments where the dynamics are known and controlled is often infeasible. We address these constraints by building upon recent advances in adversarial imitation learning; we condition our policy on a learned dynamics embedding and we employ a domain-adversarial loss to learn a dynamics-invariant discriminator. The effectiveness of our method is demonstrated on simulated control tasks with varying environment dynamics and the learned adaptive agent outperforms several recent baselines.""","""This paper extends adversarial imitation learning to an adaptive setting where environment dynamics change frequently. The authors propose a novel approach with pragmatic design choices to address the challenges that arise in this setting. Several questions and requests for clarification were addressed during the reviewing phase. The paper remains borderline after the rebuttal. Remaining concerns include the size of the algorithmic or conceptual contribution of the paper."""
325,"""AutoQ: Automated Kernel-Wise Neural Network Quantization ""","['AutoML', 'Kernel-Wise Neural Networks Quantization', 'Hierarchical Deep Reinforcement Learning']","""Network quantization is one of the most hardware friendly techniques to enable the deployment of convolutional neural networks (CNNs) on low-power mobile devices. Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy. The quantization bitwidth or bit number (QBN) directly decides the inference accuracy, latency, energy and hardware overhead. To effectively reduce the redundancy and accelerate CNN inferences, various weight kernels should be quantized with different QBNs. However, prior works use only one QBN to quantize each convolutional layer or the entire CNN, because the design space of searching a QBN for each weight kernel is too large. The hand-crafted heuristic of the kernel-wise QBN search is so sophisticated that domain experts can obtain only sub-optimal results. It is difficult for even deep reinforcement learning (DRL) DDPG-based agents to find a kernel-wise QBN configuration that can achieve reasonable inference accuracy. In this paper, we propose a hierarchical-DRL-based kernel-wise network quantization technique, AutoQ, to automatically search a QBN for each weight kernel, and choose another QBN for each activation layer. Compared to the models quantized by the state-of-the-art DRL-based schemes, on average, the same models quantized by AutoQ reduce the inference latency by 54.06%, and decrease the inference energy consumption by 50.69%, while achieving the same inference accuracy.""","""This paper proposes a network quantization method which is based on kernel-level quantization. The extension from layer-level to kernel-level is straightforward, and so the novelty is somewhat limited given its similarity with HAQ. Nevertheless, experimental results demonstrate its efficiency in real applications. The paper can be improved by clarifying some experimental details, and have further discussions on its relationship with HAQ."""
326,"""Learnable Group Transform For Time-Series""","['Group Transform', 'Time-Frequency Representation', 'Wavelet Transform', 'Group Theory', 'Representation Theory', 'Time-Series']","""We propose to undertake the problem of representation learning for time-series by considering a Group Transform approach. This framework allows us to, first, generalize classical time-frequency transformations such as the Wavelet Transform, and second, to enable the learnability of the representation. While the creation of the Wavelet Transform filter-bank relies on the sampling of the affine group in order to transform the mother filter, our approach allows for non-linear transformations of the mother filter by introducing the group of strictly increasing and continuous functions. The transformations induced by such a group enable us to span a larger class of signal representations. The sampling of this group can be optimized with respect to a specific loss and function and thus cast into a Deep Learning architecture. The experiments on diverse time-series datasets demonstrate the expressivity of this framework which competes with state-of-the-art performances.""","""This paper received two weak rejects (3) and one accept (8). In the discussion phase, the paper received significant discussion between the authors and reviewers and internally between the reviewers (which is tremendously appreciated). In particular, there was a discussion about the novelty of the contribution and ideas (AnonReviewer3 felt that the ideas presented provided an interesting new thought-provoking perspective) and the strength of the empirical results. None of the reviewers felt really strongly about rejecting and would not argue strongly against acceptance. However, AnonReviewer3 was not prepared to really champion the paper for acceptance due to a lack of confidence. Unfortunately, the paper falls just below the bar for acceptance. Taking the reviewer feedback into account and adding careful new experiments with strong results would make this a much stronger paper for a future submission."""
327,"""Zero-Shot Policy Transfer with Disentangled Attention""","['Transfer Learning', 'Reinforcement Learning', 'Attention', 'Domain Adaptation', 'Representation Learning', 'Feature Extraction']","""Domain adaptation is an open problem in deep reinforcement learning (RL). Often, agents are asked to perform in environments where data is difficult to obtain. In such settings, agents are trained in similar environments, such as simulators, and are then transferred to the original environment. The gap between visual observations of the source and target environments often causes the agent to fail in the target environment. We present a new RL agent, SADALA (Soft Attention DisentAngled representation Learning Agent). SADALA first learns a compressed state representation. It then jointly learns to ignore distracting features and solve the task presented. SADALA's separation of important and unimportant visual features leads to robust domain transfer. SADALA outperforms both prior disentangled-representation based RL and domain randomization approaches across RL environments (Visual Cartpole and DeepMind Lab).""","""This paper proposes a new method for zero-shot policy transfer in RL. The authors propose learning the policy over a disentangled representation that is augmented with attention. Hence, the paper is a simple modification of an existing approach (DARLA). The reviewers agreed that the novelty of the proposed approach and the experimental evaluation are limited. For this reason I recommend rejection."""
328,"""Fantastic Generalization Measures and Where to Find Them""","['Generalization', 'correlation', 'experiments']","""Generalization of deep networks has been intensely researched in recent years, resulting in a number of theoretical bounds and empirically motivated measures. However, most papers proposing such measures only study a small set of models, leaving open the question of whether these measures are truly useful in practice. We present the first large scale study of generalization bounds and measures in deep networks. We train over two thousand CIFAR-10 networks with systematic changes in important hyper-parameters. We attempt to uncover potential causal relationships between each measure and generalization, by using rank correlation coefficient and its modified forms. We analyze the results and show that some of the studied measures are very promising for further research.""","""This paper provides a valuable survey, summary, and empirical comparison of many generalization quantities from throughout the literature. It is comprehensive, thorough, and will be useful to a variety of researchers (both theoretical and applied)."""
329,"""Combining graph and sequence information to learn protein representations""","['NLP', 'Protein', 'Representation Learning']","""Computational methods that infer the function of proteins are key to understanding life at the molecular level. In recent years, representation learning has emerged as a powerful paradigm to discover new patterns among entities as varied as images, words, speech, molecules. In typical representation learning, there is only one source of data or one level of abstraction at which the learned representation occurs. However, proteins can be described by their primary, secondary, tertiary, and quaternary structure or even as nodes in protein-protein interaction networks. Given that protein function is an emergent property of all these levels of interactions in this work, we learn joint representations from both amino acid sequence and multilayer networks representing tissue-specific protein-protein interactions. Using these representations, we train machine learning models that outperform existing methods on the task of tissue-specific protein function prediction on 10 out of 13 tissues. Furthermore, we outperform existing methods by 19% on average.""","""The paper presents a linear classifier based on a concatenation of two types of features for protein function prediction. The two features are constructed using methods from previous papers, based on peptide sequence and protein-protein interactions. All the reviewers agree that the problem is an important one, but the paper as it is presented does not provide any methodological advance, and weak empirical evidence of better protein function prediction. Therefore the paper would require a major revision before being suitable for ICLR. """
330,"""Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization""",[],"""As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel. Alistarh et al. (2017) describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs. For the first variant, QSGD, they provide strong theoretical guarantees. For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks. Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf.""","""This paper proposes a communication-efficient data-parallel SGD with quantization. The method bridges the gap between theory and practice. The QSGD method has theoretical guarantees while QSGDinf doesn't, but the latter gives better result. This paper proves stronger results for QSGD using a different quantization scheme which matches the performance of QSGDinf. The reviewers find issues with the approach and have pointed some of them out. During the discussion period, we did discuss if reviewers would like to raise their scores. Unfortunately, they still have unresolved issues (see R1's comment). R1 made another comment recently that they were unable to add to their review: ""The proposed algorithm and the theoretical analysis does not include momentum. However, in the experiments, it is clearly stated that momentum (with a factor of 0.9) is used. Thus, it is unclear whether the experiments really validate the theoretical guarantees. And, it is also unclear how momentum is added for both NUQSGD and EF-SGD, since momentum is not mentioned in Algorithm 1 in this paper, or the paper of QSGD, or the paper of EF-SignSGD. (There is a version of SignSGD with momentum *without* error feedback, called SIGNUM)."" With the current score, the paper does not make the cut for ICLR, but I encourage the authors to revise the paper based on reviewers' feedback. For now, I recommend to reject this paper."""
331,"""Training Deep Networks with Stochastic Gradient Normalized by Layerwise Adaptive Second Moments""","['deep learning', 'optimization', 'SGD', 'Adam', 'NovoGrad', 'large batch training']","""We propose NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay. In our experiments on neural networks for image classification, speech recognition, machine translation, and language modeling, it performs on par or better than well tuned SGD with momentum and Adam/AdamW. Additionally, NovoGrad (1) is robust to the choice of learning rate and weight initialization, (2) works well in a large batch setting, and (3) has two times smaller memory footprint than Adam.""","""The paper presented an adaptive stochastic gradient descent method with layer-wise normalization and decoupled weight decay and justified it on a variety of tasks. The main concern for this paper is the novelty is not sufficient. The method is a combination of LARS and AdamW with slight modifications. Although the paper has good empirically evaluations, theoretical convergence proof would make the paper more convincing. """
332,"""Adaptive Online Planning for Continual Lifelong Learning""","['reinforcement learning', 'model predictive control', 'planning', 'model based', 'model free', 'uncertainty', 'computation']","""We study learning control in an online lifelong learning scenario, where mistakes can compound catastrophically into the future and the underlying dynamics of the environment may change. Traditional model-free policy learning methods have achieved successes in difficult tasks due to their broad flexibility, and capably condense broad experiences into compact networks, but struggle in this setting, as they can activate failure modes early in their lifetimes which are difficult to recover from and face performance degradation as dynamics change. On the other hand, model-based planning methods learn and adapt quickly, but require prohibitive levels of computational resources. Under constrained computation limits, the agent must allocate its resources wisely, which requires the agent to understand both its own performance and the current state of the environment: knowing that its mastery over control in the current dynamics is poor, the agent should dedicate more time to planning. We present a new algorithm, Adaptive Online Planning (AOP), that achieves strong performance in this setting by combining model-based planning with model-free learning. By measuring the performance of the planner and the uncertainty of the model-free components, AOP is able to call upon more extensive planning only when necessary, leading to reduced computation times. We show that AOP gracefully deals with novel situations, adapting behaviors and policies effectively in the face of unpredictable changes in the world -- challenges that a continual learning agent naturally faces over an extended lifetime -- even when traditional reinforcement learning methods fail.""","""A new setting for lifelong learning is analyzed and a new method, AOP, is introduced, which combines a model-free with a model-based approach to deal with this setting. While the idea is interesting, the main claims are insufficiently demonstrated. A theoretical justification is missing, and the experiments alone are not rigorous enough to draw strong conclusions. The three environments are rather simplistic and there are concerns about the statistical significance, for at least some of the experiments."""
333,"""Reparameterized Variational Divergence Minimization for Stable Imitation""","['Imitation Learning', 'Reinforcement Learning', 'Adversarial Learning', 'Learning from Demonstration']","""State-of-the-art results in imitation learning are currently held by adversarial methods that iteratively estimate the divergence between student and expert policies and then minimize this divergence to bring the imitation policy closer to expert behavior. Analogous techniques for imitation learning from observations alone (without expert action labels), however, have not enjoyed the same ubiquitous successes. Recent work in adversarial methods for generative models has shown that the measure used to judge the discrepancy between real and synthetic samples is an algorithmic design choice, and that different choices can result in significant differences in model performance. Choices including Wasserstein distance and various pseudo-formula -divergences have already been explored in the adversarial networks literature, while more recently the latter class has been investigated for imitation learning. Unfortunately, we find that in practice this existing imitation-learning framework for using pseudo-formula -divergences suffers from numerical instabilities stemming from the combination of function approximation and policy-gradient reinforcement learning. In this work, we alleviate these challenges and offer a reparameterization of adversarial imitation learning as pseudo-formula -divergence minimization before further extending the framework to handle the problem of imitation from observations only. Empirically, we demonstrate that our design choices for coupling imitation learning and pseudo-formula -divergences are critical to recovering successful imitation policies. Moreover, we find that with the appropriate choice of pseudo-formula -divergence, we can obtain imitation-from-observation algorithms that outperform baseline approaches and more closely match expert performance in continous-control tasks with low-dimensional observation spaces. With high-dimensional observations, we still observe a significant gap with and without action labels, offering an interesting avenue for future work.""","""The submission performs empirical analysis on f-VIM (Ke, 2019), a method for imitation learning by f-divergence minimization. The paper especially focues on a state-only formulation akin to GAILfO (Torabi et al., 2018b). The main contributions are: 1) The paper identifies numerical proplems with the output activations of f-VIM and suggest a scheme to choose them such that the resulting rewards are bounded. 2) A regularizer that was proposed by Mescheder et al. (2018) for GANs is tested in the adversarial imitation learning setting. 3) In order to handle state-only demonstrations, the technique of GAILfO is applied to f-VIM (then denoted f-VIMO) which inputs state-nextStates instead of state-actions to the discriminator. The reviewers found the submitted paper hard to follow, which suggests a revision might make more apparent the author's contributions in later submissions of this work. """
334,"""Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets""","['Sparse Network', 'Model Compression', 'Adversarial Training']","""Deep neural nets (DNNs) compression is crucial for adaptation to mobile devices. Though many successful algorithms exist to compress naturally trained DNNs, developing efficient and stable compression algorithms for robustly trained DNNs remains widely open. In this paper, we focus on a co-design of efficient DNN compression algorithms and sparse neural architectures for robust and accurate deep learning. Such a co-design enables us to advance the goal of accommodating both sparsity and robustness. With this objective in mind, we leverage the relaxed augmented Lagrangian based algorithms to prune the weights of adversarially trained DNNs, at both structured and unstructured levels. Using a Feynman-Kac formalism principled robust and sparse DNNs, we can at least double the channel sparsity of the adversarially trained ResNet20 for CIFAR10 classification, meanwhile, improve the natural accuracy by 8.69\% and the robust accuracy under the benchmark 20 iterations of IFGSM attack by 5.42\%.""","""The paper is rejected based on unanimous reviews."""
335,"""Ternary MobileNets via Per-Layer Hybrid Filter Banks""","['Model compression', 'ternary quantization', 'energy-efficient models']","""MobileNets family of computer vision neural networks have fueled tremendous progress in the design and organization of resource-efficient architectures in recent years. New applications with stringent real-time requirements in highly constrained devices require further compression of MobileNets-like already computeefficient networks. Model quantization is a widely used technique to compress and accelerate neural network inference and prior works have quantized MobileNets to 4 6 bits albeit with a modest to significant drop in accuracy. While quantization to sub-byte values (i.e. precision 8 bits) has been valuable, even further quantization of MobileNets to binary or ternary values is necessary to realize significant energy savings and possibly runtime speedups on specialized hardware, such as ASICs and FPGAs. Under the key observation that convolutional filters at each layer of a deep neural network may respond differently to ternary quantization, we propose a novel quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets. The layer-wise hybrid filter banks essentially combine the strengths of full-precision and ternary weight filters to derive a compact, energy-efficient architecture for MobileNets. Using this proposed quantization method, we quantized a substantial portion of weight filters of MobileNets to ternary values resulting in 27.98% savings in energy, and a 51.07% reduction in the model size, while achieving comparable accuracy and no degradation in throughput on specialized hardware in comparison to the baseline full-precision MobileNets.""","""The paper presents a quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets. The paper is well-written. However, it is incremental. Moreover, empirical results are not convincing enough. Experiments are only performed on ImageNet. Comparison on more datasets and more model architectures should be performed."""
336,"""Neural Communication Systems with Bandwidth-limited Channel""","['variational inference', 'joint coding', 'bandwidth-limited channel', 'deep learning', 'representation learning', 'compression']","""Reliably transmitting messages despite information loss due to a noisy channel is a core problem of information theory. One of the most important aspects of real world communication is that it may happen at varying levels of information transfer. The bandwidth-limited channel models this phenomenon. In this study we consider learning joint coding with the bandwidth-limited channel. Although, classical results suggest that it is asymptotically optimal to separate the sub-tasks of compression (source coding) and error correction (channel coding), it is well known that for finite block-length problems, and when there are restrictions to the computational complexity of coding, this optimality may not be achieved. Thus, we empirically compare the performance of joint and separate systems, and conclude that joint systems outperform their separate counterparts when coding is performed by flexible learnable function approximators such as neural networks. Specifically, we cast the joint communication problem as a variational learning problem. To facilitate this, we introduce a differentiable and computationally efficient version of this channel. We show that our design compensates for the loss of information by two mechanisms: (i) missing information is modelled by a prior model incorporated in the channel model, and (ii) sampling from the joint model is improved by auxiliary latent variables in the decoder. Experimental results justify the validity of our design decisions through improved distortion and FID scores.""","""There was some support for this paper, but it was on the borderline and significant concerns were raised. It did not compare to the exiting related literature on communications, compression, and coding. There were significant issues with clarity."""
337,"""The Intriguing Effects of Focal Loss on the Calibration of Deep Neural Networks""",[],"""Miscalibration -- a mismatch between a model's confidence and its correctness -- of Deep Neural Networks (DNNs) makes their predictions hard for downstream components to trust. Ideally, we want networks to be accurate, calibrated and confident. Temperature scaling, the most popular calibration approach, will calibrate a DNN without affecting its accuracy, but it will also make its correct predictions under-confident. In this paper, we show that replacing the widely used cross-entropy loss with focal loss allows us to learn models that are already very well calibrated. When combined with temperature scaling, focal loss, whilst preserving accuracy and yielding state-of-the-art calibrated models, also preserves the confidence of the model's correct predictions, which is extremely desirable for downstream tasks. We provide a thorough analysis of the factors causing miscalibration, and use the insights we glean from this to theoretically justify the empirically excellent performance of focal loss. We perform extensive experiments on a variety of computer vision (CIFAR-10/100) and NLP (SST, 20 Newsgroup) datasets, and with a wide variety of different network architectures, and show that our approach achieves state-of-the-art accuracy and calibration in almost all cases.""","""The paper investigates the effect of focal loss on calibration of neural nets. On one hand, the reviewers agree that this paper is well-written and the empirical results are interesting. On the other hand, the reviewers felt that there could be better evaluation of the effect of calibration on downstream tasks, and better justification for the choice of optimal gamma (e.g. on a simpler problem setup). I encourage the others to revise the draft and resubmit to a different venue. """
338,"""High performance RNNs with spiking neurons""","['RNNs', 'Spiking neurons', 'Neuromorphics']",""" The increasing need for compact and low-power computing solutions for machine learning applications has triggered a renaissance in the study of energy-efficient neural network accelerators. In particular, in-memory computing neuromorphic architectures have started to receive substantial attention from both academia and industry. However, most of these architectures rely on spiking neural networks, which typically perform poorly compared to their non-spiking counterparts in terms of accuracy. In this paper, we propose a new adaptive spiking neuron model that can also be abstracted as a low-pass filter. This abstraction enables faster and better training of spiking networks using back-propagation, without simulating spikes. We show that this model dramatically improves the inference performance of a recurrent neural network and validate it with three complex spatio-temporal learning tasks: the temporal addition task, the temporal copying task, and a spoken-phrase recognition task. Application of these results will lead to the development of powerful spiking models for neuromorphic hardware that solve relevant edge-computing and Internet-of-Things applications with high accuracy and ultra-low power consumption.""","""This paper presents a new mechanism to train spiking neural networks that is more suitable for neuromorphic chips. While the text is well written and the experiments provide an interesting analysis, the relevance of the proposed neuron models to the ICLR/ML community seems small at this point. My recommendation is that this paper should be submitted to a more specialised conference/workshop dedicated to hardware methods."""
339,"""DASGrad: Double Adaptive Stochastic Gradient""","['stochastic convex optimization', 'adaptivity', 'online learning', 'transfer learning']","""Adaptive moment methods have been remarkably successful for optimization under the presence of high dimensional or sparse gradients, in parallel to this, adaptive sampling probabilities for SGD have allowed optimizers to improve convergence rates by prioritizing examples to learn efficiently. Numerous applications in the past have implicitly combined adaptive moment methods with adaptive probabilities yet the theoretical guarantees of such procedures have not been explored. We formalize double adaptive stochastic gradient methods DASGrad as an optimization technique and analyze its convergence improvements in a stochastic convex optimization setting, we provide empirical validation of our findings with convex and non convex objectives. We observe that the benefits of the method increase with the model complexity and variability of the gradients, and we explore the resulting utility in extensions to transfer learning. ""","""The reviewers were confused by several elements of the paper, as mentioned in their reviews and, despite the authors' rebuttal, still have several areas of concerns. I encourage you to read the reviews carefully and address the reviewers' concerns for a future submission."""
340,"""Physics-as-Inverse-Graphics: Unsupervised Physical Parameter Estimation from Video""",[],"""We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available. Existing physical scene understanding methods require either object state supervision, or do not integrate with differentiable physics to learn interpretable system parameters and states. We address this problem through a \textit{physics-as-inverse-graphics} approach that brings together vision-as-inverse-graphics and differentiable physics engines, where objects and explicit state and velocity representations are discovered by the model. This framework allows us to perform long term extrapolative video prediction, as well as vision-based model-predictive control. Our approach significantly outperforms related unsupervised methods in long-term future frame prediction of systems with interacting objects (such as ball-spring or 3-body gravitational systems), due to its ability to build dynamics into the model as an inductive bias. We further show the value of this tight vision-physics integration by demonstrating data-efficient learning of vision-actuated model-based control for a pendulum system. We also show that the controller's interpretability provides unique capabilities in goal-driven control and physical reasoning for zero-data adaptation.""","""The submission presents an approach to estimating physical parameters from video. The approach is sensible and is presented fairly well. The main criticism is that the approach is only demonstrated in simplistic ""toy"" settings. Nevertheless, the reviewers recommend (weakly) accepting the paper and the AC concurs."""
341,"""Dynamics-Aware Embeddings""","['representation learning', 'reinforcement learning', 'rl']","""In this paper we consider self-supervised representation learning to improve sample efficiency in reinforcement learning (RL). We propose a forward prediction objective for simultaneously learning embeddings of states and actions. These embeddings capture the structure of the environment's dynamics, enabling efficient policy learning. We demonstrate that our action embeddings alone improve the sample efficiency and peak performance of model-free RL on control from low-dimensional states. By combining state and action embeddings, we achieve efficient learning of high-quality policies on goal-conditioned continuous control from pixel observations in only 1-2 million environment steps.""","""This paper studies how self-supervised objectives can improve representations for efficient RL. The reviewers are generally in agreement that the method is interesting, the paper is well-written, and the results are convincing. The paper should be accepted."""
342,"""Sampling-Free Learning of Bayesian Quantized Neural Networks""","['Bayesian neural networks', 'Quantized neural networks']","""Bayesian learning of model parameters in neural networks is important in scenarios where estimates with well-calibrated uncertainty are important. In this paper, we propose Bayesian quantized networks (BQNs), quantized neural networks (QNNs) for which we learn a posterior distribution over their discrete parameters. We provide a set of efficient algorithms for learning and prediction in BQNs without the need to sample from their parameters or activations, which not only allows for differentiable learning in quantized models but also reduces the variance in gradients estimation. We evaluate BQNs on MNIST, Fashion-MNIST and KMNIST classification datasets compared against bootstrap ensemble of QNNs (E-QNN). We demonstrate BQNs achieve both lower predictive errors and better-calibrated uncertainties than E-QNN (with less than 20% of the negative log-likelihood).""","""This paper proposes Bayesian quantized networks and efficient algorithms for learning and prediction of these networks. The reviewers generally thought that this was a novel and interesting paper. There were a few concerns about the clarity of parts of the paper and the experimental results. These concerns were addressed during the discussion phase, and the reviewers agree that the paper should be accepted."""
343,"""Attack-Resistant Federated Learning with Residual-based Reweighting""","['robust federated learning', 'backdoor attacks']","""Federated learning has a variety of applications in multiple domains by utilizing private training data stored on different devices. However, the aggregation process in federated learning is highly vulnerable to adversarial attacks so that the global model may behave abnormally under attacks. To tackle this challenge, we present a novel aggregation algorithm with residual-based reweighting to defend federated learning. Our aggregation algorithm combines repeated median regression with the reweighting scheme in iteratively reweighted least squares. Our experiments show that our aggression algorithm outperforms other alternative algorithms in the presence of label-flipping, backdoor, and Gaussian noise attacks. We also provide theoretical guarantees for our aggregation algorithm. ""","""The paper proposes an aggregation algorithm for federated learning that is robust against label-flipping, backdoor, and Gaussian noise attacks. The reviewers agree that the paper presents an interesting and novel method, however the reviewers also agree that the theory was difficult to understand and that the success of the methodology may be highly dependent on design choices and difficult-to-tune hyperparameters. """
344,"""From Inference to Generation: End-to-end Fully Self-supervised Generation of Human Face from Speech""","['Multi-modal learning', 'Self-supervised learning', 'Voice profiling', 'Conditional GANs']","""This work seeks the possibility of generating the human face from voice solely based on the audio-visual data without any human-labeled annotations. To this end, we propose a multi-modal learning framework that links the inference stage and generation stage. First, the inference networks are trained to match the speaker identity between the two different modalities. Then the pre-trained inference networks cooperate with the generation network by giving conditional information about the voice. The proposed method exploits the recent development of GANs techniques and generates the human face directly from the speech waveform making our system fully end-to-end. We analyze the extent to which the network can naturally disentangle two latent factors that contribute to the generation of a face image one that comes directly from a speech signal and the other that is not related to it and explore whether the network can learn to generate natural human face image distribution by modeling these factors. Experimental results show that the proposed network can not only match the relationship between the human face and speech, but can also generate the high-quality human face sample conditioned on its speech. Finally, the correlation between the generated face and the corresponding speech is quantitatively measured to analyze the relationship between the two modalities.""","""The authors propose a conditional GAN-based approach for generating faces consistent with given input speech. The technical novelty is not large, as the approach is mainly putting together existing ideas, but the application is a fairly new one and the experiments and results are convincing. The approach might also have broader applicability beyond this task."""
345,"""SoftAdam: Unifying SGD and Adam for better stochastic gradient descent""","['Optimization', 'SGD', 'Adam', 'Generalization', 'Deep Learning']","""Abstract Stochastic gradient descent (SGD) and Adam are commonly used to optimize deep neural networks, but choosing one usually means making tradeoffs between speed, accuracy and stability. Here we present an intuition for why the tradeoffs exist as well as a method for unifying the two in a continuous way. This makes it possible to control the way models are trained in much greater detail. We show that for default parameters, the new algorithm equals or outperforms SGD and Adam across a range of models for image classification tasks and outperforms SGD for language modeling tasks.""","""The reviewers all agreed that the proposed modification was minor. I encourage the authors to pursue in this direction, as they mentioned in their rebuttal, before resubmitting to another conference."""
346,"""Learning Robust Representations via Multi-View Information Bottleneck""","['Information Bottleneck', 'Multi-View Learning', 'Representation Learning', 'Information Theory']","""The information bottleneck principle provides an information-theoretic method for representation learning, by training an encoder to retain all information which is relevant for predicting the label while minimizing the amount of other, excess information in the representation. The original formulation, however, requires labeled data to identify the superfluous information. In this work, we extend this ability to the multi-view unsupervised setting, where two views of the same underlying entity are provided but the label is unknown. This enables us to identify superfluous information as that not shared by both views. A theoretical analysis leads to the definition of a new multi-view model that produces state-of-the-art results on the Sketchy dataset and label-limited versions of the MIR-Flickr dataset. We also extend our theory to the single-view setting by taking advantage of standard data augmentation techniques, empirically showing better generalization capabilities when compared to common unsupervised approaches for representation learning.""","""This paper extends the information bottleneck method to the unsupervised representation learning under the multi-view assumption. The work couples the multi-view InfoMax principle with the information bottleneck principle to derive an objective which encourages the representations to contain only the information shared by both views and thus eliminate the effect of independent factors of variations. Recent advances in estimating lower-bounds on mutual information are applied to perform approximate optimisation in practice. The authors empirically validate the proposed approach in two standard multi-view settings. Overall, the reviewers found the presentation clear, and the paper well written and well motivated. The issues raised by the reviewers were addressed in the rebuttal and we feel that the work is well suited for ICLR. We ask the authors to carefully integrate the detailed comments from the reviewers into the manuscript. Finally, the work should investigate and briefly establish a connection to [1]. [1] Wang et al. ""Deep Multi-view Information Bottleneck"". International Conference on Data Mining 2019 (pseudo-url)"""
347,"""Adapting Behaviour for Learning Progress""","['adaptation', 'behaviour', 'reinforcement learning', 'modulated behaviour', 'exploration', 'deep reinforcement learning']","""Determining what experience to generate to best facilitate learning (i.e. exploration) is one of the distinguishing features and open challenges in reinforcement learning. The advent of distributed agents that interact with parallel instances of the environment has enabled larger scale and greater flexibility, but has not removed the need to tune or tailor exploration to the task, because the ideal data for the learning algorithm necessarily depends on its process of learning. We propose to dynamically adapt the data generation by using a non-stationary multi-armed bandit to optimize a proxy of the learning progress. The data distribution is controlled via modulating multiple parameters of the policy (such as stochasticity, consistency or optimism) without significant overhead. The adaptation speed of the bandit can be increased by exploiting the factored modulation structure. We demonstrate on a suite of Atari 2600 games how this unified approach produces results comparable to per-task tuning at a fraction of the cost.""","""The paper introduces a non-stationary bandit strategy for adapting the exploration rate in Deep RL algorithms. They consider exploration algorithms with a tunable parameter (e.g. the epsilon probability in epsilon-greedy) and attempt to adjust this parameter in an online fashion using a proxy to the learning progress. The proposed approach is empirically compared with using fixed exploration parameters and adjusting the parameter using a bandit strategy that doesn't model the learning process. Unfortunately, the proposed approach is not theoretically grounded and the experiments lack comparison with good baselines in order to be convincing. A comparison with other, provably efficient, non-stationary bandit algorithms such as exponential weight methods (Besbes et al 2014) or Thompson sampling (Raj & Kalyani 2017), which are cited in the paper, is missing. Moreover, given the whole set of results and how they are presented, the improvement due to the proposed method is not clear. In light of these concerns I recommend to reject this paper."""
348,"""On PAC-Bayes Bounds for Deep Neural Networks using the Loss Curvature""","['PAC-Bayes', 'Hessian', 'curvature', 'lower bound', 'Variational Inference']","""We investigate whether it's possible to tighten PAC-Bayes bounds for deep neural networks by utilizing the Hessian of the training loss at the minimum. For the case of Gaussian priors and posteriors we introduce a Hessian-based method to obtain tighter PAC-Bayes bounds that relies on closed form solutions of layerwise subproblems. We thus avoid commonly used variational inference techniques which can be difficult to implement and time consuming for modern deep architectures. We conduct a theoretical analysis that links the random initialization, minimum, and curvature at the minimum of a deep neural network to limits on what is provable about generalization through PAC-Bayes. Through careful experiments we validate our theoretical predictions and analyze the influence of the prior mean, prior covariance, posterior mean and posterior covariance on obtaining tighter bounds. ""","""The paper computes an ""approximate"" generalization bound based on loss curvature. Several expert reviewers found a long list of issues, including missing related work and a sloppy mix of formal statements and heuristics, without proper accounting of what could be gleaned from some many heuristic steps. Ultimately, the paper needs to be rewritten and re-reviewed. """
349,"""Encoder-Agnostic Adaptation for Conditional Language Generation""","['NLP', 'generation', 'pretraining']","""Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks. However, it is an open question how to use similar techniques for language generation. Early results in the encoder-agnostic setting have been mostly negative. In this work, we explore methods for adapting a pretrained language model to arbitrary conditional input. We observe that pretrained transformer models are sensitive to large parameter changes during tuning. Therefore, we propose an adaptation that directly injects arbitrary conditioning into self attention, an approach we call pseudo self attention. Through experiments on four diverse conditional text generation tasks, we show that this encoder-agnostic technique outperforms strong baselines, produces coherent generations, and is data-efficient.""","""This paper proposes a method to use a pretrained language model for language generation with arbitrary conditional input (images, text). The main idea, which is called pseudo self-attention, is to incorporate the conditioning input as a pseudo history to a pretrained transformer. Experiments on class-conditional generation, summarization, story generation, and image captioning show the benefit of the proposed approach. While I think that the proposed approach makes sense, especially for generation from multiple modalities, it would be useful to see the following comparison in the case of conditional generation from one modality (i.e., text-text such as in summarization and story generation). How does the proposed approach compare to a method that simply concatenates these input and output? In Figure 1(c), this would be having the encoder part be pretrained as well, as opposed to randomly initialized, which is possible if the input is also text. I believe this is what R2 is suggesting as well when they mentioned a GPT-2 style model, and I agree this is an important baseline. This is a borderline paper. However, due to space constraint and the above issues, I recommend to reject the paper."""
350,"""Multigrid Neural Memory""","['multigrid architecture', 'memory network', 'convolutional neural network']","""We introduce a novel architecture that integrates a large addressable memory space into the core functionality of a deep neural network. Our design distributes both memory addressing operations and storage capacity over many network layers. Distinct from strategies that connect neural networks to external memory banks, our approach co-locates memory with computation throughout the network structure. Mirroring recent architectural innovations in convolutional networks, we organize memory into a multiresolution hierarchy, whose internal connectivity enables learning of dynamic information routing strategies and data-dependent read/write operations. This multigrid spatial layout permits parameter-efficient scaling of memory size, allowing us to experiment with memories substantially larger than those in prior work. We demonstrate this capability on synthetic exploration and mapping tasks, where the network is able to self-organize and retain long-term memory for trajectories of thousands of time steps. On tasks decoupled from any notion of spatial geometry, such as sorting or associative recall, our design functions as a truly generic memory and yields results competitive with those of the recently proposed Differentiable Neural Computer.""","""This paper investigates convolutional LSTMs with a multi-grid structure. This idea in itself has very little innovation and the experimental results are not entirely convincing."""
351,"""Estimating counterfactual treatment outcomes over time through adversarially balanced representations""","['treatment effects over time', 'causal inference', 'counterfactual estimation']","""Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions. On a simulated model of tumour growth, with varying degree of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods.""","""Reviewers uniformly suggest acceptance. Please look carefully at reviewer comments and address in the camera-ready. Great work!"""
352,"""Flexible and Efficient Long-Range Planning Through Curious Exploration""","['Curiosity', 'Planning', 'Reinforcement Learning', 'Robotics', 'Exploration']","""Identifying algorithms that flexibly and efficiently discover temporally-extended multi-phase plans is an essential next step for the advancement of robotics and model-based reinforcement learning. The core problem of long-range planning is finding an efficient way to search through the tree of possible action sequences which, if left unchecked, grows exponentially with the length of the plan. Existing non-learned planning solutions from the Task and Motion Planning (TAMP) literature rely on the existence of logical descriptions for the effects and preconditions for actions. This constraint allows TAMP methods to efficiently reduce the tree search problem but limits their ability to generalize to unseen and complex physical environments. In contrast, deep reinforcement learning (DRL) methods use flexible neural-network-based function approximators to discover policies that generalize naturally to unseen circumstances. However, DRL methods have had trouble dealing with the very sparse reward landscapes inherent to long-range multi-step planning situations. Here, we propose the Curious Sample Planner (CSP), which fuses elements of TAMP and DRL by using a curiosity-guided sampling strategy to learn to efficiently explore the tree of action effects. We show that CSP can efficiently discover interesting and complex temporally-extended plans for solving a wide range of physically realistic 3D tasks. In contrast, standard DRL and random sampling methods often fail to solve these tasks at all or do so only with a huge and highly variable number of training samples. We explore the use of a variety of curiosity metrics with CSP and analyze the types of solutions that CSP discovers. Finally, we show that CSP supports task transfer so that the exploration policies learned during experience with one task can help improve efficiency on related tasks.""","""The authors consider planning problems with sparse rewards. They propose an algorithm that performs planning based on an auxiliary reward given by a curiosity score. They test they approach on a range of tasks in simulated robotics environments and compare to model-free baselines. The reviewers mainly criticize the lack of competitive baselines; it comes as now surprise that the baselines presented in the paper do not perform well, as they make use of strictly less information of the problem. The authors were very active in the rebuttal period, however eventually did not fully manage to address the points raised by the reviewers. Although the paper proposes an interesting approach, I think this paper is below acceptance threshold. The experimental results lack baselines, Furthermore, critical details of the algorithm are missing / hard to find."""
353,"""Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations""","['Variational Autoencoder', 'Clustering', 'Generative model']","""Clustering high-dimensional data, such as images or biological measurements, is a long-standing problem and has been studied extensively. Recently, Deep Clustering gained popularity due to the non-linearity of neural networks, which allows for flexibility in fitting the specific peculiarities of complex data. Here we introduce the Mixture-of-Experts Similarity Variational Autoencoder (MoE-Sim-VAE), a novel generative clustering model. The model can learn multi-modal distributions of high-dimensional data and use these to generate realistic data with high efficacy and efficiency. MoE-Sim-VAE is based on a Variational Autoencoder (VAE), where the decoder consists of a Mixture-of-Experts (MoE) architecture. This specific architecture allows for various modes of the data to be automatically learned by means of the experts. Additionally, we encourage the latent representation of our model to follow a Gaussian mixture distribution and to accurately represent the similarities between the data points. We assess the performance of our model on synthetic data, the MNIST benchmark data set, and a challenging real-world task of defining cell subpopulations from mass cytometry (CyTOF) measurements on hundreds of different datasets. MoE-Sim-VAE exhibits superior clustering performance on all these tasks in comparison to the baselines and we show that the MoE architecture in the decoder reduces the computational cost of sampling specific data modes with high fidelity.""","""The paper proposes a VAE with a mixture-of-experts decoder for clustering and generation of high-dimensional data. Overall, the reviewers found the paper well-written and structured , but in post rebuttal discussion questioned the overall importance and interest of the work to the community. This is genuinely a borderline submission. However, the calibrated average score currently falls below the acceptance threshold, so Im recommending rejection, but strongly encouraging the authors to continue the work, better motivating the importance of the work, and resubmitting."""
354,"""On summarized validation curves and generalization""","['model selection', 'deep learning', 'early stopping', 'validation curves']","""The validation curve is widely used for model selection and hyper-parameter search with the curve usually summarized over all the training tasks. However, this summarization tends to lose the intricacies of the per-task curves and it isn't able to reflect if all the tasks are at their validation optimum even if the summarized curve might be. In this work, we explore this loss of information, how it affects the model at testing and how to detect it using interval plots. We propose two techniques as a proof-of-concept of the potential gain in the test performance when per-task validation curves are accounted for. Our experiments on three large datasets show up to a 2.5% increase (averaged over multiple trials) in the test accuracy rate when model selection uses the per-task validation maximums instead of the summarized validation maximum. This potential increase is not a result of any modification to the model but rather at what point of training the weights were selected from. This presents an exciting direction for new training and model selection techniques that rely on more than just averaged metrics. ""","""The reviewers reached a consensus that the paper is preliminary and has a very limited contribution. Therefore, I cannot recommend acceptance at this time."""
355,"""HaarPooling: Graph Pooling with Compressive Haar Basis""","['graph pooling', 'graph neural networks', 'tree', 'graph classification', 'graph regression', 'deep learning', 'Haar wavelet basis', 'fast Haar transforms']","""Deep Graph Neural Networks (GNNs) are instrumental in graph classification and graph-based regression tasks. In these tasks, graph pooling is a critical ingredient by which GNNs adapt to input graphs of varying size and structure. We propose a new graph pooling operation based on compressive Haar transforms, called HaarPooling. HaarPooling is computed following a chain of sequential clusterings of the input graph. The input of each pooling layer is transformed by the compressive Haar basis of the corresponding clustering. HaarPooling operates in the frequency domain by the synthesis of nodes in the same cluster and filters out fine detail information by compressive Haar transforms. Such transforms provide an effective characterization of the data and preserve the structure information of the input graph. By the sparsity of the Haar basis, the computation of HaarPooling is of linear complexity. The GNN with HaarPooling and existing graph convolution layers achieves state-of-the-art performance on diverse graph classification problems.""","""This paper presents a new graph pooling method, called HaarPooling. Based on the hierarchical HaarPooling, the graph classification problems can be solved under the graph neural network framework. One major concern of reviewers is the experiment design. Authors add a new real world dataset in revision. Another concern is computational performance. The main text did not give a comprehensive analysis and the rebuttal did not fully address these problems. Overall, this paper presents an interesting graph pooling approach for graph classification while the presentation needs further polish. Based on the reviewers comments, I choose to reject the paper. """
356,"""Kernel of CycleGAN as a principal homogeneous space""","['Generative models', 'CycleGAN']","""Unpaired image-to-image translation has attracted significant interest due to the invention of CycleGAN, a method which utilizes a combination of adversarial and cycle consistency losses to avoid the need for paired data. It is known that the CycleGAN problem might admit multiple solutions, and our goal in this paper is to analyze the space of exact solutions and to give perturbation bounds for approximate solutions. We show theoretically that the exact solution space is invariant with respect to automorphisms of the underlying probability spaces, and, furthermore, that the group of automorphisms acts freely and transitively on the space of exact solutions. We examine the case of zero pure CycleGAN loss first in its generality, and, subsequently, expand our analysis to approximate solutions for extended CycleGAN loss where identity loss term is included. In order to demonstrate that these results are applicable, we show that under mild conditions nontrivial smooth automorphisms exist. Furthermore, we provide empirical evidence that neural networks can learn these automorphisms with unexpected and unwanted results. We conclude that finding optimal solutions to the CycleGAN loss does not necessarily lead to the envisioned result in image-to-image translation tasks and that underlying hidden symmetries can render the result useless.""","""This paper theoretically studied one of the fundamental issue in CycleGAN (recently gained much attention for image-to-image translation). The authors analyze the space of exact and approximated solutions under automorphisms. Reviewers mostly agree with theoretical value of the paper. Some concerns on practical values are also raised, e.g., limited or no-surprising experimental results. In overall, I think this is a boarderline paper. But, I am a bit toward acceptance as the theoretical contribution is solid, and potentially beneficial to many future works on unpaired image-to-image translation. """
357,"""Unsupervised Intuitive Physics from Past Experiences""","['Intuitive physics', 'Deep learning']","""We consider the problem of learning models of intuitive physics from raw, unlabelled visual input. Differently from prior work, in addition to learning general physical principles, we are also interested in learning ``on the fly'' physical properties specific to new environments, based on a small number of environment-specific experiences. We do all this in an unsupervised manner, using a meta-learning formulation where the goal is to predict videos containing demonstrations of physical phenomena, such as objects moving and colliding with a complex background. We introduce the idea of summarizing past experiences in a very compact manner, in our case using dynamic images, and show that this can be used to solve the problem well and efficiently. Empirically, we show, via extensive experiments and ablation studies, that our model learns to perform physical predictions that generalize well in time and space, as well as to a variable number of interacting physical objects.""","""While the reviewers found the paper interesting, all the reviewers raised concerns about the fairly simple experimental settings, which makes it hard to appreciate the strengths of the proposed method. During rebuttal phase, the reviewers still felt this weakness was not sufficiently addressed."""
358,"""Combining MixMatch and Active Learning for Better Accuracy with Fewer Labels""","['active learning', 'semi-supervised learning']","""We propose using active learning based techniques to further improve the state-of-the-art semi-supervised learning MixMatch algorithm. We provide a thorough empirical evaluation of several active-learning and baseline methods, which successfully demonstrate a significant improvement on the benchmark CIFAR-10, CIFAR-100, and SVHN datasets (as much as 1.5% in absolute accuracy). We also provide an empirical analysis of the cost trade-off between incrementally gathering more labeled versus unlabeled data. This analysis can be used to measure the relative value of labeled/unlabeled data at different points of the learning curve, where we find that although the incremental value of labeled data can be as much as 20x that of unlabeled, it quickly diminishes to less than 3x once more than 2,000 labeled example are observed.""","""This paper extends state of the art semi-supervised learning techniques (i.e., MixMatch) to collect new data adaptively and studies the benefit of getting new labels versus adding more unlabeled data. Active learning is incorporated in a natural and simple (albeit, unsurprising) way and the experiments are convincing that this approach has merit. While the approach works, reviewers were concerned about the novelty of the combination given that its somewhat obvious and straightforward to accomplish. Reviewers were also concerned that the space of both semi-supervised learning algorithms and active learning algorithms was not sufficiently exhaustively studied. As one reviewer points out: neither of these ideas are new or particular to deep learning. Due to lack of novelty, this paper is not suited for a top tier conference. """
359,"""Provable Filter Pruning for Efficient Neural Networks""","['theory', 'compression', 'filter pruning', 'neural networks']","""We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network. Our algorithm uses a small batch of input data points to assign a saliency score to each filter and constructs an importance sampling distribution where filters that highly affect the output are sampled with correspondingly high probability. In contrast to existing filter pruning approaches, our method is simultaneously data-informed, exhibits provable guarantees on the size and performance of the pruned network, and is widely applicable to varying network architectures and data sets. Our analytical bounds bridge the notions of compressibility and importance of network structures, which gives rise to a fully-automated procedure for identifying and preserving filters in layers that are essential to the network's performance. Our experimental evaluations on popular architectures and data sets show that our algorithm consistently generates sparser and more efficient models than those constructed by existing filter pruning approaches. ""","""This paper presents a sampling-based approach for generating compact CNNs by pruning redundant filters. One advantage of the proposed method is a bound for the final pruning error. One of the major concerns during review is the experiment design. The original paper lacks the results on real work dataset like ImageNet. Furthermore, the presentation is a little misleading. The authors addressed most of these problems in the revision. Model compression and purring is a very important field for real world application, hence I choose to accept the paper. """
360,"""Skew-Explore: Learn faster in continuous spaces with sparse rewards""","['reinforcement learning', 'exploration', 'sparse reward']","""In many reinforcement learning settings, rewards which are extrinsically available to the learning agent are too sparse to train a suitable policy. Beside reward shaping which requires human expertise, utilizing better exploration strategies helps to circumvent the problem of policy training with sparse rewards. In this work, we introduce an exploration approach based on maximizing the entropy of the visited states while learning a goal-conditioned policy. The main contribution of this work is to introduce a novel reward function which combined with a goal proposing scheme, increases the entropy of the visited states faster compared to the prior work. This improves the exploration capability of the agent, and therefore enhances the agent's chance to solve sparse reward problems more efficiently. Our empirical studies demonstrate the superiority of the proposed method to solve different sparse reward problems in comparison to the prior work. ""","""While the reviewers generally appreciated the ideas presented in the paper and found the overall aims and motivation of the paper to be compelling, there were too many questions raised about the experiments and the soundness of the technical formulation to accept the paper at this time, and the reviewers did not feel that the authors had adequately addressed these issues in their responses. The main concerns were (1) with the correctness and rigor of the technical derivation, which the reviewers generally found to be somewhat questionable -- while the main idea seems reasonable, the details have a few too many question marks; (2) the experimental results have a number of shortcomings that make it difficult to fully understand whether the method really works, and how well."""
361,"""Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning""","['explainability', 'saliency maps', 'representations', 'deep reinforcement learning']","""Saliency maps are frequently used to support explanations of the behavior of deep reinforcement learning (RL) agents. However, a review of how saliency maps are used in practice indicates that the derived explanations are often unfalsifiable and can be highly subjective. We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and assess the degree to which they correspond to the semantics of RL environments. We use Atari games, a common benchmark for deep RL, to evaluate three types of saliency maps. Our results show the extent to which existing claims about Atari games can be evaluated and suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool.""","""This was a contentious paper, with quite a large variance in the ratings, and ultimately a lack of consensus. After reading the paper myself, I found it to be a valuable synthesis of common usage of saliency maps and a critique of their improper interpretation. Further, the demonstration of more rigorous methods of evaluating agents based on salience maps using case studies is quite illustrative and compelling. I think we as a field can agree that wed like to gain better understanding our deep RL models. This is not possible if we dont have a good understanding of the analysis tools were using. R2 rightly pointed out a need for quantitative justification for their results, in the form of statistical tests, which the authors were able to provide, leading the reviewer to revise their score to the highest value of 8. I thank them for instigating the discussion. R1 continues to feel that the lack of a methodological contribution (in the form of improving learning within an agent) is a weakness. However, I dont believe that all papers at deep learning conferences have to have the goal of empirically learning better on some benchmark task or dataset, and that theres room at ICLR for more analysis papers. Indeed, itd be nice to see more papers like this. For this reason, Im inclined to recommend accept for this paper. However this paper does have weaknesses, in that the framework proposed could be made more rigorous and formal. Currently it seems rather adhoc and on a task-by-task basis (ie we need to have access to game states or define them ourselves for the task). Its also disappointing that it doesnt work for recurrent agents, which limits its applicability for analyzing current SOTA deep RL agents. I wonder if authors can comment on possible extensions that would allow for this. """
362,"""Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data""","['Generative models', 'generating synthetic data', 'neural architecture search', 'learning to teach', 'meta-learning']","""This paper investigates the intriguing question of whether we can create learning algorithms that automatically generate training data, learning environments, and curricula in order to help AI agents rapidly learn. We show that such algorithms are possible via Generative Teaching Networks (GTNs), a general approach that is applicable to supervised, unsupervised, and reinforcement learning. GTNs are deep neural networks that generate data and/or training environments that a learner (e.g.\ a freshly initialized neural network) trains on before being tested on a target task. We then differentiate \emph{through the entire learning process} via meta-gradients to update the GTN parameters to improve performance on the target task. GTNs have the beneficial property that they can theoretically generate any type of data or training environment, making their potential impact large. This paper introduces GTNs, discusses their potential, and showcases that they can substantially accelerate learning. We also demonstrate a practical and exciting application of GTNs: accelerating the evaluation of candidate architectures for neural architecture search (NAS), which is rate-limited by such evaluations, enabling massive speed-ups in NAS. GTN-NAS improves the NAS state of the art, finding higher performing architectures when controlling for the search proposal mechanism. GTN-NAS also is competitive with the overall state of the art approaches, which achieve top performance while using orders of magnitude less computation than typical NAS methods. Overall, GTNs represent a first step toward the ambitious goal of algorithms that generate their own training data and, in doing so, open a variety of interesting new research questions and directions.""","""Overview: This paper introduces a method to distill a large dataset into a smaller one that allows for faster training. The main application of this technique being studied is neural architecture search, which can be sped up by quickly evaluating architectures on the generated data rather than slowly evaluating them on the original data. Summary of discussion: During the discussion period, the authors appear to have updated the paper quite a bit, leading to the reviewers feeling more positive about it now than in the beginning. In particular, in the beginning, it appears to have been unclear that the distillation is merely used as a speedup trick, not to generate additional information out of thin air. The reviewers' scores left the paper below the decision boundary, but closely enough so that I read it myself. My own judgement: I like the idea, which I find very novel. However, I have to push back on the authors' claims about their good performance in NAS. This has several reasons: 1. In contrast to what is claimed by the authors, the comparison to graph hypernetworks (Zhang et al) is not fair, since the authors used a different protocol: Zhang et al sampled 800 networks and reported the performance (mean +/- std) of the 10 judged to be best by the hypernetwork. In contrast, the authors of the current paper sampled 1000 networks and reported the performance of the single one judged to be best. They repeated this procedure 5 times to get mean +/- std. 
The best architecture of 1000 is of course more likely to be strong than the average of the top 10 of 800. 2. The comparison to random search with weight sharing (here: 3.92% error) does not appear fair. The cited paper in Table 1 is *not* the paper introducing random search + weight sharing, but the neural architecture optimization paper. The original one reported an error of 2.85% +/- 0.08% with 4.3M params. That paper also has the full source code available, so the authors could have performed a true apples-to-apples comparison. 3. The authors' method requires an additional (one-time) cost for actually creating the 'fake' training data, so their runtimes should be increased by the 8h required for that. 4. The fact that the authors achieve 2.42% error doesn't mean much; that result is just based on scaling the network up to 100M params. (The network obtained by random search also achieves 2.51%.) As it stands, I cannot judge whether the authors' approach yields strong performance for NAS. In order to allow that conclusion, the authors would have to compare to another method based on the same underlying code base and experimental protocol. Also, the authors do not make code available at this time. Their method has a lot of bells and whistles, and I do not expect that I could reproduce it. They promise code, but it is unclear what this would include: the generated training data, code for training the networks, code for the meta-approach, etc? This would have been much easier to judge had the authors made the code available in anonymized fashion during the review. Because of these reasons, in terms of making progress on NAS, the paper does not quite clear the bar for me. The authors also evaluated their method in several other scenarios, including reinforcement learning. These results appear to be very promising, but largely preliminary due to lack of time in the rebuttal phase. Recommendation: The paper is very novel and the results appear very promising, but they are also somewhat preliminary. The reviewers' scores leave the paper just below the acceptance threshold and my own borderline judgement is not positive enough to overrule this. I believe that some more time, and one more iteration of reorganization and review, would allow this paper to ripen into a very strong paper. For a resubmission to the next venue, I would recommend to either perform an apples-to-apples comparison for NAS or reorganize and just use NAS as one of several equally-weighted possible applications. In the current form, I believe the paper is not using its full potential."""
363,"""Smooth Kernels Improve Adversarial Robustness and Perceptually-Aligned Gradients""","['adversarial robustness', 'computer vision', 'smoothness regularization']","""Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans are more sensitive to the lower-frequency (larger-scale) patterns we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel. We apply our regularization onto several popular training methods, demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness. Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients. ""","""The authors propose a regularized for convolutional kernels that seeks to improve adversarial robustness of CNNs and produce more perceptually aligned gradients. While the topic studied by the paper is interesting, reviewers pointed out several deficiencies with the empirical evaluation that call into question the validity of the claims made by the authors. In particular: 1) Adversarial evaluation protocol: There are several red flags in the way the authors perform adversarial evaluation. The authors use a pre-defined adversarial attack toolbox (Foolbox) but are unable to produce successful attacks even for large perturbation radii - this suggests that the attack is not tuned properly. Further, the authors present results over the best case performance over several attacks, which is dubious since the goal of adversarial evaluation is to reveal the worst case performance of the model. 2) Perceptual alignment: The claim of perceptually aligned gradients also does not seem sufficiently justified given the experimental results, since the improvement over the baseline is quite marginal. Here too, the authors report failure of a standard visualization technique that has been successfully used in prior work, calling into question the validity of these results. The authors did not participate in the rebuttal phase and the reviewers maintained their scores after the initial reviews. Overall, given the significant flaws in the empirical evaluation, I recommend that the paper be rejected. I encourage the authors to rerun their experiments following the feedback from reviewers 1 and 3 and resubmit the paper with a more careful empirical evaluation."""
364,"""Filling the Soap Bubbles: Efficient Black-Box Adversarial Certification with Non-Gaussian Smoothing""","['Adversarial Certification', 'Randomized Smoothing', 'Functional Optimization']","""Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning. However, most existing methods only leverage Gaussian smoothing noise and only work for pseudo-formula perturbation. We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks, from a unified functional optimization perspective. Our new framework allows us to identify a key trade-off between accuracy and robustness via designing smoothing distributions, helping to design two new families of non-Gaussian smoothing distributions that work more efficiently for pseudo-formula and pseudo-formula attacks, respectively. Our proposed methods achieve better results than previous works and provide a new perspective on randomized smoothing certification.""","""The authors extend the framework of randomized smoothing to handle non-Gaussian smoothing distribution and use this to show that they can construct smoothed models that perform well against l2 and linf adversarial attacks. They show that the resulting framework can obtain state-of-the-art certified robustness results improving upon prior work. While the paper contains several interesting ideas, the reviewers were concerned about several technical flaws and omissions from the paper: 1) A theorem on strong duality was incorrect in the initial version of the paper, though this was fixed in the rebuttal. However, the reasoning of the authors on the ""fundamental trade-off"" is specific to the particular framework they consider, and is not really a fundamental trade-off. 2) The justification for the new family of distributions constructed by the author is not very clear and the experiments only show marginal improvements over prior work. Thus, the significance of this contribution is not clear. Some of the issues were clarified during the rebuttal, but the reviewers remained unconvinced about the above points. Thus, the paper cannot be accepted in its current form. """
365,"""Learning to Rank Learning Curves""",[],"""Many automated machine learning methods, such as those for hyperparameter and neural architecture optimization, are computationally expensive because they involve training many different model configurations. In this work, we present a new method that saves computational budget by terminating poor configurations early on in the training. In contrast to existing methods, we consider this task as a ranking and transfer learning problem. We qualitatively show that by optimizing a pairwise ranking loss and leveraging learning curves from other data sets, our model is able to effectively rank learning curves without having to observe many or very long learning curves. We further demonstrate that our method can be used to accelerate a neural architecture search by a factor of up to 100 without a significant performance degradation of the discovered architecture. In further experiments we analyze the quality of ranking, the influence of different model components as well as the predictive behavior of the model.""","""Authors propose a new way of early stopping for neural architecture search. In contrast to making keep or kill decisions based on extrapolating the learning curves then making decisions between alternatives, this work learns a model on pairwise comparisons between learning curves directly. Reviewers were concerned with over-claiming of novelty since the original version of this paper overlooked significant hyperparameter tuning works. In a revision, additional experiments were performed using some of the suggested methods but reviewers remained skeptical that the empirical experiments provided enough justification that this work was ready for prime time. """
366,"""Policy Optimization In the Face of Uncertainty""","['Reinforcement Learning', 'Model-based Reinforcement Learning']","""Model-based reinforcement learning has the potential to be more sample efficient than model-free approaches. However, existing model-based methods are vulnerable to model bias, which leads to poor generalization and asymptotic performance compared to model-free counterparts. In this paper, we propose a novel policy optimization framework using an uncertainty-aware objective function to handle those issues. In this framework, the agent simultaneously learns an uncertainty-aware dynamics model and optimizes the policy according to these learned models. Under this framework, the objective function can represented end-to-end as a single computational graph, which allows seamless policy gradient computation via backpropagation through the models. In addition to being theoretically sound, our approach shows promising results on challenging continuous control benchmarks with competitive asymptotic performance and sample complexity compared to state-of-the-art baselines.""","""The main contribution of this work is introducing the uncertainty-aware value function prediction into model-based RL, which can be used to balance the risk and return empirically. The reviewers generally agree that this paper addresses an interesting problem, but there are some concerns that remain (see reviewer comments). I also want to highlight that in terms of empirical results, it is insufficient to present results for 3 different random seeds. To highlight any kind of robustness, I suggest *at least* 10-20 different random seeds; otherwise the findings can/will be misleading. """
367,"""Analyzing Privacy Loss in Updates of Natural Language Models""","['Language Modelling', 'Privacy']","""To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models. In the setting of language models, we show that a comparative analysis of model snapshots before and after an update can reveal a surprising amount of detailed information about the changes in the data used for training before and after the update. We discuss the privacy implications of our findings, propose mitigation strategies and evaluate their effect.""","""This paper report empirical implications of privacy leaks in language models. Reviewers generally agree that the results look promising and interesting, but the paper isnt fully developed yet. A few pointed out that framing the paper better to better indicate broader implications of the observed symptoms would greatly improve the paper. Another pointed out better placing this work in the context of other related work. Overall, this paper could use another cycle of polishing/enhancing the results. """
368,"""Realism Index: Interpolation in Generative Models With Arbitrary Prior""",[],"""In order to perform plausible interpolations in the latent space of a generative model, we need a measure that credibly reflects if a point in an interpolation is close to the data manifold being modelled, i.e. if it is convincing. In this paper, we introduce a realism index of a point, which can be constructed from an arbitrary prior density, or based on FID score approach in case a prior is not available. We propose a numerically efficient algorithm that directly maximises the realism index of an interpolation which, as we theoretically prove, leads to a search of a geodesic with respect to the corresponding Riemann structure. We show that we obtain better interpolations then the classical linear ones, in particular when either the prior density is not convex shaped, or when the soap bubble effect appears.""","""This paper introduces a realism metric for generated covariates and then leverage this metric to produce a novel method of interpolating between two real covariates. The reviewers found the method novel and were satisfied with the response form the authors to their concerns. However, Reviewer 4 did have reservations about the response to his/her points 3 and 4. Moreover, in the discussion period it was decided that while the method was well justified by intuition and theory, the empirical evaluationwhich is the what matters at the end of the daywas unconvincing. """
369,"""Curriculum Learning for Deep Generative Models with Clustering""","['curriculum learning', 'generative adversarial network']","""Training generative models like Generative Adversarial Network (GAN) is challenging for noisy data. A novel curriculum learning algorithm pertaining to clustering is proposed to address this issue in this paper. The curriculum construction is based on the centrality of underlying clusters in data points. The data points of high centrality takes priority of being fed into generative models during training. To make our algorithm scalable to large-scale data, the active set is devised, in the sense that every round of training proceeds only on an active subset containing a small fraction of already trained data and the incremental data of lower centrality. Moreover, the geometric analysis is presented to interpret the necessity of cluster curriculum for generative models. The experiments on cat and human-face data validate that our algorithm is able to learn the optimal generative models (e.g. ProGAN) with respect to specified quality metrics for noisy data. An interesting finding is that the optimal cluster curriculum is closely related to the critical point of the geometric percolation process formulated in the paper.""","""The paper proposes a curriculum learning approach to training generative models like GANs. The reviewers had a number of questions and concerns related to specific details in the paper and experimental results. While the authors were able to address some of these concerns, the reviewers believe that further refinement is necessary before the paper is ready for publication."""
370,"""Neural Arithmetic Unit by reusing many small pre-trained networks""","['NALU', 'feed forward NN']","""We propose a solution for evaluation of mathematical expression. However, instead of designing a single end-to-end model we propose a Lego bricks style architecture. In this architecture instead of training a complex end-to-end neural network, many small networks can be trained independently each accomplishing one specific operation and acting a single lego brick. More difficult or complex task can then be solved using a combination of these smaller network. In this work we first identify 8 fundamental operations that are commonly used to solve arithmetic operations (such as 1 digit multiplication, addition, subtraction, sign calculator etc). These fundamental operations are then learned using simple feed forward neural networks. We then shows that different operations can be designed simply by reusing these smaller networks. As an example we reuse these smaller networks to develop larger and a more complex network to solve n-digit multiplication, n-digit division, and cross product. This bottom-up strategy not only introduces reusability, we also show that it allows to generalize for computations involving n-digits and we show results for up to 7 digit numbers. Unlike existing methods, our solution also generalizes for both positive as well as negative numbers.""","""This paper proposes to train and compose neural networks for the purposes of arithmetic operations. All reviewers agree that the motivation for such a work is unclear, and the general presentation in the paper can be significantly improved. As such, I cannot recommend this paper in its current state for publication. """
371,"""Neural Linear Bandits: Overcoming Catastrophic Forgetting through Likelihood Matching""",[],"""We study neural-linear bandits for solving problems where both exploration and representation learning play an important role. Neural-linear bandits leverage the representation power of deep neural networks and combine it with efficient exploration mechanisms, designed for linear contextual bandits, on top of the last hidden layer. Since the representation is being optimized during learning, information regarding exploration with ""old"" features is lost. Here, we propose the first limited memory neural-linear bandit that is resilient to this catastrophic forgetting phenomenon. We perform simulations on a variety of real-world problems, including regression, classification, and sentiment analysis, and observe that our algorithm achieves superior performance and shows resilience to catastrophic forgetting. ""","""Reviewers found the problem statement having merit, but found the solution not completely justifiable. Bandit algorithms often come with theoretical justification because the feedback is such that the algorithm could be performing horribly without giving any indication of performance loss. With neural networks this is obviously challenging given the lack of supervised learning guarantees, but reviewers remain skeptical and prefer not to speculate based on empirical results. """
372,"""Perceptual Generative Autoencoders""",[],"""Modern generative models are usually designed to match target distributions directly in the data space, where the intrinsic dimensionality of data can be much lower than the ambient dimensionality. We argue that this discrepancy may contribute to the difficulties in training generative models. We therefore propose to map both the generated and target distributions to the latent space using the encoder of a standard autoencoder, and train the generator (or decoder) to match the target distribution in the latent space. The resulting method, perceptual generative autoencoder (PGA), is then incorporated with a maximum likelihood or variational autoencoder (VAE) objective to train the generative model. With maximum likelihood, PGAs generalize the idea of reversible generative models to unrestricted neural network architectures and arbitrary latent dimensionalities. When combined with VAEs, PGAs can generate sharper samples than vanilla VAEs. Compared to other autoencoder-based generative models using simple priors, PGAs achieve state-of-the-art FID scores on CIFAR-10 and CelebA.""","""The authors present a new training procedure for generative models where the target and generated distributions are first mapped to a latent space and the divergence between then is minimised in this latent space. The authors achieve state of the art results on two datasets. All reviewers agreed that the idea was vert interesting and has a lot of potential. Unfortunately, in the initial version of the paper the main section (section 3) was not very clear with confusing notation and statements. I thank the authors for taking this feedback positively and significantly revising the writeup. However, even after revising the writeup some of the ideas are still not clear. In particular, during discussions between the AC and reviewers it was pointed out that the training procedure is still not convincing. It was not clear whether the heuristic combination of the deterministic PGA parts of the objective (3) with the likelihood/VAE based terms (9) and (12,13), was conceptually very sound. Unfortunately, most of the initial discussions with the authors revolved around clarity and once we crossed the ""clarity"" barrier there wasn't enough time to discuss the other technical details of the paper. As a result, even though the paper seems interesting, the initial lack of clarity went against the paper. In summary, based on the reviewer comments, I recommend that the paper cannot be accepted. """
373,"""Towards Disentangling Non-Robust and Robust Components in Performance Metric""","['adversarial examples', 'robust machine learning']","""The vulnerability to slight input perturbations is a worrying yet intriguing property of deep neural networks (DNNs). Though some efforts have been devoted to investigating the reason behind such adversarial behavior, the relation between standard accuracy and adversarial behavior of DNNs is still little understood. In this work, we reveal such relation by first introducing a metric characterizing the standard performance of DNNs. Then we theoretically show this metric can be disentangled into an information-theoretic non-robust component that is related to adversarial behavior, and a robust component. Then, we show by experiments that DNNs under standard training rely heavily on optimizing the non-robust component in achieving decent performance. We also demonstrate current state-of-the-art adversarial training algorithms indeed try to robustify DNNs by preventing them from using the non-robust component to distinguish samples from different categories. Based on our findings, we take a step forward and point out the possible direction of simultaneously achieving decent standard generalization and adversarial robustness. It is hoped that our theory can further inspire the community to make more interesting discoveries about the relation between standard accuracy and adversarial robustness of DNNs.""","""All reviewers suggest rejection. Beyond that, the more knowledgable two have consistent questions about the motivation for using the CCKL objective. As such, the exposition of this paper, and justification of the work could use improvement, so that experienced reviewers understand the contributions of the paper."""
374,"""Characterize and Transfer Attention in Graph Neural Networks""","['Graph Neural Networks', 'Graph Attention Networks', 'Attention', 'Transfer Learning', 'Empirical Study']","""Does attention matter and, if so, when and how? Our study on both inductive and transductive learning suggests that datasets have a strong influence on the effects of attention in graph neural networks. Independent of learning setting, task and attention variant, attention mostly degenerate to simple averaging for all three citation networks, whereas they behave strikingly different in the protein-protein interaction networks and molecular graphs: nodes attend to different neighbors per head and get more focused in deeper layers. Consequently, attention distributions become telltale features of the datasets themselves. We further explore the possibility of transferring attention for graph sparsification and show that, when applicable, attention-based sparsification retains enough information to obtain good performance while reducing computational and storage costs. Finally, we point out several possible directions for further study and transfer of attention.""","""This paper suggests that datasets have a strong influence on the effects of attention in graph neural networks and explores the possibility of transferring attention for graph sparsification, suggesting that attention-based sparsification retains enough information to obtain good performance while reducing computational and storage costs. Unfortunately I cannot recommend acceptance for this paper in its present form. Some concerns raised by the reviewers are: the analysis lacks theoretical insights and does not seem to be very useful in practice; the proposed method for graph sparsification lacks novelty; the experiments are not thorough to validate its usefulness. I encourage the authors to address these concerns in an eventual resubmission. """
375,"""Functional Regularisation for Continual Learning with Gaussian Processes""","['Continual Learning', 'Gaussian Processes', 'Lifelong learning', 'Incremental Learning']","""We introduce a framework for Continual Learning (CL) based on Bayesian inference over the function space rather than the parameters of a deep neural network. This method, referred to as functional regularisation for Continual Learning, avoids forgetting a previous task by constructing and memorising an approximate posterior belief over the underlying task-specific function. To achieve this we rely on a Gaussian process obtained by treating the weights of the last layer of a neural network as random and Gaussian distributed. Then, the training algorithm sequentially encounters tasks and constructs posterior beliefs over the task-specific functions by using inducing point sparse Gaussian process methods. At each step a new task is first learnt and then a summary is constructed consisting of (i) inducing inputs a fixed-size subset of the task inputs selected such that it optimally represents the task and (ii) a posterior distribution over the function values at these inputs. This summary then regularises learning of future tasks, through Kullback-Leibler regularisation terms. Our method thus unites approaches focused on (pseudo-)rehearsal with those derived from a sequential Bayesian inference perspective in a principled way, leading to strong results on accepted benchmarks.""","""The authors introduce a framework for continual learning in neural networks based on sparse Gaussian process methods. The reviewers had a number of questions and concerns, that were adequately addressed during the discussion phase. This is an interesting addition to the continual learning literature. Please be sure to update the paper based on the discussion."""
376,"""On the Convergence of FedAvg on Non-IID Data""","['Federated Learning', 'stochastic optimization', 'Federated Averaging']","""Federated learning enables a large amount of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (\texttt{FedAvg}) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of pseudo-formula for strongly convex and smooth problems, where pseudo-formula is the number of SGDs. Importantly, our bound demonstrates a trade-off between communication-efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations. Furthermore, we provide a necessary condition for \texttt{FedAvg} on non-iid data: the learning rate pseudo-formula must decay, even if full-gradient is used; otherwise, the solution will be (\eta)$ away from the optimal.""","""This manuscript analyzes the convergence of federated learning wit hstragellers, and provides convergence rates. The proof techniques involve bounding the effects of the non-identical distribution due to stragglers and related issues. The manuscript also includes a thorough empirical evaluation. Overall, the reviewers were quite positive about the manuscript, with a few details that should be improved. """
377,"""Transition Based Dependency Parser for Amharic Language Using Deep Learning""","['Amharic dependency parsing', 'arc-eager transition', 'LSTM', 'Transition action prediction', 'Relationship type prediction']","""Researches shows that attempts done to apply existing dependency parser on morphological rich languages including Amharic shows a poor performance. In this study, a dependency parser for Amharic language is implemented using arc-eager transition system and LSTM network. The study introduced another way of building labeled dependency structure by using a separate network model to predict dependency relation. This helps the number of classes to decrease from 2n+2 into n, where n is the number of relationship types in the language and increases the number of examples for each class in the data set. Evaluation of the parser model results 91.54 and 81.4 unlabeled and labeled attachment score respectively. The major challenge in this study was the decrease of the accuracy of labeled attachment score. This is mainly due to the size and quality of the tree-bank available for Amharic language. Improving the tree-bank by increasing the size and by adding morphological information can make the performance of parser better.""","""The paper builds a transition-based dependency parser for Amharic, first predicting transitions and then dependency labels. The model is poorly motivated, and poorly described. The experiments have serious problems with their train/test splits and lack of baseline. The reviewers all convincingly argue for reject. The authors have not responded. """
378,"""How can we generalise learning distributed representations of graphs?""","['graphs', 'distributed representations', 'similarity learning']","""We propose a general framework to construct unsupervised models capable of learning distributed representations of discrete structures such as graphs based on R-Convolution kernels and distributed semantics research. Our framework combines the insights and observations of Deep Graph Kernels and Graph2Vec towards a unified methodology for performing similarity learning on graphs of arbitrary size. This is exemplified by our own instance G2DR which extends Graph2Vec from labelled graphs towards unlabelled graphs and tackles issues of diagonal dominance through pruning of the subgraph vocabulary composing graphs. These changes produce new state of the art results in the downstream application of G2DR embeddings in graph classification tasks over datasets with small labelled graphs in binary classification to multi-class classification on large unlabelled graphs using an off-the-shelf support vector machine. ""","""The paper proposed a general framework to construct unsupervised models for representation learning of discrete structures. The reviewers feel that the approach is taken directly from graph kernels, and the novelty is not high enough. """
379,"""The Early Phase of Neural Network Training""","['empirical', 'learning dynamics', 'lottery tickets', 'critical periods', 'early']","""Recent studies have shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training. For example, sparse, trainable sub-networks emerge (Frankle et al., 2019), gradient descent moves into a small subspace (Gur-Ari et al., 2018), and the network undergoes a critical period (Achille et al., 2019). Here we examine the changes that deep neural networks undergo during this early phase of training. We perform extensive measurements of the network state and its updates during these early iterations of training, and leverage the framework of Frankle et al. (2019) to quantitatively probe the weight distribution and its reliance on various aspects of the dataset. We find that, within this framework, deep networks are not robust to reinitializing with random weights while maintaining signs, and that weight distributions are highly non-independent even after only a few hundred iterations. Despite this, pre-training with blurred inputs or an auxiliary self-supervised task can approximate the changes in supervised networks, suggesting that these changes are label-agnostic, though labels significantly accelerate this process. Together, these results help to elucidate the network changes occurring during this pivotal initial period of learning.""","""This paper studies numerous ways in which the statistics of network weights evolve during network training. Reviewers are not entirely sure what conclusions to make from these studies, and training dynamics can be strongly impacted by arbitrary choices made in the training process. Despite these issues, the reviewers think the observed results are interesting enough to clear the bar for publication."""
380,"""Policy Optimization with Stochastic Mirror Descent""","['reinforcement learning', 'policy gradient', 'stochastic variance reduce gradient', 'sample efficiency', 'stochastic mirror descent']","""Improving sample efficiency has been a longstanding goal in reinforcement learning. In this paper, we propose the pseudo-formula : a sample efficient policy gradient method with stochastic mirror descent. A novel variance reduced policy gradient estimator is the key of pseudo-formula to improve sample efficiency. Our pseudo-formula needs only pseudo-formula sample trajectories to achieve an pseudo-formula -approximate first-order stationary point, which matches the best-known sample complexity. We conduct extensive experiments to show our algorithm outperforms state-of-the-art policy gradient methods in various settings.""","""This paper proposes a new policy gradient method based on stochastic mirror descent and variance reduction. Both theoretical analysis and experiments are provided to demonstrate the sample efficiency of the proposed algorithm. The main concerns of this paper include: (1) unclear presentation in both the main results and the proof; and (2) missing baselines (e.g., HAPG) in the experiments. This paper has been carefully discussed but even after author response and reviewer discussion, it does not gather sufficient support. Note: the authors disclosed their identity by adding the author names in the revision during the author response. After discussion with PC chair, the openreview team helped remove that revision during the reviewer discussion to avoid desk reject. """
381,"""Constant Curvature Graph Convolutional Networks""","['graph convolutional neural networks', 'hyperbolic spaces', 'gyrvector spaces', 'riemannian manifolds', 'graph embeddings']",""" Interest has been rising lately towards methods representing data in non-Euclidean spaces, e.g. hyperbolic or spherical. These geometries provide specific inductive biases useful for certain real-world data properties, e.g. scale-free or hierarchical graphs are best embedded in a hyperbolic space. However, the very popular class of graph neural networks is currently limited to model data only via Euclidean node embeddings and associated vector space operations. In this work, we bridge this gap by proposing mathematically grounded generalizations of graph convolutional networks (GCN) to (products of) constant curvature spaces. We do this by i) extending the gyro-vector space theory from hyperbolic to spherical spaces, providing a unified and smooth view of the two geometries, ii) leveraging gyro-barycentric coordinates that generalize the classic Euclidean concept of the center of mass. Our class of models gives strict generalizations in the sense that they recover their Euclidean counterparts when the curvature goes to zero from either side. Empirically, our methods outperform different types of classic Euclidean GCNs in the tasks of node classification and minimizing distortion for symbolic data exhibiting non-Euclidean behavior, according to their discrete curvature. ""","""This paper proposes using non-Euclidean spaces for GCNs, leveraging the gyrovector space formalism. The model allows products of constant curvature, both positive and negative, generalizing hyperbolic embeddings. Reviewers got mixed impressions on this paper. Whereas some found its methodology compelling and its empirical evaluation satisfactory, it was generally perceived that this paper will greatly benefit from another round of reviewing. In particular, the authors should improve readability of the main text and provide a more thorough discussion on related recent (and concurrent) work. """
382,"""Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning""",[],"""In this paper, we propose a deep reinforcement learning (DRL) based framework to efficiently perform runtime channel pruning on convolutional neural networks (CNNs). Our DRL-based framework aims to learn a pruning strategy to determine how many and which channels to be pruned in each convolutional layer, depending on each specific input instance in runtime. The learned policy optimizes the performance of the network by restricting the computational resource on layers under an overall computation budget. Furthermore, unlike other runtime pruning methods which require to store all channels parameters in inference, our framework can reduce parameters storage consumption at deployment by introducing a static pruning component. Comparison experimental results with existing runtime and static pruning methods on state-of-the-art CNNs demonstrate that our proposed framework is able to provide a tradeoff between dynamic flexibility and storage efficiency in runtime channel pruning. ""","""Main content: Proposes a deep RL unified framework to manage the trade-off between static pruning to decrease storage requirements and network flexibility for dynamic pruning to decrease runtime costs Summary of discussion: reviewer1: Reviewer likes the proposed DRL approach, but writing and algorithmic details are lacking reviewer2: Pruning methods are certainly imortant, but there are details missing wrt the algorithm in the paper. reviewer3: Presents a novel RL algorithm, showing good results on CIFAR10 and ISLVRC2012. Algorithmic details and parameters are not clearly explained. Recommendation: All reviewers liked the work but the writing/algorithmic details are lacking. I recommend Reject. """
383,"""Optimizing Data Usage via Differentiable Rewards""","['data selection', 'multilingual neural machine translation', 'data usage optimzation', 'transfer learning', 'classification']","""To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems. Similarly, a machine learning model could potentially be trained better with a scorer that adapts to its current learning state and estimates the importance of each training data instance. Training such an adaptive scorer efficiently is a challenging problem; in order to precisely quantify the effect of a data instance at a given time during the training, it is typically necessary to first complete the entire training process. To efficiently optimize data usage, we propose a reinforcement learning approach called Differentiable Data Selection (DDS). In DDS, we formulate a scorer network as a learnable function of the training data, which can be efficiently updated along with the main model being trained. Specifically, DDS updates the scorer with an intuitive reward signal: it should up-weigh the data that has a similar gradient with a dev set upon which we would finally like to perform well. Without significant computing overhead, DDS delivers strong and consistent improvements over several strong baselines on two very different tasks of machine translation and image classification.""","""The paper proposes an iterative learning method that jointly trains both a model and a scorer network that places a non-uniform weights on data points, which estimates the importance of each data point for training. This leads to significant improvement on several benchmarks. The reviewers mostly agreed that the approach is novel and that the benchmark results were impressive, especially on Imagenet. There were both clarity issues about methodology and experiments, as well as concerns about several technical issues. The reviewers felt that the rebuttal resolved the majority of minor technical issues, but did not sufficiently clarify the more significant methodological concerns. Thus, I recommend rejection at this time."""
384,"""Energy-based models for atomic-resolution protein conformations""","['energy-based model', 'transformer', 'energy function', 'protein conformation']","""We propose an energy-based model (EBM) of protein conformations that operates at atomic scale. The model is trained solely on crystallized protein data. By contrast, existing approaches for scoring conformations use energy functions that incorporate knowledge of physical principles and features that are the complex product of several decades of research and tuning. To evaluate the model, we benchmark on the rotamer recovery task, the problem of predicting the conformation of a side chain from its context within a protein structure, which has been used to evaluate energy functions for protein design. The model achieves performance close to that of the Rosetta energy function, a state-of-the-art method widely used in protein structure prediction and design. An investigation of the models outputs and hidden representations finds that it captures physicochemical properties relevant to protein energy.""","""The paper proposes a data-driven approach to learning atomic-resolution energy functions. Experiment results show that the proposed energy function is similar to the state-of-art method (Rosetta) based on physical principles and engineered features. The paper addresses an interesting and challenging problem. The results are very promising. It is a good showcase of how ML can be applied to solve an important application problem. For the final version, we suggest that the authors can tune down some claims in the paper to fairly reflect the contribution of the work. """
385,"""A Copula approach for hyperparameter transfer learning""","['Hyperparameter optimization', 'Bayesian Optimization', 'Gaussian Process', 'Copula', 'Transfer-learning']","""Bayesian optimization (BO) is a popular methodology to tune the hyperparameters of expensive black-box functions. Despite its success, standard BO focuses on a single task at a time and is not designed to leverage information from related functions, such as tuning performance metrics of the same algorithm across multiple datasets. In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics. The main idea is to regress the mapping from hyperparameter to metric quantiles with a semi-parametric Gaussian Copula distribution, which provides robustness against different scales or outliers that can occur in different tasks. We introduce two methods to leverage this estimation: a Thompson sampling strategy as well as a Gaussian Copula process using such quantile estimate as a prior. We show that these strategies can combine the estimation of multiple metrics such as runtime and accuracy, steering the optimization toward cheaper hyperparameters for the same level of accuracy. Experiments on an extensive set of hyperparameter tuning tasks demonstrate significant improvements over state-of-the-art methods.""","""This paper tackles the problem of transferring learning between tasks when performing Bayesian hyperparameter optimization. In this setting, tasks can correspond to different datasets or different metrics. The proposed approach uses Gaussian copulas to synchronize the different scales of the considered tasks and uses Thompson Sampling from the resulting Gaussian Copula Process for selecting next hyperparameters. The main weakness of the paper resides in the concerns raised about the experiments. First, the results are hard to interpret, leading to a misunderstanding of performances. Moreover, the considered baselines may not be adapted (they may be trivial). This might be due to a misunderstanding of the paper, which would align with the third major concern, that is the lack of clarity. These points could be addressed in a future version of the work, but it would need to be reviewed again and therefore would be too late for the current camera-ready. Hence, I recommend rejecting this paper."""
386,"""Using Objective Bayesian Methods to Determine the Optimal Degree of Curvature within the Loss Landscape""","['Objective Bayes', 'Information Geometry', 'Artificial Neural Networks']","""The efficacy of the width of the basin of attraction surrounding a minimum in parameter space as an indicator for the generalizability of a model parametrization is a point of contention surrounding the training of artificial neural networks, with the dominant view being that wider areas in the landscape reflect better generalizability by the trained model. In this work, however, we aim to show that this is only true for a noiseless system and in general the trend of the model towards wide areas in the landscape reflect the propensity of the model to overfit the training data. Utilizing the objective Bayesian (Jeffreys) prior we instead propose a different determinant of the optimal width within the parameter landscape determined solely by the curvature of the landscape. In doing so we utilize the decomposition of the landscape into the dimensions of principal curvature and find the first principal curvature dimension of the parameter space to be independent of noise within the training data.""","""There has been significant discussion in the literature on the effect of the properties of the curvature of minima on generalization in deep learning. This paper aims to shed some light on that discussion through the lens of theoretical analysis and the use of a Bayesian Jeffrey's prior. It seems clear that the reviewers appreciated the work and found the analysis insightful. However, a major issue cited by the reviewers is a lack of compelling empirical evidence that the claims of the paper are true. The authors run experiments on very small networks and reviewers felt that the results of these experiments were unlikely to extrapolate to large scale modern models and problems. One reviewer was concerned about the quality of the exposition in terms of the writing and language and care in terminology. Unfortunately, this paper falls below the bar for acceptance, but it seems likely that stronger empirical results and a careful treatment of the writing would make this a much stronger paper for future submission."""
387,"""BlockSwap: Fisher-guided Block Substitution for Network Compression on a Budget""","['model compression', 'architecture search', 'efficiency', 'budget', 'convolutional neural networks']","""The desire to map neural networks to varying-capacity devices has led to the development of a wealth of compression techniques, many of which involve replacing standard convolutional blocks in a large network with cheap alternative blocks. However, not all blocks are created equally; for a required compute budget there may exist a potent combination of many different cheap blocks, though exhaustively searching for such a combination is prohibitively expensive. In this work, we develop BlockSwap: a fast algorithm for choosing networks with interleaved block types by passing a single minibatch of training data through randomly initialised networks and gauging their Fisher potential. These networks can then be used as students and distilled with the original large network as a teacher. We demonstrate the effectiveness of the chosen networks across CIFAR-10 and ImageNet for classification, and COCO for detection, and provide a comprehensive ablation study of our approach. BlockSwap quickly explores possible block configurations using a simple architecture ranking system, yielding highly competitive networks in orders of magnitude less time than most architecture search techniques (e.g. under 5 minutes on a single GPU for CIFAR-10).""","""Two reviewers recommend acceptance. One reviewer is negative, however, does not provide reasons for rejection. The AC read the paper and agrees with the positive reviewers. in that the paper provides value for the community on an important topic of network compression."""
388,"""Learning Temporal Abstraction with Information-theoretic Constraints for Hierarchical Reinforcement Learning""","['hierarchical reinforcement learning', 'temporal abstraction']","""Applying reinforcement learning (RL) to real-world problems will require reasoning about action-reward correlation over long time horizons. Hierarchical reinforcement learning (HRL) methods handle this by dividing the task into hierarchies, often with hand-tuned network structure or pre-defined subgoals. We propose a novel HRL framework TAIC, which learns the temporal abstraction from past experience or expert demonstrations without task-specific knowledge. We formulate the temporal abstraction problem as learning latent representations of action sequences and present a novel approach of regularizing the latent space by adding information-theoretic constraints. Specifically, we maximize the mutual information between the latent variables and the state changes. A visualization of the latent space demonstrates that our algorithm learns an effective abstraction of the long action sequences. The learned abstraction allows us to learn new tasks on higher level more efficiently. We convey a significant speedup in convergence over benchmark learning problems. These results demonstrate that learning temporal abstractions is an effective technique in increasing the convergence rate and sample efficiency of RL algorithms.""","""This paper presents a novel hierarchical reinforcement learning framework, based on learning temporal abstractions from past experience or expert demonstrations using recurrent variational autoencoders and regularising the representations. This is certainly an interesting line of work, but there were two primary areas of concern in the reviews: the clarity of details of the approach, and the lack of comparison to baselines. While the former issue was largely dealt with in the rebuttals, the latter remained an issue for all reviewers. For this reason, I recommend rejection of the paper in its current form."""
389,"""Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks""",[],"""(Frankle & Carbin, 2019) shows that there exist winning tickets (small but critical subnetworks) for dense, randomly initialized networks, that can be trained alone to achieve comparable accuracies to the latter in a similar number of iterations. However, the identification of these winning tickets still requires the costly train-prune-retrain process, limiting their practical benefits. In this paper, we discover for the first time that the winning tickets can be identified at the very early training stage, which we term as Early-Bird (EB) tickets, via low-cost training schemes (e.g., early stopping and low-precision training) at large learning rates. Our finding of EB tickets is consistent with recently reported observations that the key connectivity patterns of neural networks emerge early. Furthermore, we propose a mask distance metric that can be used to identify EB tickets with low computational overhead, without needing to know the true winning tickets that emerge after the full training. Finally, we leverage the existence of EB tickets and the proposed mask distance to develop efficient training methods, which are achieved by first identifying EB tickets via low-cost schemes, and then continuing to train merely the EB tickets towards the target accuracy. Experiments based on various deep networks and datasets validate: 1) the existence of EB tickets, and the effectiveness of mask distance in efficiently identifying them; and 2) that the proposed efficient training via EB tickets can achieve up to 4.7x energy savings while maintaining comparable or even better accuracy, demonstrating a promising and easily adopted method for tackling cost-prohibitive deep network training.""","""This work studies small but critical subnetworks, called winning tickets, that have very similar performance to an entire network, even with much less training. They show how to identify these early in the training of the entire network, saving computation and time in identifying them and then overall for the prediction task as a whole. The reviewers agree this paper is well-presented and of general interest to the community. Therefore, we recommend that the paper be accepted."""
390,"""Progressive Upsampling Audio Synthesis via Effective Adversarial Training""","['audio synthesis', 'sound effect generation', 'generative adversarial network', 'progressive training', 'raw-waveform']","""This paper proposes a novel generative model called PUGAN, which progressively synthesizes high-quality audio in a raw waveform. PUGAN leverages on the recently proposed idea of progressive generation of higher-resolution images by stacking multiple encode-decoder architectures. To effectively apply it to raw audio generation, we propose two novel modules: (1) a neural upsampling layer and (2) a sinc convolutional layer. Compared to the existing state-of-the-art model called WaveGAN, which uses a single decoder architecture, our model generates audio signals and converts them in a higher resolution in a progressive manner, while using a significantly smaller number of parameters, e.g., 20x smaller for 44.1kHz output, than an existing technique called WaveGAN. Our experiments show that the audio signals can be generated in real-time with the comparable quality to that of WaveGAN with respect to the inception scores and the human evaluation.""","""Inspired by WaveGAN, this paper proposes a PUGAN to synthesizes high-quality audio in a raw waveform. The paper is well motivated. But all the reviewers find that the paper is lack of clarity and details, and there are some problems in the experiments."""
391,"""Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality""","['Deep generative models', 'out-of-distribution detection', 'safety']","""Recent work has shown that deep generative models can assign higher likelihood to out-of-distribution data sets than to their training data [Nalisnick et al., 2019; Choi et al., 2019]. We posit that this phenomenon is caused by a mismatch between the model's typical set and its areas of high probability density. In-distribution inputs should reside in the former but not necessarily in the latter, as previous work has presumed [Bishop, 1994]. To determine whether or not inputs reside in the typical set, we propose a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods. The test is model agnostic and widely applicable, only requiring that the likelihood can be computed or closely approximated. We report experiments showing that our procedure can successfully detect the out-of-distribution sets in several of the challenging cases reported by Nalisnick et al. [2019].""","""This paper tackles the problem of detecting out of distribution (OoD) samples. To this end, the authors propose a new approach based on typical sets, i.e. sets of samples whose expected log likelihood approximate the model's entropy. The idea is then to rely on statistical testing using the empirical distribution of model likelihoods in order to determine whether samples lie in the typical set of the considered model. Experiments are provided where the proposed approach show competitive performance on MNIST and natural image tasks. This work has major drawbacks: novelty, theoretical soundness, and robustness in settings with model misspecification. Using the typicality notion has already been explored in Choi. et al. 2019 (for flow-based model), which dampers the novelty of this work. The conditions under which the typicality notion can be used are also not clear, e.g. in the small data regime. Finally, the current experiments are lacking a characterization of robustness to model misspecification. Given these limitations, I recommend to reject this paper. """
392,"""Data-Efficient Image Recognition with Contrastive Predictive Coding""","['Deep learning', 'representation learning', 'contrastive methods', 'unsupervised learning', 'self-supervised learning', 'vision', 'data-efficiency']","""Human observers can learn to recognize new categories of objects from a handful of examples, yet doing so with machine perception remains an open challenge. We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable, as suggested by recent perceptual evidence. We therefore revisit and improve Contrastive Predictive Coding, a recently-proposed unsupervised learning framework, and arrive at a representation which enables generalization from small amounts of labeled data. When provided with only 1% of ImageNet labels (i.e. 13 per class), this model retains a strong classification performance, 73% Top-5 accuracy, outperforming supervised networks by 28% (a 65% relative improvement) and state-of-the-art semi-supervised methods by 14%. We also find this representation to serve as a useful substrate for object detection on the PASCAL-VOC 2007 dataset, approaching the performance of representations trained with a fully annotated ImageNet dataset.""","""The paper tackles the key question of achieving high prediction performances with few labels. The proposed approach builds upon Contrastive Predictive Coding (van den Oord et al. 2018). The contribution lies in i) refining CPC along several axes including model capacity, directional predictions, patch-based augmentation; ii) showing that the refined representation learned by the called CPC.v2 supports an efficient classification in a few-label regime, and can be transferred to another dataset; iii) showing that the auxiliary losses involved in the CPC are not necessarily predictive of the eventual performance of the network. This paper generated a hot discussion. Reviewers were not convinced that the paper contributions are sufficiently innovative to deserve being published at ICLR. Authors argued that novelty does not have to lie in equations, and that the new ideas and evidence presented are worth. The area chair thinks that the paper raises profound questions (e.g., what auxiliary losses are most conducive to learning a good representation; how to divide the computational efforts among the preliminary phase of representation learning and the later phase of classifier learning), but given the number of options and details involved, these results may support several interpretations besides the authors'. The authors might also want to leave the claim about the generality of the CPC++ principles (e.g., regarding audio) for further work - or to bring additional evidence backing up this claim. In conclusion, this paper contains brilliant ideas and I hope to see them published with a strengthened analysis of its components. """
393,"""Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well""","['Large batch training', 'Distributed neural network training', 'Stochastic Weight Averaging']","""We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel. The resulting models generalize equally well as those trained with small mini-batches but are produced in a substantially shorter time. We demonstrate the reduction in training time and the good generalization performance of the resulting models on the computer vision datasets CIFAR10, CIFAR100, and ImageNet.""","""The authors proposed a simple and effective approach to parallel training based on stochastic weight averaging. Moreover, the authors have carefully addressed the reviewer comments in the discussion period, particularly the relation to local SGD, to the satisfaction of reviewers. Local SGD mimics sequential SGD with noise induced by lack of synchronization, whereas SWAP averages multiple samples from a stationary distribution, and synchronizes at the end. Please clarify these points and carefully account for reviewer comments in the final version. Overall, the proposed approach will make an excellent addition to the program, both elegant and practically useful."""
394,"""Gradient-free Neural Network Training by Multi-convex Alternating Optimization""","['neural network', 'alternating minimization', 'global convergence']","""In recent years, stochastic gradient descent (SGD) and its variants have been the dominant optimization methods for training deep neural networks. However, SGD suffers from limitations such as the lack of theoretical guarantees, vanishing gradients, excessive sensitivity to input, and difficulties solving highly non-smooth constraints and functions. To overcome these drawbacks, alternating minimization-based methods for deep neural network optimization have attracted fast-increasing attention recently. As an emerging and open domain, however, several new challenges need to be addressed, including 1) Convergence depending on the choice of hyperparameters, and 2) Lack of unified theoretical frameworks with general conditions. We, therefore, propose a novel Deep Learning Alternating Minimization (DLAM) algorithm to deal with these two challenges. Our innovative inequality-constrained formulation infinitely approximates the original problem with non-convex equality constraints, enabling our proof of global convergence of the DLAM algorithm under mild, practical conditions, regardless of the choice of hyperparameters and wide range of various activation functions. Experiments on benchmark datasets demonstrate the effectiveness of DLAM.""","""The paper proposes a new learning algorithm for deep neural networks that first reformulates the problem as a multi-convex and then uses an alternating update to solve. The reviewers are concerned about the closeness to previous work, comparisons with related work like dlADMM, and the difficulty of the dataset. While the authors proposed the possibility of addressing some of these issues, the reviewers feel that without actually addressing them, the paper is not yet ready for publication. """
395,"""EXPLOITING SEMANTIC COHERENCE TO IMPROVE PREDICTION IN SATELLITE SCENE IMAGE ANALYSIS: APPLICATION TO DISEASE DENSITY ESTIMATION""","['semantic coherence', 'satellite scene image analysis', 'convolutional neural networks', 'disease density']","""High intra-class diversity and inter-class similarity is a characteristic of remote sensing scene image data sets currently posing significant difficulty for deep learning algorithms on classification tasks. To improve accuracy, post-classification methods have been proposed for smoothing results of model predictions. However, those approaches require an additional neural network to perform the smoothing operation, which adds overhead to the task. We propose an approach that involves learning deep features directly over neighboring scene images without requiring use of a cleanup model. Our approach utilizes a siamese network to improve the discriminative power of convolutional neural networks on a pair of neighboring scene images. It then exploits semantic coherence between this pair to enrich the feature vector of the image for which we want to predict a label. Empirical results show that this approach provides a viable alternative to existing methods. For example, our model improved prediction accuracy by 1 percentage point and dropped the mean squared error value by 0.02 over the baseline, on a disease density estimation task. These performance gains are comparable with results from existing post-classification methods, moreover without implementation overheads.""","""This papers proposed a solution to the problem of disease density estimation using satellite scene images. The method combines a classification and regression task. The reviewers were unanimous in their recommendation that the submission not be accepted to ICLR. The main concern was a lack of methodological novelty. The authors responded to reviewer comments, and indicated a list of improvements that still remain to be done indicating that the paper should at least go through another review cycle."""
396,"""Regulatory Focus: Promotion and Prevention Inclinations in Policy Search""","['Reinforcement Learning', 'Regulatory Focus', 'Promotion and Prevention', 'Exploration']","""The estimation of advantage is crucial for a number of reinforcement learning algorithms, as it directly influences the choices of future paths. In this work, we propose a family of estimates based on the order statistics over the path ensemble, which allows one to flexibly drive the learning process in a promotion focus or prevention focus. On top of this formulation, we systematically study the impacts of different regulatory focuses. Our findings reveal that regulatory focus, when chosen appropriately, can result in significant benefits. In particular, for the environments with sparse rewards, promotion focus would lead to more efficient exploration of the policy space; while for those where individual actions can have critical impacts, prevention focus is preferable. On various benchmarks, including MuJoCo continuous control, Terrain locomotion, Atari games, and sparse-reward environments, the proposed schemes consistently demonstrate improvement over mainstream methods, not only accelerating the learning process but also obtaining substantial performance gains.""","""The authors take inspiration from regulatory fit theory and propose a new parameter for policy gradient algorithms in RL that can manage the ""regulatory focus"" of an agent. They hypothesize that this can affect performance in a problem-specific way, especially when trading off between broad exploration and risk. The reviewers expressed concerns about the usefulness of the proposed algorithm in practice and a lack of thorough empirical comparisons or theoretical results. Unfortunately, the authors did not provide a rebuttal, so no further discussion of these issues was possible; thus, I recommend to reject."""
397,"""Training Deep Neural Networks by optimizing over nonlocal paths in hyperparameter space""","['deep learning', 'Hyperparameter optimization', 'dropout']","""Hyperparameter optimization is both a practical issue and an interesting theoretical problem in training of deep architectures. Despite many recent advances the most commonly used methods almost universally involve training multiple and decoupled copies of the model, in effect sampling the hyperparameter space. We show that at a negligible additional computational cost, results can be improved by sampling \emph{nonlocal paths} instead of points in hyperparameter space. To this end we interpret hyperparameters as controlling the level of correlated noise in training, which can be mapped to an effective temperature. The usually independent instances of the model are coupled and allowed to exchange their hyperparameters throughout the training using the well established parallel tempering technique of statistical physics. Each simulation corresponds then to a unique path, or history, in the joint hyperparameter/model-parameter space. We provide empirical tests of our method, in particular for dropout and learning rate optimization. We observed faster training and improved resistance to overfitting and showed a systematic decrease in the absolute validation error, improving over benchmark results.""","""This paper uses a variant of parallel tempering to tune the subset of neural net hyperparameters which control the amount of noise and/or rate of diffusion (e.g. learning rate, batch size). It's certainly an appealing idea to run multiple chains in parallel and periodically propose swaps between them. However, I'm not persuaded about the details. The argumentation in the paper is fairly informal, and it uses ideas from optimization and MCMC somewhat interchangeably. Since the individual chains aren't sampling from any known stationary distribution, it's not clear to me what MH-based swaps will achieve. The authors are upset with one of the reviews and think it misrepresents their paper. However, I find myself agreeing with most of the reviewer's points. Furthermore, as a general principle, the availability of code doesn't by itself make a paper reproducible. One should be able to reproduce it without the code, and one shouldn't need to refer to the code for important details about the algorithm. Another limitation (pointed out by various reviewers) is that there aren't any comparisons against prior work on hyperparameter optimization. Overall, I think there are some promising and appealing ideas in this submission, but it needs to be cleaned up before it's ready for publication at ICLR. """
398,"""Versatile Anomaly Detection with Outlier Preserving Distribution Mapping Autoencoders""","['Anomaly detection', 'outliers', 'deep learning', 'distribution mapping', 'wasserstein autoencoders']",""" State-of-the-art deep learning methods for outlier detection make the assumption that anomalies will appear far away from inlier data in the latent space produced by distribution mapping deep networks. However, this assumption fails in practice, because the divergence penalty adopted for this purpose encourages mapping outliers into the same high-probability regions as inliers. To overcome this shortcoming, we introduce a novel deep learning outlier detection method, called Outlier Preserving Distribution Mapping Autoencoder (OP-DMA), which succeeds to map outliers to low probability regions in the latent space of an autoencoder. For this we leverage the insight that outliers are likely to have a higher reconstruction error than inliers. We thus achieve outlier-preserving distribution mapping through weighting the reconstruction error of individual points by the value of a multivariate Gaussian probability density function evaluated at those points. This weighting implies that outliers will result overall penalty if they are mapped to low-probability regions. We show that if the global minimum of our newly proposed loss function is achieved, then our OP-DMA maps inliers to regions with a Mahalanobis distance less than delta, and outliers to regions past this delta, delta being the inverse Chi Squared CDF evaluated at (1-alpha) with alpha the percentage of outliers in the dataset. Our experiments confirm that OP-DMA consistently outperforms the state-of-art methods on a rich variety of outlier detection benchmark datasets.""","""This paper proposes an outlier detection method that maps outliers to low probability regions of the latent space. The novelty is in proposing a weighted reconstruction error penalizing the mapping of outliers into high probability regions. The reviewers find the idea promising. They have also raised several questions. It seems the questions are at least partially addressed in the rebuttal, and as a result one of our expert reviewers (R5) has increased their score from WR to WA. But since we did not have a champion for this paper and its overall score is not high enough, I can only recommend a reject at this stage."""
399,"""Discovering Motor Programs by Recomposing Demonstrations""","['Learning from Demonstration', 'Imitation Learning', 'Motor Primitives']","""In this paper, we present an approach to learn recomposable motor primitives across large-scale and diverse manipulation demonstrations. Current approaches to decomposing demonstrations into primitives often assume manually defined primitives and bypass the difficulty of discovering these primitives. On the other hand, approaches in primitive discovery put restrictive assumptions on the complexity of a primitive, which limit applicability to narrow tasks. Our approach attempts to circumvent these challenges by jointly learning both the underlying motor primitives and recomposing these primitives to form the original demonstration. Through constraints on both the parsimony of primitive decomposition and the simplicity of a given primitive, we are able to learn a diverse set of motor primitives, as well as a coherent latent representation for these primitives. We demonstrate both qualitatively and quantitatively, that our learned primitives capture semantically meaningful aspects of a demonstration. This allows us to compose these primitives in a hierarchical reinforcement learning setup to efficiently solve robotic manipulation tasks like reaching and pushing. Our results may be viewed at pseudo-url. ""","""The work presents a novel and effective solution to learning reusable motor skills. The urgency of this problem and the considerable rebuttal of the authors merits publication of this paper, which is not perfect but needs community attention."""
400,"""Training Neural Networks for and by Interpolation""","['optimization', 'adaptive learning-rate', 'Polyak step-size', 'Newton-Raphson']","""In modern supervised learning, many deep neural networks are able to interpolate the data: the empirical loss can be driven to near zero on all samples simultaneously. In this work, we explicitly exploit this interpolation property for the design of a new optimization algorithm for deep learning. Specifically, we use it to compute an adaptive learning-rate in closed form at each iteration. This results in the Adaptive Learning-rates for Interpolation with Gradients (ALI-G) algorithm. ALI-G retains the main advantage of SGD which is a low computational cost per iteration. But unlike SGD, the learning-rate of ALI-G uses a single constant hyper-parameter and does not require a decay schedule, which makes it considerably easier to tune. We provide convergence guarantees of ALI-G in the stochastic convex setting. Notably, all our convergence results tackle the realistic case where the interpolation property is satisfied up to some tolerance. We provide experiments on a variety of architectures and tasks: (i) learning a differentiable neural computer; (ii) training a wide residual network on the SVHN data set; (iii) training a Bi-LSTM on the SNLI data set; and (iv) training wide residual networks and densely connected networks on the CIFAR data sets. ALI-G produces state-of-the-art results among adaptive methods, and even yields comparable performance with SGD, which requires manually tuned learning-rate schedules. Furthermore, ALI-G is simple to implement in any standard deep learning framework and can be used as a drop-in replacement in existing code.""","""This paper uses the interpolation property to design a new optimization algorithm for deep learning, which computes an adaptive learning-rate in closed form at each iteration. The authors also analyzed the convergence rate of the proposed algorithm in the stochastic convex optimization setting. Experiments on several benchmark neural networks and datasets verify the effectiveness of the proposed algorithm. This is a borderline paper and has been carefully discussed. The main objection of the reviewers include: (1) The interplay between regularization and the interpolation property is not clear; and (2) the proposed algorithm is no better than SGD in any of the benchmarks except one, where SGD's learning rate is set to be a constant. After the author response, this paper still does not gather sufficient support. So I encourage the authors to improve this paper and resubmit it to future conference."""
401,"""Isolating Latent Structure with Cross-population Variational Autoencoders""","['variational autoencoder', 'latent variable model', 'probabilistic graphical model', 'machine learning', 'deep learning', 'continual learning']","""A significant body of recent work has examined variational autoencoders as a powerful approach for tasks which involve modeling the distribution of complex data such as images and text. In this work, we present a framework for modeling multiple data sets which come from differing distributions but which share some common latent structure. By incorporating architectural constraints and using a mutual information regularized form of the variational objective, our method successfully models differing data populations while explicitly encouraging the isolation of the shared and private latent factors. This enables our model to learn useful shared structure across similar tasks and to disentangle cross-population representations in a weakly supervised way. We demonstrate the utility of our method on several applications including image denoising, sub-group discovery, and continual learning.""","""The paper proposes a hierarchical Bayesian model over multiple data sets that has both data set specific as well as shared parameters. The data set specific parameters are further encouraged to only capture aspects that vary across data sets by an addition mutual information contribution to the training loss. The proposed method is compared to standard VAEs on multiple data sets. The reviewers agree that the main approach of the paper is sensible. However, concerns were raised about general novelty, about the theoretical justification for the proposed loss function and about the lack of non-trivial baselines. The authors' rebuttal did not manage to full address these points. Based on the reviews and my own reading, I think this paper is slightly below acceptance threshold."""
402,"""Sharing Knowledge in Multi-Task Deep Reinforcement Learning""","['Deep Reinforcement Learning', 'Multi-Task']","""We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning. We leverage the assumption that learning from different tasks, sharing common properties, is helpful to generalize the knowledge of them resulting in a more effective feature extraction compared to learning a single task. Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms. We prove this by providing theoretical guarantees that highlight the conditions for which is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting. In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks showing significant improvements over the single-task counterparts in terms of sample efficiency and performance.""","""This paper considers the benefits of deep multi-task RL with shared representations, by deriving bounds for multi-task approximate value and policy iteration bounds. This shows both theoretically and empirically that shared representations across multiple tasks can outperform single task performance. There were a number of minor concerns from the reviewers regarding relation to prior work and details of the analysis, but these were clarified in the discussion. This paper adds important theoretical analysis to the literature, and so I recommend it is accepted."""
403,"""Crafting Data-free Universal Adversaries with Dilate Loss""",[],"""We introduce a method to create Universal Adversarial Perturbations (UAP) for a given CNN in a data-free manner. Data-free approaches suite scenarios where the original training data is unavailable for crafting adversaries. We show that the adversary generation with full training data can be approximated to a formulation without data. This is realized through a sequential optimization of the adversarial perturbation with the proposed dilate loss. Dilate loss basically maximizes the Euclidean norm of the output before nonlinearity at any layer. By doing so, the perturbation constrains the ReLU activation function at every layer to act roughly linear for data points and thus eliminate the dependency on data for crafting UAPs. Extensive experiments demonstrate that our method not only has theoretical support, but achieves higher fooling rate than the existing data-free work. Furthermore, we evidence improvement in limited data cases as well.""","""This paper focuses on finding universal adversarial perturbations, that is, a single noise pattern that can be applied to any input to fool the network in many cases. Further more, it focuses on the data-free setting, where such a perturbation is found without having access to data (images) from the distribution that train- and test data comes from. The reviewers were very conflicted about this paper. Among others, the strong experimental results and the clarity of writing and analysis were praised. However, there was also criticism of the amount of novelty compared to GDUAP, on the strong assumptions needed (potentially limiting the applicability), and on some weakness in the theoretical analysis. In the end, the paper seems in current form not convincing enough for me to recommend acceptance for ICLR. """
404,"""Rigging the Lottery: Making All Tickets Winners""","['sparse training', 'sparsity', 'pruning', 'lottery tickets', 'imagenet', 'resnet', 'mobilenet', 'efficiency', 'optimization', 'local minima']","""Sparse neural networks have been shown to yield computationally efficient networks with improved inference times. There is a large body of work on training dense networks to yield sparse networks for inference (Molchanov et al., 2017;Zhu & Gupta, 2018; Louizos et al., 2017; Li et al., 2016; Guo et al., 2016). This limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires less floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results with ResNet-50, MobileNet v1 and MobileNet v2 on the ImageNet-2012 dataset. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static.""","""A somewhat new approach to growing sparse networks. Experimental validation is good, focussing on ImageNet and CIFAR-10, plus experiments on language modelling. Though efficient in computation and storage size, the approach does not have a theoretical foundation. That does not agree with the intended scope of ICLR. I strongly suggest the authors submit elsewhere."""
405,"""Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model""",[],"""Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these kinds of observation spaces present a number of challenges in practice, since the policy must now solve two problems: a representation learning problem, and a task learning problem. In this paper, we aim to explicitly learn representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC learns a compact latent representation space using a stochastic sequential latent variable model, and then learns a critic model within this latent space. By learning a critic within a compact state space, SLAC can learn much more efficiently than standard RL methods. The proposed model improves performance substantially over alternative representations as well, such as variational autoencoders. In fact, our experimental evaluation demonstrates that the sample efficiency of our resulting method is comparable to that of model-based RL methods that directly use a similar type of model for control. Furthermore, our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks. Our code and videos of our results are available at our website.""","""An actor-critic method is introduced that explicitly aims to learn a good representation using a stochastic latent variable model. There is disagreement among the reviewers regarding the significance of this paper. Two of the three reviewers argue that several strong claims made in the paper that are not properly backed up by evidence. In particular, it is not sufficiently clear to what degree the shown performance improvement is due to the stochastic nature of the model used, one of the key points of the paper. I recommend that the authors provide more empirical evidence to back up their claims and then resubmit."""
406,"""Shallow VAEs with RealNVP Prior Can Perform as Well as Deep Hierarchical VAEs""","['Variational Auto-encoder', 'RealNVP', 'learnable prior']","""Using powerful posterior distributions is a popular technique in variational inference. However, recent works showed that the aggregated posterior may fail to match unit Gaussian prior, even with expressive posteriors, thus learning the prior becomes an alternative way to improve the variational lower-bound. We show that using learned RealNVP prior and just one latent variable in VAE, we can achieve test NLL comparable to very deep state-of-the-art hierarchical VAE, outperforming many previous works with complex hierarchical VAE architectures. We hypothesize that, when coupled with Gaussian posteriors, the learned prior can encourage appropriate posterior overlapping, which is likely to improve reconstruction loss and lower-bound, supported by our experimental results. We demonstrate that, with learned RealNVP prior, -VAE can have better rate-distortion curve than using fixed Gaussian prior.""","""This paper provides an interesting insight into the fitting of variational autoencoders. While much of the recent literature focuses on training ever more expressive models, the authors demonstrate that learning a flexible prior can provide an equally strong model. Unfortunately one review is somewhat terse. Among the other reviews, one reviewer found the paper very interesting and compelling but did not feel comfortable raising their score to ""accept"" in the discussion phase citing a lack of compelling empirical results in compared to baselines. Both reviewers were concerned about novelty in light of Huang et al., in which a RealNVP prior is also learned in a VAE. AnonReviewer3 also felt that the experiments were not thorough enough to back up the claims in the paper. Unfortunately, for these reasons the recommendation is to reject. More compelling empirical results with carefully chosen baselines to back up the claims of the paper and comparison to existing literature (Huang et al) would make this paper much stronger. """
407,"""Decoupling Hierarchical Recurrent Neural Networks With Locally Computable Losses""",[],"""Learning long-term dependencies is a key long-standing challenge of recurrent neural networks (RNNs). Hierarchical recurrent neural networks (HRNNs) have been considered a promising approach as long-term dependencies are resolved through shortcuts up and down the hierarchy. Yet, the memory requirements of Truncated Backpropagation Through Time (TBPTT) still prevent training them on very long sequences. In this paper, we empirically show that in (deep) HRNNs, propagating gradients back from higher to lower levels can be replaced by locally computable losses, without harming the learning capability of the network, over a wide range of tasks. This decoupling by local losses reduces the memory requirements of training by a factor exponential in the depth of the hierarchy in comparison to standard TBPTT.""","""All reviewers gave this paper a score of 1. The AC recommends rejection."""
408,"""Evaluations and Methods for Explanation through Robustness Analysis""","['Interpretability', 'Explanations', 'Adversarial Robustness']","""Among multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive way to explain a model. In this paper, we establish the link between a set of features to a prediction with a new evaluation criteria, robustness analysis, which measures the minimum tolerance of adversarial perturbation. By measuring the tolerance level for an adversarial attack, we can extract a set of features that provides most robust support for a current prediction, and also can extract a set of features that contrasts the current prediction to a target class by setting a targeted adversarial attack. By applying this methodology to various prediction tasks across multiple domains, we observed the derived explanations are indeed capturing the significant feature set qualitatively and quantitatively.""","""The paper proposes an approach for finding an explainable subset of features by choosing features that simultaneously are: most important for the prediction task, and robust against adversarial perturbation. The paper provides quantitative and qualitative evidence that the proposed method works. The paper had two reviews (both borderline), and the while the authors responded enthusiastically, the reviewers did not further engage during the discussion period. The paper has a promising idea, but the presentation and execution in its current form have been found to be not convincing by the reviewers. Unfortunately, the submission as it stands is not yet suitable for ICLR."""
409,"""Deep Hierarchical-Hyperspherical Learning (DH^2L)""",[],"""Regularization is known to be an inexpensive and reasonable solution to alleviate over-fitting problems of inference models, including deep neural networks. In this paper, we propose a hierarchical regularization which preserves the semantic structure of a sample distribution. At the same time, this regularization promotes diversity by imposing distance between parameter vectors enlarged within semantic structures. To generate evenly distributed parameters, we constrain them to lie on \emph{hierarchical hyperspheres}. Evenly distributed parameters are considered to be less redundant. To define hierarchical parameter space, we propose to reformulate the topology space with multiple hypersphere space. On each hypersphere space, the projection parameter is defined by two individual parameters. Since maximizing groupwise pairwise distance between points on hypersphere is nontrivial (generalized Thomson problem), we propose a new discrete metric integrated with continuous angle metric. Extensive experiments on publicly available datasets (CIFAR-10, CIFAR-100, CUB200-2011, and Stanford Cars), our proposed method shows improved generalization performance, especially when the number of super-classes is larger.""","""The paper proposes a hierarchical diversity promoting regularizer for neural networks. Experiments are shown with this regularizer applied to the last fully-connected layer of the network, in addition to L2 and energy regularizers on other layers. Reviewers found the paper well-motivated but had concerns on writing/readability of the paper and that it provides only marginal improvements over existing simple regularizers such as L2. I would encourage the authors to look for scenarios where the proposed regularizer can show clear improvements and resubmit to a future venue. """
410,"""Self-Attentional Credit Assignment for Transfer in Reinforcement Learning""","['reinforcement learning', 'transfer learning', 'credit assignment']","""The ability to transfer knowledge to novel environments and tasks is a sensible desiderata for general learning agents. Despite the apparent promises, transfer in RL is still an open and little exploited research area. In this paper, we take a brand-new perspective about transfer: we suggest that the ability to assign credit unveils structural invariants in the tasks that can be transferred to make RL more sample efficient. Our main contribution is Secret, a novel approach to transfer learning for RL that uses a backward-view credit assignment mechanism based on a self-attentive architecture. Two aspects are key to its generality: it learns to assign credit as a separate offline supervised process and exclusively modifies the reward function. Consequently, it can be supplemented by transfer methods that do not modify the reward function and it can be plugged on top of any RL algorithm.""","""The paper introduces a novel approach to transfer learning in RL based on credit assignment. The reviewers had quite diverse opinions on this paper. The strength of the paper is that it introduces an interesting new direction for transfer learning in RL. However, there are some questions regarding design choices and whether the experiments sufficiently validate the idea (i.e., the sensitivity to hyperparameters is a question that is not sufficiently addressed). Overall, this research has great potential. However, a more extensive empirical study is necessary before it can be accepted."""
411,"""Critical initialisation in continuous approximations of binary neural networks""",[],"""The training of stochastic neural network models with binary ( pseudo-formula ) weights and activations via continuous surrogate networks is investigated. We derive new surrogates using a novel derivation based on writing the stochastic neural network as a Markov chain. This derivation also encompasses existing variants of the surrogates presented in the literature. Following this, we theoretically study the surrogates at initialisation. We derive, using mean field theory, a set of scalar equations describing how input signals propagate through the randomly initialised networks. The equations reveal whether so-called critical initialisations exist for each surrogate network, where the network can be trained to arbitrary depth. Moreover, we predict theoretically and confirm numerically, that common weight initialisation schemes used in standard continuous networks, when applied to the mean values of the stochastic binary weights, yield poor training performance. This study shows that, contrary to common intuition, the means of the stochastic binary weights should be initialised close to 1$, for deeper networks to be trainable.""","""The authors study neural networks with binary weights or activations, and the so-called ""differentiable surrogates"" used to train them. They present an analysis that unifies previously proposed surrogates and they study critical initialization of weights to facilitate trainability. The reviewers agree that the main topic of the paper is important (in particular initialization heuristics of neural networks), however they found the presentation of the content lacking in clarity as well as in clearly emphasizing the main contributions. The authors imporved the readability of the manuscript in the rebuttal. This paper seems to be at acceptance threshold and 2 of 3 reviewers indicated low confidence. Not being familiar with this line of work, I recommend acceptance following the average review score."""
412,"""MMD GAN with Random-Forest Kernels""","['GANs', 'MMD', 'kernel', 'random forest', 'unbiased gradients']","""In this paper, we propose a novel kind of kernel, random forest kernel, to enhance the empirical performance of MMD GAN. Different from common forests with deterministic routings, a probabilistic routing variant is used in our innovated random-forest kernel, which is possible to merge with the CNN frameworks. Our proposed random-forest kernel has the following advantages: From the perspective of random forest, the output of GAN discriminator can be viewed as feature inputs to the forest, where each tree gets access to merely a fraction of the features, and thus the entire forest benefits from ensemble learning. In the aspect of kernel method, random-forest kernel is proved to be characteristic, and therefore suitable for the MMD structure. Besides, being an asymmetric kernel, our random-forest kernel is much more flexible, in terms of capturing the differences between distributions. Sharing the advantages of CNN, kernel method, and ensemble learning, our random-forest kernel based MMD GAN obtains desirable empirical performances on CIFAR-10, CelebA and LSUN bedroom data sets. Furthermore, for the sake of completeness, we also put forward comprehensive theoretical analysis to support our experimental results.""","""Reviewers raise the serious issue that the proof of Theorem 2 is plagiarized from Theorem 1 of ""Demystifying MMD GANs"" (pseudo-url). With no response from the authors, this is a clear reject. """
413,"""GraphSAINT: Graph Sampling Based Inductive Learning Method""","['Graph Convolutional Networks', 'Graph sampling', 'Network embedding']","""Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs. To scale GCNs to large graphs, state-of-the-art methods use various layer sampling techniques to alleviate the ""neighbor explosion"" problem during minibatch training. We propose GraphSAINT, a graph sampling based inductive learning method that improves training efficiency and accuracy in a fundamentally different way. By changing perspective, GraphSAINT constructs minibatches by sampling the training graph, rather than the nodes or edges across GCN layers. Each iteration, a complete GCN is built from the properly sampled subgraph. Thus, we ensure fixed number of well-connected nodes in all layers. We further propose normalization technique to eliminate bias, and sampling algorithms for variance reduction. Importantly, we can decouple the sampling from the forward and backward propagation, and extend GraphSAINT with many architecture variants (e.g., graph attention, jumping connection). GraphSAINT demonstrates superior performance in both accuracy and training time on five large graphs, and achieves new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970). ""","""All three reviewers advocated acceptance. The AC agrees, feeling the paper is interesting. """
414,"""Oblique Decision Trees from Derivatives of ReLU Networks""","['oblique decision trees', 'ReLU networks']","""We show how neural models can be used to realize piece-wise constant functions such as decision trees. The proposed architecture, which we call locally constant networks, builds on ReLU networks that are piece-wise linear and hence their associated gradients with respect to the inputs are locally constant. We formally establish the equivalence between the classes of locally constant networks and decision trees. Moreover, we highlight several advantageous properties of locally constant networks, including how they realize decision trees with parameter sharing across branching / leaves. Indeed, only pseudo-formula neurons suffice to implicitly model an oblique decision tree with pseudo-formula leaf nodes. The neural representation also enables us to adopt many tools developed for deep networks (e.g., DropConnect (Wan et al., 2013)) while implicitly training decision trees. We demonstrate that our method outperforms alternative techniques for training oblique decision trees in the context of molecular property classification and regression tasks. ""","""This paper leverages the piecewise linearity of predictions in ReLU neural networks to encode and learn piecewise constant predictors akin to oblique decision trees. The reviewers think the paper is interesting, and the idea is clever. The paper can be further improved in experiments. This includes comparison to ensembles of traditional trees or (in some cases) simple ReLU networks. Also the tradeoffs other than accuracy between the method and baselines are also interesting. """
415,"""INFERENCE, PREDICTION, AND ENTROPY RATE OF CONTINUOUS-TIME, DISCRETE-EVENT PROCESSES""",['continuous-time prediction'],"""The inference of models, prediction of future symbols, and entropy rate estimation of discrete-time, discrete-event processes is well-worn ground. However, many time series are better conceptualized as continuous-time, discrete-event processes. Here, we provide new methods for inferring models, predicting future symbols, and estimating the entropy rate of continuous-time, discrete-event processes. The methods rely on an extension of Bayesian structural inference that takes advantage of neural networks universal approximation power. Based on experiments with simple synthetic data, these new methods seem to be competitive with state-of- the-art methods for prediction and entropy rate estimation as long as the correct model is inferred.""","""The authors present a Bayesian model for time series which are represented as discrete events in continuous time and describe methods for doing parameter inference, future event prediction and entropy rate estimation for such processes. However, the reviewers find that the novelty of the paper is not high enough, and without sufficient acknowledgement and comparison to existing literature."""
416,"""Wildly Unsupervised Domain Adaptation and Its Powerful and Efficient Solution""",[],"""In unsupervised domain adaptation (UDA), classifiers for the target domain (TD) are trained with clean labeled data from the source domain (SD) and unlabeled data from TD. However, in the wild, it is hard to acquire a large amount of perfectly clean labeled data in SD given limited budget. Hence, we consider a new, more realistic and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD---we name it wildly UDA (WUDA). We show that WUDA ruins all UDA methods if taking no care of label noise in SD, and to this end, we propose a Butterfly framework, a powerful and efficient solution to WUDA. Butterfly maintains four models (e.g., deep networks) simultaneously, where two take care of all adaptations (i.e., noisy-to-clean, labeled-to-unlabeled, and SD-to-TD-distributional) and then the other two can focus on classification in TD. As a consequence, Butterfly possesses all the conceptually necessary components for solving WUDA. Experiments demonstrate that under WUDA, Butterfly significantly outperforms existing baseline methods.""","""The authors proposed a new problem setting called Wildly UDA (WUDA) where the labels in the source domain are noisy. They then proposed the ""butterfly"" method, combining co-teaching with pseudo labeling and evaluated the method on a range of WUDA problem setup. In general, there is a concern that Butterfly as the combination between co-teaching and pseudo labeling is weak on the novelty side. In this case the value of the method can be assessed by strong empirical result. However as pointed out by Reviewer 3, a common setup (SVHN<-> MNIST) that appeared in many UDA paper was missing in the original draft. The author added the result for SVHN<-> MNIST as a response to review 3, however they only considered the UDA setting, not WUDA, hence the value of that experiment was limited. In addition, there are other UDA methods that achieve significantly better performance on SVHN<->MNIST that should be considered among the baselines. For example DIRT-T (Shu et al 2018) has a second phase where the decision boundary on the target domain is adjusted, and that could provide some robustness against a decision boundary affected by noise. Shu et al (2018) A DIRT-T Approach to Unsupervised Domain Adaptation. ICLR 2018. pseudo-url I suggest that the authors consider performing the full experiment with WUDA using SVHN<->MNIST, and also consider the use of stronger UDA methods among the baseline. """
417,"""CP-GAN: Towards a Better Global Landscape of GANs""","['GAN', 'global landscape', 'non-convex optimization', 'min-max optimization', 'dynamics']","""GANs have been very popular in data generation and unsupervised learning, but our understanding of GAN training is still very limited. One major reason is that GANs are often formulated as non-convex-concave min-max optimization. As a result, most recent studies focused on the analysis in the local region around the equilibrium. In this work, we perform a global analysis of GANs from two perspectives: the global landscape of the outer-optimization problem and the global behavior of the gradient descent dynamics. We find that the original GAN has exponentially many bad strict local minima which are perceived as mode-collapse, and the training dynamics (with linear discriminators) cannot escape mode collapse. To address these issues, we propose a simple modification to the original GAN, by coupling the generated samples and the true samples. We prove that the new formulation has no bad basins, and its training dynamics (with linear discriminators) has a Lyapunov function that leads to global convergence. Our experiments on standard datasets show that this simple loss outperforms the original GAN and WGAN-GP. ""","""The paper is proposed a rejection based on majority reviews."""
418,"""Towards Modular Algorithm Induction""","['algorithm induction', 'reinforcement learning', 'program synthesis', 'modular']","""We present a modular neural network architecture MAIN that learns algorithms given a set of input-output examples. MAIN consists of a neural controller that interacts with a variable-length input tape and learns to compose modules together with their corresponding argument choices. Unlike previous approaches, MAIN uses a general domain-agnostic mechanism for selection of modules and their arguments. It uses a general input tape layout together with a parallel history tape to indicate most recently used locations. Finally, it uses a memoryless controller with a length-invariant self-attention based input tape encoding to allow for random access to tape locations. The MAIN architecture is trained end-to-end using reinforcement learning from a set of input-output examples. We evaluate MAIN on five algorithmic tasks and show that it can learn policies that generalizes perfectly to inputs of much longer lengths than the ones used for training.""","""The reviewers all agreed that although there is a sensible idea here, the method and presentation need a lot of work, especially their treatment of related methods."""
419,"""CloudLSTM: A Recurrent Neural Model for Spatiotemporal Point-cloud Stream Forecasting""","['spatio-temporal forecasting', 'point cloud stream forecasting', 'recurrent neural network']","""This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources. We design a Dynamic Point-cloud Convolution (D-Conv) operator as the core component of CloudLSTMs, which performs convolution directly over point-clouds and extracts local spatial features from sets of neighboring points that surround different elements of the input. This operator maintains the permutation invariance of sequence-to-sequence learning frameworks, while representing neighboring correlations at each time step -- an important aspect in spatiotemporal predictive learning. The D-Conv operator resolves the grid-structural data requirements of existing spatiotemporal forecasting models and can be easily plugged into traditional LSTM architectures with sequence-to-sequence learning and attention mechanisms. We apply our proposed architecture to two representative, practical use cases that involve point-cloud streams, i.e. mobile service traffic forecasting and air quality indicator forecasting. Our results, obtained with real-world datasets collected in diverse scenarios for each use case, show that CloudLSTM delivers accurate long-term predictions, outperforming a variety of neural network models.""","""The paper presents an approach to forecasting over temporal streams of permutation-invariant data such as point clouds. The approach is based on an operator (DConv) that is related to continuous convolution operators such as X-Conv and others. The reviews are split. After the authors' responses, concerns remain and two ratings remain ""3"". The AC agrees with the concerns and recommends against accepting the paper."""
420,"""V4D: 4D Convolutional Neural Networks for Video-level Representation Learning""","['video-level representation learning', 'video action recognition', '4D CNNs']","""Most existing 3D CNN structures for video representation learning are clip-based methods, and do not consider video-level temporal evolution of spatio-temporal features. In this paper, we propose Video-level 4D Convolutional Neural Networks, namely V4D, to model the evolution of long-range spatio-temporal representation with 4D convolutions, as well as preserving 3D spatio-temporal representations with residual connections. We further introduce the training and inference methods for the proposed V4D. Extensive experiments are conducted on three video recognition benchmarks, where V4D achieves excellent results, surpassing recent 3D CNNs by a large margin.""","""This paper proposes video-level 4D CNNs and the corresponding training and inference methods for improved video representation learning. The proposed model achieves state-of-the-art performance on three action recognition tasks. Reviewers agree that the idea well motivated and interesting, but were initially concerned with positioning with respect to the related work, novelty, and computational tractability. As these issues were mostly resolved during the discussion phase, I will recommend the acceptance of this paper. We ask the authors to address the points raised during the discussion to the manuscript, with a focus on the tradeoff between the improved performance and computational cost."""
421,"""Event Discovery for History Representation in Reinforcement Learning""","['reinforcement learning', 'self-supervision', 'POMDP']","""Environments in Reinforcement Learning (RL) are usually only partially observable. To address this problem, a possible solution is to provide the agent with information about past observations. While common methods represent this history using a Recurrent Neural Network (RNN), in this paper we propose an alternative representation which is based on the record of the past events observed in a given episode. Inspired by the human memory, these events describe only important changes in the environment and, in our approach, are automatically discovered using self-supervision. We evaluate our history representation method using two challenging RL benchmarks: some games of the Atari-57 suite and the 3D environment Obstacle Tower. Using these benchmarks we show the advantage of our solution with respect to common RNN-based approaches.""","""The authors propose approaches to handle partial observability in reinforcement learning. The reviewers agree that the paper does not sufficiently justify the methods that are proposed and even the experimental performance shows that the proposed method is not always better than baselines."""
422,"""MissDeepCausal: causal inference from incomplete data using deep latent variable models""","['treatment effect estimation', 'missing values', 'variational autoencoders', 'importance sampling', 'double robustness']","""Inferring causal effects of a treatment, intervention or policy from observational data is central to many applications. However, state-of-the-art methods for causal inference seldom consider the possibility that covariates have missing values, which is ubiquitous in many real-world analyses. Missing data greatly complicate causal inference procedures as they require an adapted unconfoundedness hypothesis which can be difficult to justify in practice. We circumvent this issue by considering latent confounders whose distribution is learned through variational autoencoders adapted to missing values. They can be used either as a pre-processing step prior to causal inference but we also suggest to embed them in a multiple imputation strategy to take into account the variability due to missing values. Numerical experiments demonstrate the effectiveness of the proposed methodology especially for non-linear models compared to competitors.""","""This paper addresses the problem of causal inference from incomplete data. The main idea is to use a latent confounders through a VAE. A multiple imputation strategy is then used to account for missing values. Reviewers have mixed responses to this paper. Initially, the scores were 8,6,3. After discussion the reviewer who rated is 8 reduced their score to 6, but at the same time the score of 3 went up to 6. The reviewers agree that the problem tackled in the paper is difficult, and also acknowledge that the rebuttal of the paper was reasonable and honest. The authors added a simulation study which shows good results. The main argument towards rejection is that the paper does not beat the state of the art. I do think that this is still ok if the paper brings useful insights for the community even though it does not beat the state fo the art. For now, with the current score, the paper does not make the cut. For this reason, I recommend to reject the paper, but I encourage the authors to resubmit this to another venue after improving the paper."""
423,"""How noise affects the Hessian spectrum in overparameterized neural networks""","['noise', 'optimization', 'loss landscape', 'Hessian']","""Stochastic gradient descent (SGD) forms the core optimization method for deep neural networks. While some theoretical progress has been made, it still remains unclear why SGD leads the learning dynamics in overparameterized networks to solutions that generalize well. Here we show that for overparameterized networks with a degenerate valley in their loss landscape, SGD on average decreases the trace of the Hessian of the loss. We also generalize this result to other noise structures and show that isotropic noise in the non-degenerate subspace of the Hessian decreases its determinant. In addition to explaining SGDs role in sculpting the Hessian spectrum, this opens the door to new optimization approaches that may confer better generalization performance. We test our results with experiments on toy models and deep neural networks.""","""The study of the impact of the noise on the Hessian is interesting and I commend the authors for attacking this difficult problem. After the rebuttal and discussion, the reviewers had two concerns: - The strength of the assumptions of the theorem - Assuming the assumptions are reasonable, the conclusions to draw given the current weak link between Hessian and generalization. I'm confident the authors will be able to address these issues for a later submission."""
424,"""Unsupervised Few Shot Learning via Self-supervised Training""","['few shot learning', 'self-supervised learning', 'meta-learning']","""Learning from limited exemplars (few-shot learning) is a fundamental, unsolved problem that has been laboriously explored in the machine learning community. However, current few-shot learners are mostly supervised and rely heavily on a large amount of labeled examples. Unsupervised learning is a more natural procedure for cognitive mammals and has produced promising results in many machine learning tasks. In the current study, we develop a method to learn an unsupervised few-shot learner via self-supervised training (UFLST), which can effectively generalize to novel but related classes. The proposed model consists of two alternate processes, progressive clustering and episodic training. The former generates pseudo-labeled training examples for constructing episodic tasks; and the later trains the few-shot learner using the generated episodic tasks which further optimizes the feature representations of data. The two processes facilitate with each other, and eventually produce a high quality few-shot learner. Using the benchmark dataset Omniglot, we show that our model outperforms other unsupervised few-shot learning methods to a large extend and approaches to the performances of supervised methods. Using the benchmark dataset Market1501, we further demonstrate the feasibility of our model to a real-world application on person re-identification.""","""This paper proposes an approach for unsupervised meta-learning for few-shot learning that iteratively combines clustering and episodic learning. The approach is interesting, and the topic is of interest to the ICLR community. Further, it is nice to see experiments on a more real world setting with the Market1501 dataset. However, the paper lacks any meaningful comparison to prior works on unsupervised meta-learning. While it is accurate that the architecture used and/or assumptions used in this paper are somewhat different from those in prior works, it's important to find a way to compare to at least one of these prior methods in a meaningful way (e.g. by setting up a controlled comparison by running these prior methods in the experimental set-up considered in this work). Without such as comparison, it's impossible to judge the significance of this work in the context of prior papers. The paper isn't ready for publication at ICLR."""
425,"""Where is the Information in a Deep Network?""","['Information', 'Learning Dynamics', 'PAC-Bayes', 'Deep Learning']","""Whatever information a deep neural network has gleaned from past data is encoded in its weights. How this information affects the response of the network to future data is largely an open question. In fact, even how to define and measure information in a network entails some subtleties. We measure information in the weights of a deep neural network as the optimal trade-off between accuracy of the network and complexity of the weights relative to a prior. Depending on the prior, the definition reduces to known information measures such as Shannon Mutual Information and Fisher Information, but in general it affords added flexibility that enables us to relate it to generalization, via the PAC-Bayes bound, and to invariance. For the latter, we introduce a notion of effective information in the activations, which are deterministic functions of future inputs. We relate this to the Information in the Weights, and use this result to show that models of low (information) complexity not only generalize better, but are bound to learn invariant representations of future inputs. These relations hinge not only on the architecture of the model, but also on how it is trained.""","""This paper is full of ideas. However, a logical argument is only as strong as its weakest link, and I believe the current paper has some weak links. For example, the attempt to tie the behavior of SGD to free energy minimization relies on unrealistic approximations. Second, the bounds based on limiting flat priors become trivial. The authors in-depth response to my own review was much appreciated, especially given its last minute appearance. Unfortunately, I was not convinced by the arguments. In part, the authors argue that the logical argument they are making is not sensitive to certain issues that I raised, but this only highlights for me that the argument being made is not very precise. I can imagine a version of this work with sharper claims, built on clearly stated assumptions/conjectures about SGD's dynamics, RATHER THAN being framed as the consequences of clearly inaccurate approximations. The behavior of diffusions can be presented as evidence that the assumptions/conjectures (that cannot be proven at the moment, but which are needed to complete the logical argument) are reasonable. However, I am also not convinced that it is trivial to do this, and so the community must have a chance to review a major revision."""
426,"""Picking Winning Tickets Before Training by Preserving Gradient Flow""","['neural network', 'pruning before training', 'weight pruning']","""Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels. Our code is made public at: pseudo-url.""","""This paper proposes a method to improve the training of sparse network by ensuring the gradient is preserved at initialization. The reviewers found that the approach was well motivated and well explained. The experimental evaluation considers challenging benchmarks such as Imagenet and includes strong baselines. """
427,"""LAMOL: LAnguage MOdeling for Lifelong Language Learning""","['NLP', 'Deep Learning', 'Lifelong Learning']","""Most research on lifelong learning applies to images or games, but not language. We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling. LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples. When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task. The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model. Overall, LAMOL outperforms previous methods by a considerable margin and is only 2-3% worse than multitasking, which is usually considered the LLL upper bound. The source code is available at pseudo-url.""","""This paper proposes a new method for lifelong learning of language using language modeling. Their training scheme is designed so as to prevent catastrophic forgetting. The reviewers found the motivation clear and that the proposed method outperforms prior related work. Reviewers raised concerns about the title and the lack of some baselines which the authors have addressed in the rebuttal and their revision."""
428,"""Information-Theoretic Local Minima Characterization and Regularization""","['local minima', 'generalization', 'regularization', 'deep learning theory']","""Recent advances in deep learning theory have evoked the study of generalizability across different local minima of deep neural networks (DNNs). While current work focused on either discovering properties of good local minima or developing regularization techniques to induce good local minima, no approach exists that can tackle both problems. We achieve these two goals successfully in a unified manner. Specifically, based on the Fisher information we propose a metric both strongly indicative of generalizability of local minima and effectively applied as a practical regularizer. We provide theoretical analysis including a generalization bound and empirically demonstrate the success of our approach in both capturing and improving the generalizability of DNNs. Experiments are performed on CIFAR-10 and CIFAR-100 for various network architectures.""","""This paper proposes using the Fisher information matrix to characterize local minima of deep network loss landscapes to indicate generalizability of a local minimum. While the reviewers agree that this paper contains interesting ideas and its presentation has been substantially improved during the discussion period, there are still issues that remain unanswered, in particular between the main objective/claims and the presented evidence. The paper will benefit from a revision and resubmission to another venue."""
429,"""Temporal-difference learning for nonlinear value function approximation in the lazy training regime""","['deep reinforcement learning', 'function approximation', 'temporal-difference', 'lazy training']","""We discuss the approximation of the value function for infinite-horizon discounted Markov Reward Processes (MRP) with nonlinear functions trained with the Temporal-Difference (TD) learning algorithm. We consider this problem under a certain scaling of the approximating function, leading to a regime called lazy training. In this regime the parameters of the model vary only slightly during the learning process, a feature that has recently been observed in the training of neural networks, where the scaling we study arises naturally, implicit in the initialization of their parameters. Both in the under- and over-parametrized frameworks, we prove exponential convergence to local, respectively global minimizers of the above algorithm in the lazy training regime. We then give examples of such convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.""","""This paper provides convergence results for Non-linear TD under lazy training. This paper tackles the important and challenging task of improving our theoretical understanding of deep RL. We have lots of empirical evidence Q-learning and TD can work with NNs, and even empirical work that attempts to characterize when we should expect it to fail. Such empirical work is always limited and we need theory to supplement our empirical knowledge. This paper attempts to extend recent theoretical work on the convergence of supervised training of NN to the policy evaluation setting with TD. The main issue revolves around the presentation of the work. The reviewers found the paper difficult to read (ok for theory work). But, the paper did not clearly discuss and characterize the significance of the work: how limited is the lazy training regime, when would it be useful? Now that we have this result, do we have any more insights for algorithm design (improving nonlinear TD), or comments about when we expect NN policy evaluation to work? This all reads like: the paper needs a better intro and discussion of the implications and limitations of the results, and indeed this is what the reviewers were looking for. Unfortunately the author response and paper submitted were lacking in this respect. Even the strongest advocates of the work found it severely lacking explanation and discussion. They felt that the paper could be accepted, but only after extensive revision. The direction of the work is important. The work is novel, and not a small undertaking. However, to be published the authors should spend more time explaining the framework, the results, and the limitations to the reader. """
430,"""Optimal Attacks on Reinforcement Learning Policies""",[],"""Control policies, trained using the Deep Reinforcement Learning, have been recently shown to be vulnerable to adversarial attacks introducing even very small perturbations to the policy input. The attacks proposed so far have been designed using heuristics, and build on existing adversarial example crafting techniques used to dupe classifiers in supervised learning. In contrast, this paper investigates the problem of devising optimal attacks, depending on a well-defined attacker's objective, e.g., to minimize the main agent average reward. When the policy and the system dynamics, as well as rewards, are known to the attacker, a scenario referred to as a white-box attack, designing optimal attacks amounts to solving a Markov Decision Process. For what we call black-box attacks, where neither the policy nor the system is known, optimal attacks can be trained using Reinforcement Learning techniques. Through numerical experiments, we demonstrate the efficiency of our attacks compared to existing attacks (usually based on Gradient methods). We further quantify the potential impact of attacks and establish its connection to the smoothness of the policy under attack. Smooth policies are naturally less prone to attacks (this explains why Lipschitz policies, with respect to the state, are more resilient). Finally, we show that from the main agent perspective, the system uncertainties and the attacker can be modelled as a Partially Observable Markov Decision Process. We actually demonstrate that using Reinforcement Learning techniques tailored to POMDP (e.g. using Recurrent Neural Networks) leads to more resilient policies. ""","""This paper studies the problem of devising optimal attacks in deep RL to minimize the main agent average reward. In the white-box attack setting, optimal attacks amounts to solving a Markov Decision Process, while in black-box attacks, optimal attacks can be trained using RL techniques. Empirical efficiency of the attacks was demonstrated. It has valuable contributions on studying the adversarial robustness on deep RL. However, the current motivation and setup needs to be made clearer, and so is not being accepted at this time. We hope for these comments to help improve a future version."""
431,"""How much Position Information Do Convolutional Neural Networks Encode?""","['network understanding', 'absolute position information']","""In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters with a finite spatial extent. An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image. Information concerning absolute position is inherently useful, and it is reasonable to assume that deep CNNs may implicitly learn to encode this information if there is a means to do so. In this paper, we test this hypothesis revealing the surprising degree of absolute position information that is encoded in commonly used neural networks. A comprehensive set of experiments show the validity of this hypothesis and shed light on how and where this information is represented while offering clues to where positional information is derived from in deep CNNs.""","""This paper analyzes the weights associated with filters in CNNs and finds that they encode positional information (i.e. near the edges of the image). A detailed discussion and analysis is performed, which shows where this positional information comes from. The reviewers were happy with your paper and found it to be quite interesting. The reviewers felt your paper addressed an important (and surprising!) issue not previously recognized in CNNs."""
432,"""Fully Convolutional Graph Neural Networks using Bipartite Graph Convolutions""","['Graph Neural Networks', 'Graph Convolutional Networks']","""Graph neural networks have been adopted in numerous applications ranging from learning relational representations to modeling data on irregular domains such as point clouds, social graphs, and molecular structures. Though diverse in nature, graph neural network architectures remain limited by the graph convolution operator whose input and output graphs must have the same structure. With this restriction, representational hierarchy can only be built by graph convolution operations followed by non-parameterized pooling or expansion layers. This is very much like early convolutional network architectures, which later have been replaced by more effective parameterized strided and transpose convolution operations in combination with skip connections. In order to bring a similar change to graph convolutional networks, here we introduce the bipartite graph convolution operation, a parameterized transformation between different input and output graphs. Our framework is general enough to subsume conventional graph convolution and pooling as its special cases and supports multi-graph aggregation leading to a class of flexible and adaptable network architectures, termed BiGraphNet. By replacing the sequence of graph convolution and pooling in hierarchical architectures with a single parametric bipartite graph convolution, (i) we answer the question of whether graph pooling matters, and (ii) accelerate computations and lower memory requirements in hierarchical networks by eliminating pooling layers. Then, with concrete examples, we demonstrate that the general BiGraphNet formalism (iii) provides the modeling flexibility to build efficient architectures such as graph skip connections, and autoencoders.""","""All three reviewers are consistently negative on this paper. Thus a reject is recommended."""
433,"""Generating Semantic Adversarial Examples with Differentiable Rendering""","['semantic adversarial examples', 'inverse graphics', 'differentiable rendering']","""Machine learning (ML) algorithms, especially deep neural networks, have demonstrated success in several domains. However, several types of attacks have raised concerns about deploying ML in safety-critical domains, such as autonomous driving and security. An attacker perturbs a data point slightly in the pixel space and causes the ML algorithm to misclassify (e.g. a perturbed stop sign is classified as a yield sign). These perturbed data points are called adversarial examples, and there are numerous algorithms in the literature for constructing adversarial examples and defending against them. In this paper we explore semantic adversarial examples (SAEs) where an attacker creates perturbations in the semantic space. For example, an attacker can change the background of the image to be cloudier to cause misclassification. We present an algorithm for constructing SAEs that uses recent advances in differential rendering and inverse graphics. ""","""The authors present a way for generating adversarial examples using discrete perturbations, i.e., perturbations that, unlike pixel ones, carry some semantics. Thus, in order to do so, they assume the existence of an inverse graphics framework. Results are conducted in the VKITTI dataset. Overall, the main serious concern expressed by the reviewers has to do with the general applicability of this method, since it requires an inverse graphics framework, which all-in-all is not a trivial task, so it is not clear how such a method would scale to more real datasets. A secondary concern has to do with the fact that the proposed method seems to be mostly a way to perform semantic data-augmentation rather than a way to avoid malicious attacks. In the latter case, we would want to know something about the generality of this method (e.g., what happens a model is trained for this attacks but then a more pixel-based attack is applied). As such, I do not believe that this submission is ready for publication at ICLR. However, the technique is an interesting idea it would be interesting if a later submission would provide empirical evidence about/investigate the generality of this idea. """
434,"""A Dynamic Approach to Accelerate Deep Learning Training""","['reduced precision', 'bfloat16', 'CNN', 'DNN', 'dynamic precision', 'mixed precision']","""Mixed-precision arithmetic combining both single- and half-precision operands in the same operation have been successfully applied to train deep neural networks. Despite the advantages of mixed-precision arithmetic in terms of reducing the need for key resources like memory bandwidth or register file size, it has a limited capacity for diminishing computing costs and requires 32 bits to represent its output operands. This paper proposes two approaches to replace mixed-precision for half-precision arithmetic during a large portion of the training. The first approach achieves accuracy ratios slightly slower than the state-of-the-art by using half-precision arithmetic during more than 99% of training. The second approach reaches the same accuracy as the state-of-the-art by dynamically switching between half- and mixed-precision arithmetic during training. It uses half-precision during more than 94% of the training process. This paper is the first in demonstrating that half-precision can be used for a very large portion of DNNs training and still reach state-of-the-art accuracy.""","""The submission proposes a dynamic approach to training a neural net which switches between half and full-precision operations while maintaining the same classifier accuracy, resulting in a speed up in training time. Empirical results show the value of the approach, and the authors have added additional sensitivity analysis by sweeping over hyperparameters. The reviewers were concerned about the novelty of the approach as well as the robustness of the claims that accuracy can be maintained even in the accelerated, dynamic regime. After discussion there were still concerns about the sensitivity analysis and the significance of the results. The recommendation is to reject the paper at this time."""
435,"""Curvature Graph Network""","['Deep Learning', 'Graph Convolution', 'Ricci Curvature.']","""Graph-structured data is prevalent in many domains. Despite the widely celebrated success of deep neural networks, their power in graph-structured data is yet to be fully explored. We propose a novel network architecture that incorporates advanced graph structural features. In particular, we leverage discrete graph curvature, which measures how the neighborhoods of a pair of nodes are structurally related. The curvature of an edge (x, y) defines the distance taken to travel from neighbors of x to neighbors of y, compared with the length of edge (x, y). It is a much more descriptive feature compared to previously used features that only focus on node specific attributes or limited topological information such as degree. Our curvature graph convolution network outperforms state-of-the-art on various synthetic and real-world graphs, especially the larger and denser ones.""","""The paper presents a novel graph convolutional network by integrating the curvature information (based on the concept of Ricci curvature). The key idea is well motivated and the paper is clearly written. Experimental results show that the proposed curvature graph network methods outperform existing graph convolution algorithms. One potential limitation is the computational cost of computing the Ricci curvature, which is discussed in the appendix. Overall, the concept of using curvature in graph convolutional networks seems like a novel and promising idea, and I also recommend acceptance."""
436,"""Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN)""","['disentanglement', 'nonlinear ICA', 'representation learning', 'feature discovery', 'theoretical justification']","""A central question of representation learning asks under which conditions it is possible to reconstruct the true latent variables of an arbitrarily complex generative process. Recent breakthrough work by Khemakhem et al. (2019) on nonlinear ICA has answered this question for a broad class of conditional generative processes. We extend this important result in a direction relevant for application to real-world data. First, we generalize the theory to the case of unknown intrinsic problem dimension and prove that in some special (but not very restrictive) cases, informative latent variables will be automatically separated from noise by an estimating model. Furthermore, the recovered informative latent variables will be in one-to-one correspondence with the true latent variables of the generating process, up to a trivial component-wise transformation. Second, we introduce a modification of the RealNVP invertible neural network architecture (Dinh et al. (2016)) which is particularly suitable for this type of problem: the General Incompressible-flow Network (GIN). Experiments on artificial data and EMNIST demonstrate that theoretical predictions are indeed verified in practice. In particular, we provide a detailed set of exactly 22 informative latent variables extracted from EMNIST.""","""This paper builds on the recent theoretical work by Khemakhem et al. (2019) to propose a novel flow-based method for performing non-linear ICA. The paper is well written, includes theoretical justifications for the proposed approach and convincing experimental results. Many of the initial minor concerns raised by the reviewers were addressed during the discussion stage, and all of the reviewers agree that this paper is an important contribution to the field and hence should be accepted. Hence, I am happy to recommend the acceptance of this paper as an oral. """
437,"""Molecular Graph Enhanced Transformer for Retrosynthesis Prediction""",[],"""With massive possible synthetic routes in chemistry, retrosynthesis prediction is still a challenge for researchers. Recently, retrosynthesis prediction is formulated as a Machine Translation (MT) task. Namely, since each molecule can be represented as a Simplied Molecular-Input Line-Entry System (SMILES) string, the process of synthesis is analogized to a process of language translation from reactants to products. However, the MT models that applied on SMILES data usually ignore the information of natural atomic connections and the topology of molecules. In this paper, we propose a Graph Enhanced Transformer (GET) framework, which adopts both the sequential and graphical information of molecules. Four different GET designs are proposed, which fuse the SMILES representations with atom embedding learned from our improved Graph Neural Network (GNN). Empirical results show that our model signicantly outperforms the Transformer model in test accuracy.""","""Several approaches can be used to feed structured data to a neural network, such as convolutions or recurrent network. This paper proposes to combine both roads, by presenting molecular structures to the network using both their graph structured and a serialized representation (SMILES), that are processed by a framework combining the strenth of Graph Neural Network and the sequential transformer architecture. The technical quality of the paper seems good, with R1 commenting on the performance relative to SOTA seq2seq based methods and R3 commenting on the benefits of using more plausible constraints. The problem of using data with complex structure is highly relevant for ICLR. However, the novelty was deemed on the low side. As a very competitive conference, this is one of the key aspects necessary for successful ICLR papers. All reviewers agree that the novelty is too low for the current (high) bar of ICLR. """
438,"""FLUID FLOW MASS TRANSPORT FOR GENERATIVE NETWORKS""","['generative network', 'optimal mass transport', 'gaussian mixture', 'model matching']","""Generative Adversarial Networks have been shown to be powerful tools for generating content resulting in them being intensively studied in recent years. Training these networks requires maximizing a generator loss and minimizing a discriminator loss, leading to a difficult saddle point problem that is slow and difficult to converge. Motivated by techniques in the registration of point clouds and the fluid flow formulation of mass transport, we investigate a new formulation that is based on strict minimization, without the need for the maximization. This formulation views the problem as a matching problem rather than an adversarial one, and thus allows us to quickly converge and obtain meaningful metrics in the optimization path.""","""The submission is concerned with providing a transport based formulation for generative modeling in order to avoid the standard max/min optimization challenge of GANs. The authors propose representing the divergence with a fluid flow model, the solution of which can be found by discretizing the space, resulting in an alignment of high dimensional point clouds. The authors disagreed about the novelty and clarity of the work, but they did agree that the empirical and theoretical support was lacking, and that the paper could be substantially improved through better validation and better results - in particular, the approach struggles with MNIST digit generation compared to other methods. The recommendation is to not accept the submission at this time."""
439,"""DIME: AN INFORMATION-THEORETIC DIFFICULTY MEASURE FOR AI DATASETS""","['Information Theory', 'Fano’s Inequality', 'Difficulty Measure', 'Donsker-Varadhan Representation', 'Theory']","""Evaluating the relative difficulty of widely-used benchmark datasets across time and across data modalities is important for accurately measuring progress in machine learning. To help tackle this problem, we proposeDIME, an information-theoretic DIfficulty MEasure for datasets, based on conditional entropy estimation of the sample-label distribution. Theoretically, we prove a model-agnostic and modality-agnostic lower bound on the 0-1 error by extending Fanos inequality to the common supervised learning scenario where labels are discrete and features are continuous. Empirically, we estimate this lower bound using a neural network to compute DIME. DIME can be decomposed into components attributable to the data distribution and the number of samples. DIME can also compute per-class difficulty scores. Through extensive experiments on both vision and language datasets, we show that DIME is well-aligned with empirically observed performance of state-of-the-art machine learning models. We hope that DIME can aid future dataset design and model-training strategies.""","""This paper proposes a measure of inherent difficulty of datasets. While reviewers agree that there are good ideas in this paper that is worth pursuing, several concerns has been risen by reviewers, which are mostly acknowledged by the authors. We look forward to seeing an improved version of this paper soon! """
440,"""Disentangling Trainability and Generalization in Deep Learning""","['NTK', 'NNGP', 'mean field theory', 'CNN', 'trainability and generalization', 'Gaussian process']","""A fundamental goal in deep learning is the characterization of trainability and generalization of neural networks as a function of their architecture and hyperparameters. In this paper, we discuss these challenging issues in the context of wide neural networks at large depths where we will see that the situation simplifies considerably. To do this, we leverage recent advances that have separately shown: (1) that in the wide network limit, random networks before training are Gaussian Processes governed by a kernel known as the Neural Network Gaussian Process (NNGP) kernel, (2) that at large depths the spectrum of the NNGP kernel simplifies considerably and becomes ``weakly data-dependent'', and (3) that gradient descent training of wide neural networks is described by a kernel called the Neural Tangent Kernel (NTK) that is related to the NNGP. Here we show that by combining the in the large depth limit the spectrum of the NTK simplifies in much the same way as that of the NNGP kernel. By analyzing this spectrum, we arrive at a precise characterization of trainability and generalization across a range of architectures including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs). We find that there are large regions of hyperparameter space where networks will train but will fail to generalize, in contrast with several recent results. By comparing CNNs with- and without-global average pooling, we show that CNNs without average pooling have very nearly identical learning dynamics to FCNs while CNNs with pooling contain a correction that alters its generalization performance. We perform a thorough empirical investigation of these theoretical results and finding excellent agreement on real datasets.""","""The paper investigates the trainability and generalization of deep networks as a function of hyperparameters/architecture, while focusing on wide nets of large depth; it aims to characterize regions of hyperparameter space where networks generalize well vs where they do not; empirical observations are demonstrated to support theoretical results. However, all reviewers agree that, while the topic of the paper is important and interesting, more work is required to improve the readability and clarify the exposition to support the proposed theoretical results. """
441,"""Representation Learning with Multisets""","['multisets', 'fuzzy sets', 'permutation invariant', 'representation learning', 'containment', 'partial order', 'clustering']","""We study the problem of learning permutation invariant representations that can capture containment relations. We propose training a model on a novel task: predicting the size of the symmetric difference between pairs of multisets, sets which may contain multiple copies of the same object. With motivation from fuzzy set theory, we formulate both multiset representations and how to predict symmetric difference sizes given these representations. We model multiset elements as vectors on the standard simplex and multisets as the summations of such vectors, and we predict symmetric difference as the l1-distance between multiset representations. We demonstrate that our representations more effectively predict the sizes of symmetric differences than DeepSets-based approaches with unconstrained object representations. Furthermore, we demonstrate that the model learns meaningful representations, mapping objects of different classes to different standard basis vectors.""","""While the reviewers appreciated the problem to learn a multiset representation, two reviewers found the technical contribution to be minor, as well as limited experiments. The rebuttal and revision addressed concerns about the motivation of the approach, but the experimental issues remain. The paper would likely substantially improve with additional experiments."""
442,"""Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming""","['deep learning', 'object detection', 'robustness', 'neural networks', 'data augmentation', 'autonomous driving']","""The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving. We here provide an easy-to-use benchmark to assess how object detection models perform when image quality degrades. The three resulting benchmark datasets, termed PASCAL-C, COCO-C and Cityscapes-C, contain a large variety of image corruptions. We show that a range of standard object detection models suffer a severe performance loss on corrupted images (down to 30-60% of the original performance). However, a simple data augmentation trick - stylizing the training images - leads to a substantial increase in robustness across corruption type, severity and dataset. We envision our comprehensive benchmark to track future progress towards building robust object detection models. Benchmark, code and data are available at: (hidden for double blind review)""","""This paper proposes a benchmark for assessing the impact of image quality degradation (e.g. simulated fog, snow, frost) on the performance of object detection models. The authors introduce corrupted versions of popular object detection datasets, namely PASCAL-C, COCO-C and Cityscapes-C, and an evaluation protocol which reveals that the current models are not robust to such corruptions (losing as much as 60% of the performance). The authors then show that a simple data augmentation scheme significantly improves robustness. The reviewers agree that the manuscript is well written and that the proposed benchmark reveals major drawbacks of current detection models. However, two critical issues with the paper paper remain, namely lack of novelty in light of Geirhos et al., and how to actually use this benchmark in practice. I will hence recommend the rejection of this paper in the current state. Nevertheless, we encourage the authors to address the raised shortcomings (the new experiments reported in the rebuttal are a good starting point). """
443,"""Learning Surrogate Losses""","['Surrogate losses', 'Non-differentiable losses']","""The minimization of loss functions is the heart and soul of Machine Learning. In this paper, we propose an off-the-shelf optimization approach that can seamlessly minimize virtually any non-differentiable and non-decomposable loss function (e.g. Miss-classification Rate, AUC, F1, Jaccard Index, Mathew Correlation Coefficient, etc.). Our strategy learns smooth relaxation versions of the true losses by approximating them through a surrogate neural network. The proposed loss networks are set-wise models which are invariant to the order of mini-batch instances. Ultimately, the surrogate losses are learned jointly with the prediction model via bilevel optimization. Empirical results on multiple datasets with diverse real-life loss functions compared with state-of-the-art baselines demonstrate the efficiency of learning surrogate losses.""","""Unfortunately, this was a borderline paper that generated disagreement among the reviewers. After high level round of additional deliberation it was decided that this paper does not yet meet the standard for acceptance. The paper proposes a potentially interesting approach to learning surrogates for non-differentiable and non-decomposable loss functions. However, the work is a bit shallow technically, as any supporting theoretical justification is supplied by pointing to other work. The paper would be stronger with a more serious and comprehensive analysis. The reviewers criticized the lack of clarity in the technical exposition, which the authors attempted to mitigate in the rebuttal/revision process. The paper would benefit from additional clarity and systematic presentation of complete details to allow reproduction."""
444,"""You CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings""","['knowledge graph embeddings', 'hyperparameter optimization']","""Knowledge graph embedding (KGE) models learn algebraic representations of the entities and relations in a knowledge graph. A vast number of KGE techniques for multi-relational link prediction have been proposed in the recent literature, often with state-of-the-art performance. These approaches differ along a number of dimensions, including different model architectures, different training strategies, and different approaches to hyperparameter optimization. In this paper, we take a step back and aim to summarize and quantify empirically the impact of each of these dimensions on model performance. We report on the results of an extensive experimental study with popular model architectures and training strategies across a wide range of hyperparameter settings. We found that when trained appropriately, the relative performance differences between various model architectures often shrinks and sometimes even reverses when compared to prior results. For example, RESCAL~\citep{nickel2011three}, one of the first KGE models, showed strong performance when trained with state-of-the-art techniques; it was competitive to or outperformed more recent architectures. We also found that good (and often superior to prior studies) model configurations can be found by exploring relatively few random samples from a large hyperparameter space. Our results suggest that many of the more advanced architectures and techniques proposed in the literature should be revisited to reassess their individual benefits. To foster further reproducible research, we provide all our implementations and experimental results as part of the open source LibKGE framework.""","""The authors analyze knowledge graph embedding models for multi-relational link predictions. Three reviewers like the work and recommend acceptance. The paper further received several positive comments from the public. This is solid work and should be accepted."""
445,"""Scalable Model Compression by Entropy Penalized Reparameterization""","['deep learning', 'model compression', 'computer vision', 'information theory']","""We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a latent space, amounting to a reparameterization. This space is equipped with a learned probability model, which is used to impose an entropy penalty on the parameter representation during training, and to compress the representation using a simple arithmetic coder after training. Classification accuracy and model compressibility is maximized jointly, with the bitrateaccuracy trade-off specified by a hyperparameter. We evaluate the method on the MNIST, CIFAR-10 and ImageNet classification benchmarks using six distinct model architectures. Our results show that state-of-the-art model compression can be achieved in a scalable and general way without requiring complex procedures such as multi-stage training.""","""The paper describes a simple method for neural network compression by applying Shannon-type encoding. This is a fresh and nice idea, as noted by reviewers. A disadvantage is that the architectures on ImageNet are not the most efficient ones. Also, the review misses several important works on low-rank factorization of weights for the compression (Lebedev et. al, Novikov et. al). But overall, a good paper."""
446,"""Better Knowledge Retention through Metric Learning""","['metric learning', 'continual learning', 'catastrophic forgetting']","""In a continual learning setting, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new categories. While deep neural nets have achieved resounding success in the classical setting, they are known to forget about knowledge acquired in prior episodes of learning if the examples encountered in the current episode of learning are drastically different from those encountered in prior episodes. This makes deep neural nets ill-suited to continual learning. In this paper, we propose a new model that can both leverage the expressive power of deep neural nets and is resilient to forgetting when new categories are introduced. We demonstrate an improvement in terms of accuracy on original classes compared to a vanilla deep neural net.""","""Catastrophic forgetting in neural networks is a real problem, and this paper suggests a mechanism for avoiding this using a k-nearest neighbor mechanism in the final layer. The reason is that the layers below the last layer should not change significantly when very different data is introduced. While the idea is interesting none of the reviewers is entirely convinced about the execution and empirical tests, which had partially inconclusive. The reviewers had a number of questions, which were only partially satisfactorily answered. While some of the reviewers had less familiarity with the specific research topic, the seemingly most knowledgeable reviewer does not think the paper is ready for publication. On balance, I think the paper cannot be accepted in its current state. The idea is interesting, but needs more work."""
447,"""Dual-module Inference for Efficient Recurrent Neural Networks""","['memory-efficient RNNs', 'dynamic execution', 'computation skipping']","""Using Recurrent Neural Networks (RNNs) in sequence modeling tasks is promising in delivering high-quality results but challenging to meet stringent latency requirements because of the memory-bound execution pattern of RNNs. We propose a big-little dual-module inference to dynamically skip unnecessary memory access and computation to speedup RNN inference. Leveraging the error-resilient feature of nonlinear activation functions used in RNNs, we propose to use a lightweight little module that approximates the original RNN layer, which is referred to as the big module, to compute activations of the insensitive region that are more error-resilient. The expensive memory access and computation of the big module can be reduced as the results are only used in the sensitive region. Our method can reduce the overall memory access by 40% on average and achieve 1.54x to 1.75x speedup on CPU-based server platform with negligible impact on model quality.""","""This paper presents an efficient RNN architecture that dynamically switches big and little modules during inference. In the experiments, authors demonstrate that the proposed method achieves favorable speed up compared to baselines, and the contribution is orthogonal to weight pruning. All reviewers agree that the paper is well-written and that the proposed method is easy to understand and reasonable. However, its methodological contribution is limited because the core idea is essentially the same as distillation, and dynamically gating the modules is a common technique in general. Moreover, I agree with the reviewers that the method should be compared with more other state-of-the-art methods in this context. Accelerating or compressing DNNs are intensively studied topics and there are many approaches other than weight pruning, as authors also mention in the paper. As the possible contribution of the paper is more on the empirical side, it is necessary to thoroughly compare with other possible approaches to show that the proposed method is really a good solution in practice. For these reasons, Id like to recommend rejection. """
448,"""SNOW: Subscribing to Knowledge via Channel Pooling for Transfer & Lifelong Learning of Convolutional Neural Networks""","['channel pooling', 'efficient training and inferencing', 'lifelong learning', 'transfer learning', 'multi task']","""SNOW is an efficient learning method to improve training/serving throughput as well as accuracy for transfer and lifelong learning of convolutional neural networks based on knowledge subscription. SNOW selects the top-K useful intermediate feature maps for a target task from a pre-trained and frozen source model through a novel channel pooling scheme, and utilizes them in the task-specific delta model. The source model is responsible for generating a large number of generic feature maps. Meanwhile, the delta model selectively subscribes to those feature maps and fuses them with its local ones to deliver high accuracy for the target task. Since a source model takes part in both training and serving of all target tasks in an inference-only mode, one source model can serve multiple delta models, enabling significant computation sharing. The sizes of such delta models are fractional of the source model, thus SNOW also provides model-size efficiency. Our experimental results show that SNOW offers a superior balance between accuracy and training/inference speed for various image classification tasks to the existing transfer and lifelong learning practices.""","""This paper proposes a method, SNOW, for improving the speed of training and inference for transfer and lifelong learning by subscribing the target delta model to the knowledge of source pretrained model via channel pooling. Reviewers and AC agree that this paper is well written, with simple but sound technique towards an important problem and with promising empirical performance. The main critique is that the approach can only tackle transfer learning while failing in the lifelong setting. Authors provided convincing feedbacks on this key point. Details requested by the reviewers were all well addressed in the revision. Hence I recommend acceptance."""
449,"""Unsupervised Distillation of Syntactic Information from Contextualized Word Representations""","['dismantlement', 'contextualized word representations', 'language models', 'representation learning']","""Contextualized word representations, such as ELMo and BERT, were shown to perform well on a various of semantic and structural (syntactic) task. In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors, that discards the lexical semantics, but keeps the structural information. To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use metric-learning approach to learn a transformation that emphasizes the structural component that is encoded in the vectors. We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics. Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in few-shot parsing setting.""","""This paper aims to disentangle semantics and syntax inside of popular contextualized word embedding models. They use the model to generate sentences which are structurally similar but semantically different. This paper generated a lot of discussion. The reviewers do like the method for generating structurally similar sentences, and the triplet loss. They felt the evaluation methods were clever. However, one reviewer raised several issues. First, they thought the idea of syntax had not been well defined. They also thought the evaluation did not support the claims. The reviewer also argued very hard for the need to compare performance to SOTA models. The authors argued that beating SOTA is not the goal of their work, rather it is to understand what SOTA models are doing. The reviewers also argue that nearest neighbors is not a good method for evaluating the syntactic information in the representations. I hope all of the comments of the reviewers will help improve the paper as it is revised for a future submission."""
450,"""Accelerating Reinforcement Learning Through GPU Atari Emulation""","['GPU', 'reinforcement learning']","""We introduce CuLE (CUDA Learning Environment), a CUDA port of the Atari Learning Environment (ALE) which is used for the development of deep reinforcement algorithms. CuLE overcomes many limitations of existing CPU-based emulators and scales naturally to multiple GPUs. It leverages GPU parallelization to run thousands of games simultaneously and it renders frames directly on the GPU, to avoid the bottleneck arising from the limited CPU-GPU communication bandwidth. CuLE generates up to 155M frames per hour on a single GPU, a finding previously achieved only through a cluster of CPUs. Beyond highlighting the differences between CPU and GPU emulators in the context of reinforcement learning, we show how to leverage the high throughput of CuLE by effective batching of the training data, and show accelerated convergence for A2C+V-trace. CuLE is available at [hidden URL].""","""The paper presented a detailed discussion on the implementation of a library emulating Atari games on GPU for efficient reinforcement learning. The analysis is very thoroughly done. The major concern is whether this paper is a good fit to this conference. The developed library would be useful to researchers and the discussion is interesting with respect to system design and implementation, but the technical depth seems not sufficient."""
451,"""NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search""","['Neural Architecture Search', 'Deep Learning', 'Computer Vision']","""One-shot neural architecture search (NAS) has played a crucial role in making NAS methods computationally feasible in practice. Nevertheless, there is still a lack of understanding on how these weight-sharing algorithms exactly work due to the many factors controlling the dynamics of the process. In order to allow a scientific study of these components, we introduce a general framework for one-shot NAS that can be instantiated to many recently-introduced variants and introduce a general benchmarking framework that draws on the recent large-scale tabular benchmark NAS-Bench-101 for cheap anytime evaluations of one-shot NAS methods. To showcase the framework, we compare several state-of-the-art one-shot NAS methods, examine how sensitive they are to their hyperparameters and how they can be improved by tuning their hyperparameters, and compare their performance to that of blackbox optimizers for NAS-Bench-101.""","""The authors present a new benchmark for architecture search. Reviews were somewhat mixed, but also with mixed confidence scores. I recommend acceptance as poster - and encourage the authors to also cite pseudo-url"""
452,"""Customizing Sequence Generation with Multi-Task Dynamical Systems""","['Time-series modelling', 'Dynamical systems', 'RNNs', 'Multi-task learning']","""Dynamical system models (including RNNs) often lack the ability to adapt the sequence generation or prediction to a given context, limiting their real-world application. In this paper we show that hierarchical multi-task dynamical systems (MTDSs) provide direct user control over sequence generation, via use of a latent code z that specifies the customization to the individual data sequence. This enables style transfer, interpolation and morphing within generated sequences. We show the MTDS can improve predictions via latent code interpolation, and avoid the long-term performance degradation of standard RNN approaches.""","""This work proposes a dynamical systems model to allow the user to better control sequence generation via the latent z. Reviewers all agreed the that the proposed method is quite interesting. However, reviewers also felt that current evaluations were weak and were ultimately unconvinced by the author rebuttal. I recommend the authors resubmit with a stronger set of experiments as suggested by Reviewers 2 and 3."""
453,"""Domain Adaptive Multibranch Networks""","['Domain Adaptation', 'Computer Vision']","""We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive to a common feature representation effective for recognition. To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others. This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared. As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy. Furthermore, it allows us to handle any number of domains simultaneously.""","""Although some criticism remains for experiments, I suggest to accept this paper."""
454,"""Semi-supervised Learning by Coaching""","['semi-supervised', 'teacher', 'student', 'label propagation', 'image classification']","""Recent semi-supervised learning (SSL) methods often have a teacher to train a student in order to propagate labels from labeled data to unlabeled data. We argue that a weakness of these methods is that the teacher does not learn from the students mistakes during the course of students learning. To address this weakness, we introduce Coaching, a framework where a teacher generates pseudo labels for unlabeled data, from which a student will learn and the students performance on labeled data will be used as reward to train the teacher using policy gradient. Our experiments show that Coaching significantly improves over state-of-the-art SSL baselines. For instance, on CIFAR-10, with only 4,000 labeled examples, a WideResNet-28-2 trained by Coaching achieves 96.11% accuracy, which is better than 94.9% achieved by the same architecture trained with 45,000 labeled. On ImageNet with 10% labeled examples, Coaching trains a ResNet-50 to 72.94% top-1 accuracy, comfortably outperforming the existing state-of-the-art by more than 4%. Coaching also scales successfully to the high data regime with full ImageNet. Specifically, with additional 9 million unlabeled images from OpenImages, Coaching trains a ResNet-50 to 82.34% top-1 accuracy, setting a new state-of-the-art for the architecture on ImageNet without using extra labeled data.""","""Authors propose a new method of semi-supervised learning and provide empirical results. Reviewers found the presentation of the method confusing and poorly motivated. Despite the rebuttal, reviewers still did not find clarity on how or why the method works as well as it does."""
455,"""Hierarchical Graph Matching Networks for Deep Graph Similarity Learning""","['Graph Neural Network', 'Graph Matching Network', 'Graph Similarity Learning']","""While the celebrated graph neural networks yields effective representations for individual nodes of a graph, there has been relatively less success in extending to deep graph similarity learning. Recent work has considered either global-level graph-graph interactions or low-level node-node interactions, ignoring the rich cross-level interactions between parts of a graph and a whole graph. In this paper, we propose a Hierarchical Graph Matching Network (HGMN) for computing the graph similarity between any pair of graph-structured objects. Our model jointly learns graph representations and a graph matching metric function for computing graph similarity in an end-to-end fashion. The proposed HGMN model consists of a multi-perspective node-graph matching network for effectively learning cross-level interactions between parts of a graph and a whole graph, and a siamese graph neural network for learning global-level interactions between two graphs. Our comprehensive experiments demonstrate that our proposed HGMN consistently outperforms state-of-the-art graph matching networks baselines for both classification and regression tasks. ""","""The submission proposes an architecture to learn a similarity metric for graph matching. The architecture uses node-graph information in order to learn a more expressive, multi-level similarity score. The hierarchical approach is empirically validated on a limited set of graphs for which pairwise matching information is available and is shown to outperform other methods for classification and regression tasks. The reviewers were divided in their scores for this paper, but all noted that the approach was somewhat incremental and empirically motivated, without adequate analysis, theoretical justification, or extensive benchmark validation. Although the approach has value, more work is needed to support the method fully. Recommendation is to reject at this time."""
456,"""Towards Effective 2-bit Quantization: Pareto-optimal Bit Allocation for Deep CNNs Compression""",[],"""State-of-the-art quantization methods can compress deep neural networks down to 4 bits without losing accuracy. However, when it comes to 2 bits, the performance drop is still noticeable. One problem in these methods is that they assign equal bit rate to quantize weights and activations in all layers, which is not reasonable in the case of high rate compression (such as 2-bit quantization), as some of layers in deep neural networks are sensitive to quantization and performing coarse quantization on these layers can hurt the accuracy. In this paper, we address an important problem of how to optimize the bit allocation of weights and activations for deep CNNs compression. We first explore the additivity of output error caused by quantization and find that additivity property holds for deep neural networks which are continuously differentiable in the layers. Based on this observation, we formulate the optimal bit allocation problem of weights and activations in a joint framework and propose a very efficient method to solve the optimization problem via Lagrangian Formulation. Our method obtains excellent results on deep neural networks. It can compress deep CNN ResNet-50 down to 2 bits with only 0.7% accuracy loss. To the best our knowledge, this is the first paper that reports 2-bit results on deep CNNs without hurting the accuracy.""","""This works presents a method for inferring the optimal bit allocation for quantization of weights and activations in CNNs. The formulation is sound and the experiments are complete. However, the main concern is that the paper is very similar to a recent work by the authors, which is not cited."""
457,"""Shifted Randomized Singular Value Decomposition""","['SVD', 'PCA', 'Randomized Algorithms']","""We extend the randomized singular value decomposition (SVD) algorithm (Halko et al., 2011) to estimate the SVD of a shifted data matrix without explicitly constructing the matrix in the memory. With no loss in the accuracy of the original algorithm, the extended algorithm provides for a more efficient way of matrix factorization. The algorithm facilitates the low-rank approximation and principal component analysis (PCA) of off-center data matrices. When applied to different types of data matrices, our experimental results confirm the advantages of the extensions made to the original algorithm.""","""The proposed algorithm is found to be a straightforward extension of the previous work, which is not sufficient to warrant publication in ICLR2020."""
458,"""Graph Constrained Reinforcement Learning for Natural Language Action Spaces""","['natural language generation', 'deep reinforcement learning', 'knowledge graphs', 'interactive fiction']","""Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language. They are ideal environments for studying how to extend reinforcement learning agents to meet the challenges of natural language understanding, partial observability, and action generation in combinatorially-large text-based action spaces. We present KG-A2C, an agent that builds a dynamic knowledge graph while exploring and generates actions using a template-based action space. We contend that the dual uses of the knowledge graph to reason about game state and to constrain natural language generation are the keys to scalable exploration of combinatorially large natural language actions. Results across a wide variety of IF games show that KG-A2C outperforms current IF agents despite the exponential increase in action space size.""","""This paper applies reinforcement learning to text adventure games by using knowledge graphs to constrain the action space. This is an exciting problem with relatively little work performed on it. Reviews agree that this is an interesting paper, well written, with good results. There are some concerns about novelty but general agreement that the paper should be accepted. I therefore recommend acceptance."""
459,"""Accelerating First-Order Optimization Algorithms""","['Neural Networks', 'Gradient Descent', 'First order optimization']","""Several stochastic optimization algorithms are currently available. In most cases, selecting the best optimizer for a given problem is not an easy task. Therefore, instead of looking for yet another absolute best optimizer, accelerating existing ones according to the context might prove more effective. This paper presents a simple and intuitive technique to accelerate first-order optimization algorithms. When applied to first-order optimization algorithms, it converges much more quickly and achieves lower function/loss values when compared to traditional algorithms. The proposed solution modifies the update rule, based on the variation of the direction of the gradient during training. Several tests were conducted with SGD, AdaGrad, Adam and AMSGrad on three public datasets. Results clearly show that the proposed technique, has the potential to improve the performance of existing optimization algorithms.""","""All reviewers recommend rejection, and the authors have not provided a response. """
460,"""Hamiltonian Generative Networks""","['Hamiltonian dynamics', 'normalising flows', 'generative model', 'physics']","""The Hamiltonian formalism plays a central role in classical and quantum physics. Hamiltonians are the main tool for modelling the continuous time evolution of systems with conserved quantities, and they come equipped with many useful properties, like time reversibility and smooth interpolation in time. These properties are important for many machine learning problems - from sequence prediction to reinforcement learning and density modelling - but are not typically provided out of the box by standard tools such as recurrent neural networks. In this paper, we introduce the Hamiltonian Generative Network (HGN), the first approach capable of consistently learning Hamiltonian dynamics from high-dimensional observations (such as images) without restrictive domain assumptions. Once trained, we can use HGN to sample new trajectories, perform rollouts both forward and backward in time, and even speed up or slow down the learned dynamics. We demonstrate how a simple modification of the network architecture turns HGN into a powerful normalising flow model, called Neural Hamiltonian Flow (NHF), that uses Hamiltonian dynamics to model expressive densities. Hence, we hope that our work serves as a first practical demonstration of the value that the Hamiltonian formalism can bring to machine learning. More results and video evaluations are available at: pseudo-url""","""The paper introduces a novel way of learning Hamiltonian dynamics with a generative network. The Hamiltonian generative network (HGN) learns the dynamics directly from data by embedding observations in a latent space, which is then transformed into a phase space describing the system's initial (abstract) position and momentum. Using a second network, the Hamiltonian network, the position and momentum are reduced to a scalar, interpreted as the Hamiltonian of the system, which can then be used to do rollouts in the phase space using techniques known from, e.g., Hamiltonian Monte Carlo sampling. An important ingredient of the paper is the fact that no access to the derivatives of the Hamiltonian is needed. The reviewers agree that this paper is a good contribution, and I recommend acceptance."""
461,"""Latent Normalizing Flows for Many-to-Many Cross-Domain Mappings""",[],"""Learned joint representations of images and text form the backbone of several important cross-domain tasks such as image captioning. Prior work mostly maps both domains into a common latent representation in a purely supervised fashion. This is rather restrictive, however, as the two domains follow distinct generative processes. Therefore, we propose a novel semi-supervised framework, which models shared information between domains and domain-specific information separately. The information shared between the domains is aligned with an invertible neural network. Our model integrates normalizing flow-based priors for the domain-specific information, which allows us to learn diverse many-to-many mappings between the two domains. We demonstrate the effectiveness of our model on diverse tasks, including image captioning and text-to-image synthesis.""","""This paper addresses the problem of many-to-many cross-domain mapping tasks with a double variational auto-encoder architecture, making use of the normalizing flow-based priors. Reviewers and AC unanimously agree that it is a well written paper with a solid approach to a complicated real problem supported by good experimental results. There are still some concerns with confusing notations, and with human study to further validate their approach, which should be addressed in a future version. I recommend acceptance."""
462,"""Sticking to the Facts: Confident Decoding for Faithful Data-to-Text Generation""","['Natural Language Processing', 'Text Generation', 'Data-to-Text Generation', 'Hallucination', 'Calibration', 'Variational Bayes']","""Neural conditional text generation systems have achieved significant progress in recent years, showing the ability to produce highly fluent text. However, the inherent lack of controllability in these systems allows them to hallucinate factually incorrect phrases that are unfaithful to the source, making them often unsuitable for many real world systems that require high degrees of precision. In this work, we propose a novel confidence oriented decoder that assigns a confidence score to each target position. This score is learned in training using a variational Bayes objective, and can be leveraged at inference time using a calibration technique to promote more faithful generation. Experiments on a structured data-to-text dataset -- WikiBio -- show that our approach is more faithful to the source than existing state-of-the-art approaches, according to both automatic metrics and human evaluation.""","""This paper proposes to improve the faithfulness of data-to-text generation models, through an attention-based confidence measure and a variational approach for learning the model. There is some reviewer disagreement on this paper. All agree that the problem is important and ideas interesting, while some reviewers feel that the methods are insufficiently justified and/or the results unconvincing. In addition, there is not much technical novelty here from a machine learning perspective; the contribution is to a specific task. Overall I think this paper would fit in much better in an NLP conference/journal."""
463,"""Forecasting Deep Learning Dynamics with Applications to Hyperparameter Tuning""",[],"""Well-performing deep learning models have enormous impact, but getting them to perform well is complicated, as the model architecture must be chosen and a number of hyperparameters tuned. This requires experimentation, which is timeconsuming and costly. We propose to address the problem of hyperparameter tuning by learning to forecast the training behaviour of deep learning architectures. Concretely, we introduce a forecasting model that, given a hyperparameter schedule (e.g., learning rate, weight decay) and a history of training observations (such as loss and accuracy), predicts how the training will continue. Naturally, forecasting is much faster and less expensive than running actual deep learning experiments. The main question we study is whether the forecasting model is good enough to be of use - can it indeed replace real experiments? We answer this affirmatively in two ways. For one, we show that the forecasted curves are close to real ones. On the practical side, we apply our forecaster to learn hyperparameter tuning policies. We experiment on a version of ResNet on CIFAR10 and on Transformer in a language modeling task. The policies learned using our forecaster match or exceed the ones learned in real experiments and in one case even the default schedules discovered by researchers. We study the learning rate schedules created using the forecaster are find that they are not only effective, but also lead to interesting insights.""","""This paper trains a transformer to extrapolate learning curves, and uses this in a model-based RL framework to automatically tune hyperparameters. This might be a good approach, but it's hard to know because the experiments don't include direct comparisons against existing hyperparameter optimization/adaptation techniques (either the ones based on extrapolating training curves, or standard ones like BayesOpt or PBT). The presentation is also fairly informal, and it's not clear if a reader would be able to reproduce the results. Overall, I think there's significant cleanup and additional experiments needed before publication in ICLR. """
464,"""CZ-GEM: A FRAMEWORK FOR DISENTANGLED REPRESENTATION LEARNING""","['disentangled representation learning', 'gan', 'generative model', 'simulator']","""Learning disentangled representations of data is one of the central themes in unsupervised learning in general and generative modelling in particular. In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate. To this end, we present a hierarchical model and a training method (CZ-GEM) that leverages some of the recent developments in likelihood-based and likelihood-free generative models. We show that by formulation, CZ-GEM introduces the right inductive biases that ensure the disentanglement of the control from the noise variables, while also keeping the components of the control variate disentangled. This is achieved without compromising on the quality of the generated samples. Our approach is simple, general, and can be applied both in supervised and unsupervised settings.""","""The paper addresses the problem of learning disentangled representations in supervised and unsupervised settings. In general, the problem of representation learning in of course a core problem in ICLR. However, in the set-up described by the authors, R2 commented on the the set-up for supervised being a bit unnatural in as detailed labels need to be given (somewhat confusingly, the labels are called control variates in the paper). Several reviewers commented on the novelty of the paper being on the low side, with R2 commenting the contribution being fairly small, and R3 noting similarities to stackgan. There were also some comments on quality, and clarity. On the topic of technical quality, R2 did note that the authors present extensive results, but R3 mentions that the case for the disentanglement improving is not sufficiently supported. In terms of clarity, there was some initial confusing about e.g. the inference procedure, though the authors addressed these issues in the discussion. """
465,"""GRASPEL: GRAPH SPECTRAL LEARNING AT SCALE""","['Spectral graph theory', 'graph learning', 'data clustering', 't-SNE visualization']","""Learning meaningful graphs from data plays important roles in many data mining and machine learning tasks, such as data representation and analysis, dimension reduction, data clustering, and visualization, etc. In this work, we present a scalable spectral approach to graph learning from data. By limiting the precision matrix to be a graph Laplacian, our approach aims to estimate ultra-sparse weighted graphs and has a clear connection with the prior graphical Lasso method. By interleaving nearly-linear time spectral graph sparsification, coarsening and embedding procedures, ultra-sparse yet spectrally-stable graphs can be iteratively constructed in a highly-scalable manner. Compared with prior graph learning approaches that do not scale to large problems, our approach is highly-scalable for constructing graphs that can immediately lead to substantially improved computing efficiency and solution quality for a variety of data mining and machine learning applications, such as spectral clustering (SC), and t-Distributed Stochastic Neighbor Embedding (t-SNE). ""","""This paper proposes a scalable approach for graph learning from data. The reviewers think the approach appears heuristic and it is not clear the algorithm is optimizing the proposed sparse graph recovery objective. """
466,"""Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks""","['Adversarial Robustness', 'Generalization', 'Neural Computing', 'Deep Learning']","""Current artificial neural networks (ANNs) can perform and excel at a variety of tasks ranging from image classification to spam detection through training on large datasets of labeled data. While the trained network may perform well on similar testing data, inputs that differ even slightly from the training data may trigger unpredictable behavior. Due to this limitation, it is possible to design inputs with very small perturbations that can result in misclassification. These adversarial attacks present a security risk to deployed ANNs and indicate a divergence between how ANNs and humans perform classification. Humans are robust at behaving in the presence of noise and are capable of correctly classifying objects that are noisy, blurred, or otherwise distorted. It has been hypothesized that sleep promotes generalization of knowledge and improves robustness against noise in animals and humans. In this work, we utilize a biologically inspired sleep phase in ANNs and demonstrate the benefit of sleep on defending against adversarial attacks as well as in increasing ANN classification robustness. We compare the sleep algorithm's performance on various robustness tasks with two previously proposed adversarial defenses - defensive distillation and fine-tuning. We report an increase in robustness after sleep phase to adversarial attacks as well as to general image distortions for three datasets: MNIST, CUB200, and a toy dataset. Overall, these results demonstrate the potential for biologically inspired solutions to solve existing problems in ANNs and guide the development of more robust, human-like ANNs.""","""""Sleep"" is introduced as a way of increasing robustness in neural network training. To sleep, the network is converted into a spiking network and goes through phases of more and less intense activation. The results are quite good when it comes to defending against adversarial examples. Reviewers agree that the method is novel and interesting. Authors responded to the reviewers' questions (one of the reviewers had a quite extensive set of questions) satisfactorily, and improved the paper significantly in the process. I think the paper should be accepted on the grounds of novelty and good results."""
467,"""Utilizing Edge Features in Graph Neural Networks via Variational Information Maximization""","['Graph Neural Network', 'Edge Feature', 'Mutual Information']","""Graph Neural Networks (GNNs) broadly follow the scheme that the representation vector of each node is updated recursively using the message from neighbor nodes, where the message of a neighbor is usually pre-processed with a parameterized transform matrix. To make better use of edge features, we propose the Edge Information maximized Graph Neural Network (EIGNN) that maximizes the Mutual Information (MI) between edge features and message passing channels. The MI is reformulated as a differentiable objective via a variational approach. We theoretically show that the newly introduced objective enables the model to preserve edge information, and empirically corroborate the enhanced performance of MI-maximized models across a broad range of learning tasks including regression on molecular graphs and relation prediction in knowledge graphs.""","""This paper proposed an auxiliary loss based on mutual information for graph neural network. Such loss is to maximize the mutual information between edge representation and corresponding edge feature in GNN message passing function. GNN with edge features have already been proposed in the literature. Furthermore, the reviewers think the paper needs to improve further in terms of explain more clearly the motivation and rationale behind the method. """
468,"""Progressive Memory Banks for Incremental Domain Adaptation""","['natural language processing', 'domain adaptation']","""This paper addresses the problem of incremental domain adaptation (IDA) in natural language processing (NLP). We assume each domain comes one after another, and that we could only access data in the current domain. The goal of IDA is to build a unified model performing well on all the domains that we have encountered. We adopt the recurrent neural network (RNN) widely used in NLP, but augment it with a directly parameterized memory bank, which is retrieved by an attention mechanism at each step of RNN transition. The memory bank provides a natural way of IDA: when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the number of parameters, and thus the model capacity. We learn the new memory slots and fine-tune existing parameters by back-propagation. Experimental results show that our approach achieves significantly better performance than fine-tuning alone. Compared with expanding hidden states, our approach is more robust for old domains, shown by both empirical and theoretical results. Our model also outperforms previous work of IDA including elastic weight consolidation and progressive neural networks in the experiments.""","""This paper introduces an RNN based approach to incremental domain adaptation in natural language processing, where the RNN is progressively augmented with the parameterized memory bank which is shown to be better than expanding the RNN states. Reviewers and AC acknowledge that this paper is well written with interesting ideas and practical value. Domain adaptation in the incremental setting, where domains come in a streaming way with only the current one accessible, can find some realistic application scenarios. The proposed extensible attention mechanism is solid and works well on several NLP tasks. Several concerns were raised by the reviewers regarding the comparative and ablation studies, which were well resolved in the rebuttal. The authors are encouraged to generalize their approach to other application domains other than NLP to show the generality of their approach. I recommend acceptance."""
469,"""Variable Complexity in the Univariate and Multivariate Structural Causal Model""",[],"""We show that by comparing the individual complexities of univariante cause and effect in the Structural Causal Model, one can identify the cause and the effect, without considering their interaction at all. The entropy of each variable is ineffective in measuring the complexity, and we propose to capture it by an autoencoder that operates on the list of sorted samples. Comparing the reconstruction errors of the two autoencoders, one for each variable, is shown to perform well on the accepted benchmarks of the field. In the multivariate case, where one can ensure that the complexities of the cause and effect are balanced, we propose a new method that mimics the disentangled structure of the causal model. We extend the results of~\cite{Zhang:2009:IPC:1795114.1795190} to the multidimensional case, showing that such modeling is only likely in the direction of causality. Furthermore, the learned model is shown theoretically to perform the separation to the causal component and to the residual (noise) component. Our multidimensional method obtains a significantly higher accuracy than the literature methods.""","""The author response and revisions to the manuscript motivated two reviewers to increase their scores to weak accept. While these revisions increased the quality of the work, the overall assessment is just shy of the threshold for inclusion."""
470,"""Spectral Embedding of Regularized Block Models""","['Spectral embedding', 'regularization', 'block models', 'clustering']","""Spectral embedding is a popular technique for the representation of graph data. Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering. In this paper, we explain on a simple block model the impact of the complete graph regularization, whereby a constant is added to all entries of the adjacency matrix. Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers. We illustrate these results on both on both synthetic and real data, showing how regularization improves standard clustering scores. ""","""The paper proposes a nice and easy way to regularize spectral graph embeddings, and explains the effect through a nice set of experiments. Therefore, I recommend acceptance."""
471,"""Reweighted Proximal Pruning for Large-Scale Language Representation""","['Language Representation', 'Machine Learning', 'Deep Learning', 'Optimizer', 'Statistical Learning', 'Model Compression']","""Recently, pre-trained language representation flourishes as the mainstay of the natural language understanding community, e.g., BERT. These pre-trained language representations can create state-of-the-art results on a wide range of downstream tasks. Along with continuous significant performance improvement, the size and complexity of these pre-trained neural models continue to increase rapidly. Is it possible to compress these large-scale language representation models? How will the pruned language representation affect the downstream multi-task transfer learning objectives? In this paper, we propose Reweighted Proximal Pruning (RPP), a new pruning method specifically designed for a large-scale language representation model. Through experiments on SQuAD and the GLUE benchmark suite, we show that proximal pruned BERT keeps high accuracy for both the pre-training task and the downstream multiple fine-tuning tasks at high prune ratio. RPP provides a new perspective to help us analyze what large-scale language representation might learn. Additionally, RPP makes it possible to deploy a large state-of-the-art language representation model such as BERT on a series of distinct devices (e.g., online servers, mobile phones, and edge devices).""","""This paper proposes a novel pruning method for use with transformer text encoding models like BERT, and show that it can dramatically reduce the number of non-zero weights in a trained model while only slightly harming performance. This is one of the hardest cases in my pile. The topic is obviously timely and worthwhile. None of the reviewers was able to give a high-confidence assessment, but the reviews were all ultimately leaning positive. However, the reviewers didn't reach a clear consensus on the main strengths of the paper, even after some private discussion, and they raised many concerns. These concerns, taken together, make me doubt that the current paper represents a substantial, sound contribution to the model compression literature in NLP. I'm voting to reject, on the basis of: - Recurring concerns about missing strong baselines, which make it less clear that the new method is an ideal choice. - Relatively weak motivations for the proposed method (pruning a pre-trained model before fine-tuning) in the proposed application domain (mobile devices). - Recurring concerns about thin analysis."""
472,"""Deep Imitative Models for Flexible Inference, Planning, and Control""","['imitation learning', 'planning', 'autonomous driving']","""Imitation Learning (IL) is an appealing approach to learn desirable autonomous behavior. However, directing IL to achieve arbitrary goals is difficult. In contrast, planning-based algorithms use dynamics models and reward functions to achieve goals. Yet, reward functions that evoke desirable behavior are often difficult to specify. In this paper, we propose ""Imitative Models"" to combine the benefits of IL and goal-directed planning. Imitative Models are probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals. We derive families of flexible goal objectives, including constrained goal regions, unconstrained goal sets, and energy-based goals. We show that our method can use these objectives to successfully direct behavior. Our method substantially outperforms six IL approaches and a planning-based approach in a dynamic simulated autonomous driving task, and is efficiently learned from expert demonstrations without online data collection. We also show our approach is robust to poorly-specified goals, such as goals on the wrong side of the road.""","""This paper proposes to build an 'imitative model' to improve the performance for imitation learning. The main idea is to combine the model-based RL type of work to the imitation learning approach. The model is trained using a probabilistic method and can help the agent imitate goals that were previously not easy to achieve with previous works. Reviewers 2 and 3 strongly agree that the paper should be accepted. R3 has increased their score after the rebuttal, and the authors' response helped in this case. Based on reviewers score, I recommend to accept this paper."""
473,"""Supervised learning with incomplete data via sparse representations""","['Incomplete data', 'supervised learning', 'sparse representations']","""This paper addresses the problem of training a classifier on incomplete data and its application to a complete or incomplete test dataset. A supervised learning method is developed to train a general classifier, such as a logistic regression or a deep neural network, using only a limited number of observed entries, assuming sparse representations of data vectors on an unknown dictionary. The proposed method simultaneously learns the classifier, the dictionary and the corresponding sparse representations of each input data sample. A theoretical analysis is also provided comparing this method with the standard imputation approach, which consists on performing data completion followed by training the classifier based on their reconstructions. The limitations of this last ""sequential"" approach are identified, and a description of how the proposed new ""simultaneous"" method can overcome the problem of indiscernible observations is provided. Additionally, it is shown that, if it is possible to train a classifier on incomplete observations so that its reconstructions are well separated by a hyperplane, then the same classifier also correctly separates the original (unobserved) data samples. Extensive simulation results are presented on synthetic and well-known reference datasets that demonstrate the effectiveness of the proposed method compared to traditional data imputation methods.""","""This was a difficult paper to decide, given the strong disagreement between reviewer assessments. After the discussion it became clear that the paper tackles some well studied issues while neglecting to cite some relevant works. The significance and novelty of the contribution was directly challenged, yet I could not see a convincing case presented to mitigate these criticisms. The paper needs to do a better job of placing the work in the context of the existing literature, and establishing the significance and novelty of its main contributions."""
474,"""Transformer-XH: Multi-Evidence Reasoning with eXtra Hop Attention""","['Transformer-XH', 'multi-hop QA', 'fact verification', 'extra hop attention', 'structured modeling']","""Transformers have achieved new heights modeling natural language as a sequence of text tokens. However, in many real world scenarios, textual data inherently exhibits structures beyond a linear sequence such as trees and graphs; many tasks require reasoning with evidence scattered across multiple pieces of texts. This paper presents Transformer-XH, which uses eXtra Hop attention to enable intrinsic modeling of structured texts in a fully data-driven way. Its new attention mechanism naturally hops across the connected text sequences in addition to attending over tokens within each sequence. Thus, Transformer-XH better conducts joint multi-evidence reasoning by propagating information between documents and constructing global contextualized representations. On multi-hop question answering, Transformer-XH leads to a simpler multi-hop QA system which outperforms previous state-of-the-art on the HotpotQA FullWiki setting. On FEVER fact verification, applying Transformer-XH provides state-of-the-art accuracy and excels on claims whose verification requires multiple evidence.""","""This work examines a problem that is of considerable interest to the community and does a good job of presenting the work. The AC recommends acceptance."""
475,"""Optimal Strategies Against Generative Attacks""",[],"""Generative neural models have improved dramatically recently. With this progress comes the risk that such models will be used to attack systems that rely on sensor data for authentication and anomaly detection. Many such learning systems are installed worldwide, protecting critical infrastructure or private data against malfunction and cyber attacks. We formulate the scenario of such an authentication system facing generative impersonation attacks, characterize it from a theoretical perspective and explore its practical implications. In particular, we ask fundamental theoretical questions in learning, statistics and information theory: How hard is it to detect a ""fake reality""? How much data does the attacker need to collect before it can reliably generate nominally-looking artificial data? Are there optimal strategies for the attacker or the authenticator? We cast the problem as a maximin game, characterize the optimal strategy for both attacker and authenticator in the general case, and provide the optimal strategies in closed form for the case of Gaussian source distributions. Our analysis reveals the structure of the optimal attack and the relative importance of data collection for both authenticator and attacker. Based on these insights we design practical learning approaches and show that they result in models that are more robust to various attacks on real-world data.""","""This paper concerns the problem of defending against generative ""attacks"": that is, falsification of data for malicious purposes through the use of synthesized data based on ""leaked"" samples of real data. The paper casts the problem formally and assesses the problem of authentication in terms of the sample complexity at test time and the sample budget of the attacker. The authors prove a Nash equillibrium exists, derive a closed form for the special case of multivariate Gaussian data, and propose an algorithm called GAN in the Middle leveraging the developed principles, showing an implementation to perform better than authentication baselines and suggesting other applications. Reviewers were overall very positive, in agreement that the problem addressed is important and the contribution made is significant. Most criticisms were superficial. This is a dense piece of work, and presentation could still be improved. However this is clearly a significant piece of work addressing a problem of increasing importance, and is worthy of acceptance."""
476,"""Contextual Text Style Transfer""",[],"""In this paper, we introduce a new task, Contextual Text Style Transfer, to translate a sentence within a paragraph context into the desired style (e.g., informal to formal, offensive to non-offensive). Two new datasets, Enron-Context and Reddit-Context, are introduced for this new task, focusing on formality and offensiveness, respectively. Two key challenges exist in contextual text style transfer: 1) how to preserve the semantic meaning of the target sentence and its consistency with the surrounding context when generating an alternative sentence with a specific style; 2) how to deal with the lack of labeled parallel data. To address these challenges, we propose a Context-Aware Style Transfer (CAST) model, which leverages both parallel and non-parallel data for joint model training. For parallel training data, CAST uses two separate encoders to encode each input sentence and its surrounding context, respectively. The encoded feature vector, together with the target style information, are then used to generate the target sentence. A classifier is further used to ensure contextual consistency of the generated sentence. In order to lever-age massive non-parallel corpus and to enhance sentence encoder and decoder training, additional self-reconstruction and back-translation losses are introduced. Experimental results on Enron-Context and Reddit-Context demonstrate the effectiveness of the proposed model over state-of-the-art style transfer methods, across style accuracy, content preservation, and contextual consistency metrics.""","""The paper proposes a new style transfer task, contextual style transfer, which hypothesises that the document context of the sentence is important, as opposed to previous work which only looked at sentence context. A major contribution of the paper is the creation of two new crowd-sourced datasets, Enron-Context and Reddit-Context focussed on formality and offensiveness. The reviewers are skeptical that it was context that has really improved results on the style transfer tasks. The authors responded to all the reviewers but there was no further discussion. I feel like this paper has not convinced me or the reviewers of the strength of its contribution and, although interesting, I recommend for it to be rejected. """
477,"""A Mechanism of Implicit Regularization in Deep Learning""","['Implicit Regularization', 'Generalization', 'Deep Neural Network', 'Low Complexity']","""Despite a lot of theoretical efforts, very little is known about mechanisms of implicit regularization by which the low complexity contributes to generalization in deep learning. In particular, causality between the generalization performance, implicit regularization and nonlinearity of activation functions is one of the basic mysteries of deep neural networks (DNNs). In this work, we introduce a novel technique for DNNs called random walk analysis and reveal a mechanism of the implicit regularization caused by nonlinearity of ReLU activation. Surprisingly, our theoretical results suggest that the learned DNNs interpolate almost linearly between data points, which leads to the low complexity solutions in the over-parameterized regime. As a result, we prove that stochastic gradient descent can learn a class of continuously differentiable functions with generalization bounds of the order of pseudo-formula ( pseudo-formula : the number of samples). Furthermore, our analysis is independent of the kernel methods, including neural tangent kernels.""","""This paper analyzes a mechanism of the implicit regularization caused by nonlinearity of ReLU activation, and suggests that the learned DNNs interpolate almost linearly between data points, which leads to the low complexity solutions in the over-parameterized regime. The main objections include (1) some claims in this paper are not appropriate; (2) lack of proper comparison with prior work; and many other issues in the presentation. I agree with the reviewers evaluation and encourage the authors to improve this paper and resubmit to future conference. """
478,"""Spike-based causal inference for weight alignment""","['causal', 'inference', 'weight', 'transport', 'rdd', 'regression', 'discontinuity', 'design', 'cifar10', 'biologically', 'plausible']","""In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called ""weight transport problem"" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST and CIFAR-10. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.""","""All authors agree the paper is well written, and there is a good consensus on acceptance. The last reviewer was concerned about a lack of diversity in datasets, but this was addressed in the rebuttal."""
479,"""RISE and DISE: Two Frameworks for Learning from Time Series with Missing Data""","['Time Series', 'Missing Data', 'RNN']","""Time series with missing data constitute an important setting for machine learning. The most successful prior approaches for modeling such time series are based on recurrent neural networks that learn to impute unobserved values and then treat the imputed values as observed. We start by introducing Recursive Input and State Estimation (RISE), a general framework that encompasses such prior approaches as specific instances. Since RISE instances tend to suffer from poor long-term performance as errors are amplified in feedback loops, we propose Direct Input and State Estimation (DISE), a novel framework in which input and state representations are learned from observed data only. The key to DISE is to include time information in representation learning, which enables the direct modeling of arbitrary future time steps by effectively skipping over missing values, rather than imputing them, thus overcoming the error amplification encountered by RISE methods. We benchmark instances of both frameworks on two forecasting tasks, observing that DISE achieves state-of-the-art performance on both.""","""The paper attacks the important problem of learning time series models with missing data and proposes two learning frameworks, RISE and DISE, for this problem. The reviewers had several concerns about the paper and experimental setup and agree that this paper is not yet ready for publication. Please pay careful attention to the reviewer comments and particularly address the comments related to experimental design, clarity, and references to prior work while editing the paper."""
480,"""Efficient generation of structured objects with Constrained Adversarial Networks""","['deep generative models', 'generative adversarial networks', 'constraints']","""Despite their success, generative adversarial networks (GANs) cannot easily generate structured objects like molecules or game maps. The issue is that such objects must satisfy structural requirements (e.g., molecules must be chemically valid, game maps must guarantee reachability of the end goal) that are difficult to capture with examples alone. As a remedy, we propose constrained adversarial networks (CANs), which embed the constraints into the model during training by penalizing the generator whenever it outputs invalid structures. As in unconstrained GANs, new objects can be sampled straightforwardly from the generator, but in addition they satisfy the constraints with high probability. Our approach handles arbitrary logical constraints and leverages knowledge compilation techniques to efficiently evaluate the expected disagreement between the model and the constraints. This setup is further extended to hybrid logical-neural constraints for capturing complex requirements like graph reachability. An extensive empirical analysis on constrained images, molecules, and video game levels shows that CANs efficiently generate valid structures that are both high-quality and novel.""","""This paper develops ideas for enabling the data generation with GANs in the presence of structured constraints on the data manifold. This problem is interesting and quite relevant to the ICLR community. The reviewers raised concerns about the similarity to prior work (Xu et al '17), and missing comparisons to previous approaches that study this problem (e.g. Hu et al '18) that make it difficult to judge the significance of the work. Overall, the paper is slightly below the bar for acceptance."""
481,"""Continuous Meta-Learning without Tasks""","['Meta-learning', 'Continual learning', 'changepoint detection', 'Bayesian learning']","""Meta-learning is a promising strategy for learning to efficiently learn within new tasks, using data gathered from a distribution of tasks. However, the meta-learning literature thus far has focused on the task segmented setting, where at train-time, offline data is assumed to be split according to the underlying task, and at test-time, the algorithms are optimized to learn in a single task. In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task. We present meta-learning via online changepoint analysis (MOCA), an approach which augments a meta-learning algorithm with a differentiable Bayesian changepoint detection scheme. The framework allows both training and testing directly on time series data without segmenting it into discrete tasks. We demonstrate the utility of this approach on a nonlinear meta-regression benchmark as well as two meta-image-classification benchmarks.""","""In this paper the authors view meta-learning under a general, less studied viewpoint, which does not make the typical assumption that task segmentation is provided. In this context, change-point analysis is used as a tool to complement meta-learning in this expanded domain. The expansion of meta-learning in this more general and often more practical context is significant and the paper is generally well written. However, considering this particular (non)segmentation setting is not an entirely novel idea; for example the reviewers have already pointed out [1] (which the authors agreed to discuss), but also [2] is another relevant work. The authors are highly encouraged to incorporate results, or at least a discussion, with respect to at least [2]. It seems likely that inferring boundaries could be more powerful, but it is important to better motivate this for a final paper. Moreover, the paper could be strengthened by significantly expanding the discussion about practical usefulness of the approach. R3 provides a suggestion towards this direction, that is, to explore the performance in a situation where task segmentation is truly unavailable. [1] Rahaf et el. ""Task-Free Continual Learning"". [2] Riemer et al. ""Learning to learn without forgetting by maximizing transfer and minimizing interference"". """
482,"""Network Pruning for Low-Rank Binary Index""","['Pruning', 'Model compression', 'Index compression', 'low-rank', 'binary matrix decomposition']",""" Pruning is an efficient model compression technique to remove redundancy in the connectivity of deep neural networks (DNNs). A critical problem to represent sparse matrices after pruning is that if fewer bits are used for quantization and pruning rate is enhanced, then the amount of index becomes relatively larger. Moreover, an irregular index form leads to low parallelism for convolutions and matrix multiplications. In this paper, we propose a new network pruning technique that generates a low-rank binary index matrix to compress index data significantly. Specifically, the proposed compression method finds a particular fine-grained pruning mask that can be decomposed into two binary matrices while decompressing index data is performed by simple binary matrix multiplication. We also propose a tile-based factorization technique that not only lowers memory requirements but also enhances compression ratio. Various DNN models (including conv layers and LSTM layers) can be pruned with much fewer indices compared to previous sparse matrix formats while maintaining the same pruning rate.""","""The submission proposes a method to improve over a standard binary network pruning strategy by the inclusion of a structured matrix product to encourage network weight sparsification that can have better memory and computational properties. The idea is well motivated, but there were reviewer concerns about the quality of writing and in particular the quality of the experiments. The reviewers were unanimous that the paper is not suitable for acceptance at ICLR, and no rebuttal was provided."""
483,"""Lift-the-flap: what, where and when for context reasoning""","['contextual reasoning', 'visual recognition', 'human behavior', 'intelligent sampling']","""Context reasoning is critical in a wide variety of applications where current inputs need to be interpreted in the light of previous experience and knowledge. Both spatial and temporal contextual information play a critical role in the domain of visual recognition. Here we investigate spatial constraints (what image features provide contextual information and where they are located), and temporal constraints (when different contextual cues matter) for visual recognition. The task is to reason about the scene context and infer what a target object hidden behind a flap is in a natural image. To tackle this problem, we first describe an online human psychophysics experiment recording active sampling via mouse clicks in lift-the-flap games and identify clicking patterns and features which are diagnostic for high contextual reasoning accuracy. As a proof of the usefulness of these clicking patterns and visual features, we extend a state-of-the-art recurrent model capable of attending to salient context regions, dynamically integrating useful information, making inferences, and predicting class label for the target object over multiple clicks. The proposed model achieves human-level contextual reasoning accuracy, shares human-like sampling behavior and learns interpretable features for contextual reasoning.""","""The authors present the task lift-the-flap where an agent (artificial or human) is presented with a blurred image and a hidden item. The agent can de-blur the parts of the image by clicking on it. The authors introduce a model for this task (ClickNet) and they compare this against others. As reviewers point, this paper presents an interesting set of experiments and analyses. Overall, this type of work can be quite influential as it gives an alternative way to improve our models by unveiling human strategies and using those as inductive biases for our models. That being said, I find the conclusions of this paper quite narrow for the general audience of ICLR (as R2 and R3 also point), as authors look into an artificial task and show ClickNet performs well. But what have we learned beyond that? How do we use these results to improve either our models or our understanding of these models? I believe these are the type of questions that are missing from the current version of the paper and that if answered would greatly increase its impact and relevance to the ICLR community. At the moment though, I cannot recommend this paper for acceptance. """
484,"""Continuous Adaptation in Multi-agent Competitive Environments""","['multi-agent environment', 'continuous adaptation', 'Nash equilibrium', 'deep counterfactual regret minimization', 'reinforcement learning', 'stochastic game', 'baseball']","""In a multi-agent competitive environment, we would expect an agent who can quickly adapt to environmental changes may have a higher probability to survive and beat other agents. In this paper, to discuss whether the adaptation capability can help a learning agent to improve its competitiveness in a multi-agent environment, we construct a simplified baseball game scenario to develop and evaluate the adaptation capability of learning agents. Our baseball game scenario is modeled as a two-player zero-sum stochastic game with only the final reward. We purpose a modified Deep CFR algorithm to learn a strategy that approximates the Nash equilibrium strategy. We also form several teams, with different teams adopting different playing strategies, trying to analyze (1) whether an adaptation mechanism can help in increasing the winning percentage and (2) what kind of initial strategies can help a team to get a higher winning percentage. The experimental results show that the learned Nash-equilibrium strategy is very similar to real-life baseball game strategy. Besides, with the proposed strategy adaptation mechanism, the winning percentage can be increased for the team with a Nash-equilibrium initial strategy. Nevertheless, based on the same adaptation mechanism, those teams with deterministic initial strategies actually become less competitive.""","""This paper studies whether adopting strategy adaptation mechanisms helps players improve their performance in zero-sum stochastic games (in this case baseball). Moreover they study two questions in particular, a) whether adaptation techniques are helpful when faced with a small number of iterations and 2) whats the effect of different initial strategies when both teams adopt the same adaptation technique. Reviewers expressed concerns regarding the fact that the authors adaptation techniques improve upon initial strategies, which seems to indicate that their initial strategies were not Nash (despite the use of CFR). In the lack of theory of why this seems to happen at the current setup (and whether indeed the initial strategies are Nash and why do the improve), stronger empirical evidence from more rigorous experiments seem somewhat necessary for recommending acceptance of this paper."""
485,"""Effect of Activation Functions on the Training of Overparametrized Neural Nets""","['activation functions', 'deep learning theory', 'neural networks']","""It is well-known that overparametrized neural networks trained using gradient based methods quickly achieve small training error with appropriate hyperparameter settings. Recent papers have proved this statement theoretically for highly overparametrized networks under reasonable assumptions. These results either assume that the activation function is ReLU or they depend on the minimum eigenvalue of a certain Gram matrix. In the latter case, existing works only prove that this minimum eigenvalue is non-zero and do not provide quantitative bounds which require that this eigenvalue be large. Empirically, a number of alternative activation functions have been proposed which tend to perform better than ReLU at least in some settings but no clear understanding has emerged. This state of affairs underscores the importance of theoretically understanding the impact of activation functions on training. In the present paper, we provide theoretical results about the effect of activation function on the training of highly overparametrized 2-layer neural networks. A crucial property that governs the performance of an activation is whether or not it is smooth: For non-smooth activations such as ReLU, SELU, ELU, which are not smooth because there is a point where either the rst order or second order derivative is discontinuous, all eigenvalues of the associated Gram matrix are large under minimal assumptions on the data. For smooth activations such as tanh, swish, polynomial, which have derivatives of all orders at all points, the situation is more complex: if the subspace spanned by the data has small dimension then the minimum eigenvalue of the Gram matrix can be small leading to slow training. But if the dimension is large and the data satises another mild condition, then the eigenvalues are large. If we allow deep networks, then the small data dimension is not a limitation provided that the depth is sufcient. We discuss a number of extensions and applications of these results.""","""The article studies the role of the activation function in learning of 2 layer overparaemtrized networks, presenting results on the minimum eigenvalues of the Gram matrix that appears in this type of analysis and which controls the rate of convergence. The article makes numerous observations contributing to the development of principles for the design of activation functions and a better understanding of an active area of investigation as is convergence in overparametrized nets. The reviewers were generally positive about this article. """
486,"""Encoding word order in complex embeddings""","['word embedding', 'complex-valued neural network', 'position embedding']","""Sequential word order is important when processing text. Currently, neural networks (NNs) address this by modeling word position using position embeddings. The problem is that position embeddings capture the position of individual words, but not the ordered relationship (e.g., adjacency or precedence) between individual word positions. We present a novel and principled solution for modeling both the global absolute positions of words and their order relationships. Our solution generalizes word embeddings, previously defined as independent vectors, to continuous word functions over a variable (position). The benefit of continuous functions over variable positions is that word representations shift smoothly with increasing positions. Hence, word representations in different positions can correlate with each other in a continuous function. The general solution of these functions can be extended to complex-valued variants. We extend CNN, RNN and Transformer NNs to complex-valued versions to incorporate our complex embedding (we make all code available). Experiments on text classification, machine translation and language modeling show gains over both classical word embeddings and position-enriched word embeddings. To our knowledge, this is the first work in NLP to link imaginary numbers in complex-valued representations to concrete meanings (i.e., word order).""","""This paper describes a new language model that captures both the position of words, and their order relationships. This redefines word embeddings (previously thought of as fixed and independent vectors) to be functions of position. This idea is implemented in several models (CNN, RNN and Transformer NNs) to show improvements on multiple tasks and datasets. One reviewer asked for additional experiments, which the authors provided, and which still supported their methodology. In the end, the reviewers agreed this paper should be accepted."""
487,"""Deep Audio Prior""","['deep audio prior', 'blind sound separation', 'deep learning', 'audio representation']","""Deep convolutional neural networks are known to specialize in distilling compact and robust prior from a large amount of data. We are interested in applying deep networks in the absence of training dataset. In this paper, we introduce deep audio prior (DAP) which leverages the structure of a network and the temporal information in a single audio file. Specifically, we demonstrate that a randomly-initialized neural network can be used with carefully designed audio prior to tackle challenging audio problems such as universal blind source separation, interactive audio editing, audio texture synthesis, and audio co-separation. To understand the robustness of the deep audio prior, we construct a benchmark dataset Universal-150 for universal sound source separation with a diverse set of sources. We show superior audio results than previous work on both qualitatively and quantitative evaluations. We also perform thorough ablation study to validate our design choices.""","""This paper proposes to use CNN'S prior to deal with the tasks in audio processing. The motivation is weak and the presentation is not clear. The technical contribution is trivial."""
488,"""Frequency Analysis for Graph Convolution Network""","['graph signal processing', 'frequency analysis', 'graph convolution neural network', 'simplified convolution network', 'semi-supervised vertex classification']","""In this work, we develop quantitative results to the learnablity of a two-layers Graph Convolutional Network (GCN). Instead of analyzing GCN under some classes of functions, our approach provides a quantitative gap between a two-layers GCN and a two-layers MLP model. Our analysis is based on the graph signal processing (GSP) approach, which can provide much more useful insights than the message-passing computational model. Interestingly, based on our analysis, we have been able to empirically demonstrate a few case when GCN and other state-of-the-art models cannot learn even when true vertex features are extremely low-dimensional. To demonstrate our theoretical findings and propose a solution to the aforementioned adversarial cases, we build a proof of concept graph neural network model with stacked filters named Graph Filters Neural Network (gfNN). ""","""This paper studies two-layer graph convolutional networks and two-layer multi-layer perceptions and develops quantitative results of their effect in signal processing settings. The paper received 3 reviews by experts working in this area. R1 recommends Weak Accept, indicating that the paper provides some useful insight (e.g. into when graph neural networks are or are not appropriate for particular problems) and poses some specific technical questions. In follow up discussions after the author response, R1 and authors agree that there are some over claims in the paper but that these could be addressed with some toning down of claims and additional discussion. R2 recommends Weak Accept but raises several concerns about the technical contribution of the paper, indicating that some of the conclusions were already known or are unsurprising. R2 concludes ""I vote for weak accept, but I am fine if it is rejected."" R3 recommends Reject, also questioning the significance of the technical contribution and whether some of the conclusions are well-supported by experiments, as well as some minor concerns about clarity of writing. In their thoughtful responses, authors acknowledge these concerns. Given the split decision, the AC also read the paper. While it is clear it has significant merit, the concerns about significance of the contribution and support for conclusions (as acknowledged by authors) are important, and the AC feels a revision of the paper and another round of peer review is really needed to flesh these issues out. """
489,"""The Implicit Bias of Depth: How Incremental Learning Drives Generalization""","['gradient flow', 'gradient descent', 'implicit regularization', 'implicit bias', 'generalization', 'optimization', 'quadratic network', 'matrix sensing']","""A leading hypothesis for the surprising generalization of neural networks is that the dynamics of gradient descent bias the model towards simple solutions, by searching through the solution space in an incremental order of complexity. We formally define the notion of incremental learning dynamics and derive the conditions on depth and initialization for which this phenomenon arises in deep linear models. Our main theoretical contribution is a dynamical depth separation result, proving that while shallow models can exhibit incremental learning dynamics, they require the initialization to be exponentially small for these dynamics to present themselves. However, once the model becomes deeper, the dependence becomes polynomial and incremental learning can arise in more natural settings. We complement our theoretical findings by experimenting with deep matrix sensing, quadratic neural networks and with binary classification using diagonal and convolutional linear networks, showing all of these models exhibit incremental learning.""","""The paper studies the role of depth on incremental learning, defined as a favorable learning regime in which one searches through the hypothesis space in increasing order of complexity. Specifically, it establishes a dynamical depth separation result, whereby shallow models require exponetially smaller initializations than deep ones in order to operate in the incremental learning regime. Despite some concerns shared amongst reviewers about the significance of these results to explain realistic deep models (that exhibit nonlinear behavior as well as interactions between neurons) and some remarks about the precision of some claims, the overall consensus -- also shared by the AC -- is that this paper puts forward an interesting phenomenon that will likely spark future research in this important direction. The AC thus recommends acceptance. """
490,"""Dropout: Explicit Forms and Capacity Control""",[],"""We investigate the capacity control provided by dropout in various machine learning problems. First, we study dropout for matrix sensing, where it induces a data-dependent regularizer that, in expectation, equals the weighted trace-norm of the product of the factors. In deep learning, we show that the data-dependent regularizer due to dropout directly controls the Rademacher complexity of the underlying class of deep neural networks. These developments enable us to give concrete generalization error bounds for the dropout algorithm in both matrix completion as well as training deep neural networks. We evaluate our theoretical findings on real-world datasets, including MovieLens, Fashion MNIST, and CIFAR-10.""","""The authors study dropout for matrix sensing and deep learning, and show that dropout induces a data-dependent regularizer in both cases. In both cases, dropout controls quantities that yield generalization bounds. Reviewers raised several concerns, and several of these were vehemently rebutted. The rhetoric of the back and forth slid into unfortunate territory, in my opinion, and I'd prefer not to see this sort of thing happen. On the one hand, I can sympathize with the reviewers trying to argue that (un)related work is not related work. On the other hand, it's best to be generous, or you run into this sort of mess. In the end, even the expert reviewers were unswayed. I suspect the next version of this paper may land more smoothly. While many of the technical issues are rebutted, one that caught my attention pertained to the empirical work. Reviewer #4 noticed that the empirical evaluations do not meet the sample complexity requirements for the bounds to be valid (nevermind loose). The response suggests this is simply a fact of making the bounds looser, but I suspect it may also change their form in this regime, potentially erasing the empirical findings. I suggest the authors carefully consider whether all assumptions are met, and relay this more carefully to readers."""
491,"""Efficient Saliency Maps for Explainable AI""","['Saliency', 'XAI', 'Efficent', 'Information']","""We describe an explainable AI saliency map method for use with deep convolutional neural networks (CNN) that is much more efficient than popular gradient methods. It is also quantitatively similar or better in accuracy. Our technique works by measuring information at the end of each network scale. This is then combined into a single saliency map. We describe how saliency measures can be made more efficient by exploiting Saliency Map Order Equivalence. Finally, we visualize individual scale/layer contributions by using a Layer Ordered Visualization of Information. This provides an interesting comparison of scale information contributions within the network not provided by other saliency map methods. Our method is generally straight forward and should be applicable to the most commonly used CNNs. (Full source code is available at pseudo-url).""","""The paper presents an efficient approach to computer saliency measures by exploiting saliency map order equivalence (SMOE), and visualization of individual layer contribution by a layer ordered visualization of information. The authors did a good job at addressing most issues raised in the reviews. In the end, two major concerns remained not fully addressed: one is the motivation of efficiency, and the other is how much better SMOE is compared with existing statistics. I think these two issue also determines how significance the work is. After discussion, we agree that while the revised draft pans out to be a much more improved one, the work itself is nothing groundbreaking. Given many other excellent papers on related topics, the paper cannot make the cut for ICLR. """
492,"""MODiR: Multi-Objective Dimensionality Reduction for Joint Data Visualisation""","['dimensionality reduction', 'visualisation', 'text visualisation', 'network drawing']","""Many large text collections exhibit graph structures, either inherent to the content itself or encoded in the metadata of the individual documents. Example graphs extracted from document collections are co-author networks, citation networks, or named-entity-cooccurrence networks. Furthermore, social networks can be extracted from email corpora, tweets, or social media. When it comes to visualising these large corpora, either the textual content or the network graph are used. In this paper, we propose to incorporate both, text and graph, to not only visualise the semantic information encoded in the documents' content but also the relationships expressed by the inherent network structure. To this end, we introduce a novel algorithm based on multi-objective optimisation to jointly position embedded documents and graph nodes in a two-dimensional landscape. We illustrate the effectiveness of our approach with real-world datasets and show that we can capture the semantics of large document collections better than other visualisations based on either the content or the network information.""","""There is a consensus among reviewers that the paper should not be accepted. No rebuttal was provided, so the paper is rejected. """
493,"""On the Decision Boundaries of Deep Neural Networks: A Tropical Geometry Perspective""","['Decision boundaries', 'Neural Network', 'Tropical Geometry', 'Network Pruning', 'Adversarial Attacks', 'Lottery Ticket Hypothesis']","""This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piece-wise linear non-linearity activations. We use tropical geometry, a new development in the area of algebraic geometry, to provide a characterization of the decision boundaries of a simple neural network of the form (Affine, ReLU, Affine). Specifically, we show that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes. The generators of the zonotopes are precise functions of the neural network parameters. We utilize this geometric characterization to shed light and new perspective on three tasks. In doing so, we propose a new tropical perspective for the lottery ticket hypothesis, where we see the effect of different initializations on the tropical geometric representation of the decision boundaries. Also, we leverage this characterization as a new set of tropical regularizers, which deal directly with the decision boundaries of a network. We investigate the use of these regularizers in neural network pruning (removing network parameters that do not contribute to the tropical geometric representation of the decision boundaries) and in generating adversarial input attacks (with input perturbations explicitly perturbing the decision boundaries geometry to change the network prediction of the input). ""","""This paper studies the decision boundaries of a certain class of neural networks (piecewise linear, non-linear activation functions) using tropical geometry, a subfield of algebraic geometry that leverages piece-wise linear structures. Building on earlier work, such piecewise linear networks are shown to be represented as a tropical rational function. This characterisation is used to explain different phenomena of neural network training, such as the 'lottery ticket hypothesis', network pruning, and adversarial attacks. This paper received mixed reviews, owing to its very specialized area. Whereas R1 championed the submission for its technical novelty, the other reviewers felt the current exposition is too inaccessible and some application areas are not properly addressed. The AC shares these concerns, recommends rejection and strongly encourages the authors to address the reviewers concerns in the next iteration. """
494,"""Function Feature Learning of Neural Networks""",[],"""We present a Function Feature Learning (FFL) method that can measure the similarity of non-convex neural networks. The function feature representation provides crucial insights into the understanding of the relations between different local solutions of identical neural networks. Unlike existing methods that use neuron activation vectors over a given dataset as neural network representation, FFL aligns weights of neural networks and projects them into a common function feature space by introducing a chain alignment rule. We investigate the function feature representation on Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN), finding that identical neural networks trained with different random initializations on different learning tasks by the Stochastic Gradient Descent (SGD) algorithm can be projected into different fixed points. This finding demonstrates the strong connection between different local solutions of identical neural networks and the equivalence of projected local solutions. With FFL, we also find that the semantics are often presented in a bottom-up way. Besides, FFL provides more insights into the structure of local solutions. Experiments on CIFAR-100, NameData, and tiny ImageNet datasets validate the effectiveness of the proposed method.""","""This paper tackles an important problem: understanding if different NN solutions are similar or different. In the current form, however, the main motivation for the approach, and what the empirical results tell us, remains unclear. I read the paper after the updates and after reading reviews and author responses, and still had difficulty understanding the goals and outcomes of the experiments (such as what exactly is being reported as test accuracy and what is meant by: ""High test accuracy means that assumptions are reasonable.""). We highly recommend that the authors revisit the description of the motivation and approach based on comments from reviewers; further explain what is reported as test accuracy in the experiments; and more clearly highlight the insights obtain from the experiments. """
495,"""Multi-source Multi-view Transfer Learning in Neural Topic Modeling with Pretrained Topic and Word Embeddings""","['Neural Topic Modeling', 'Transfer Learning', 'Unsupervised learning', 'Natural Language Processing']","""Though word embeddings and topics are complementary representations, several past works have only used pretrained word embeddings in (neural) topic modeling to address data sparsity problem in short text or small collection of documents. However, no prior work has employed (pretrained latent) topics in transfer learning paradigm. In this paper, we propose a framework to perform transfer learning in neural topic modeling using (1) pretrained (latent) topics obtained from a large source corpus, and (2) pretrained word and topic embeddings jointly (i.e., multiview) in order to improve topic quality, better deal with polysemy and data sparsity issues in a target corpus. In doing so, we first accumulate topics and word representations from one or many source corpora to build respective pools of pretrained topic (i.e., TopicPool) and word embeddings (i.e., WordPool). Then, we identify one or multiple relevant source domain(s) and take advantage of corresponding topics and word features via the respective pools to guide meaningful learning in the sparse target domain. We quantify the quality of topic and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from news and medical domains. We have demonstrated the state-ofthe- art results on topic modeling with the proposed transfer learning approaches.""","""This paper presents a transfer learning framework in neural topic modeling. Authors claim and reviewers agree that this view of transfer learning in the realm of topic modeling is novel. However, after much deliberation and discussion among the reviewers, we conclude that this paper does not contribute sufficient novelty in terms of the method. Also, reviewers find the experiments and results not sufficiently convincing. I sincerely thank the authors for submitting to ICLR and hope to see a revised paper in a future venue."""
496,"""Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution""","['Deep Reinforcement Learning', 'Saliency maps', 'Chess', 'Go', 'Atari', 'Interpretable AI', 'Explainable AI']","""As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action. Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our proposed approach, SARFA (Specific and Relevant Feature Attribution), generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweighs irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare SARFA with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that SARFA generates saliency maps that are more interpretable for humans than existing approaches. For the code release and demo videos, see: pseudo-url.""","""A new method of calculating saliency maps for deep networks trained through RL (for example to play games) is presented. The method is aimed at explaining why moves were taken by showing which salient features influenced the move, and seems to work well based on experiments with Chess, Go, and several Atari games. Reviewer 2 had a number of questions related to the performance of the method under various conditions, and these were answered satisfactorily by the reviewers. This is a solid paper with good reasoning and results, though perhaps not super novel, as the basic idea of explaining policies with saliency is not new. It should be accepted for poster presentation. """
497,"""Physics-Aware Flow Data Completion Using Neural Inpainting""","['neural inpainting', 'fluid dynamics', 'flow data completion', 'physics-aware network']","""In this paper we propose a physics-aware neural network for inpainting fluid flow data. We consider that flow field data inherently follows the solution of the Navier-Stokes equations and hence our network is designed to capture physical laws. We use a DenseBlock U-Net architecture combined with a stream function formulation to inpaint missing velocity data. Our loss functions represent the relevant physical quantities velocity, velocity Jacobian, vorticity and divergence. Obstacles are treated as known priors, and each layer of the network receives the relevant information through concatenation with the previous layer's output. Our results demonstrate the network's capability for physics-aware completion tasks, and the presented ablation studies show the effectiveness of each proposed component.""","""The authors present a physics-aware models for inpainting fluid data. In particular, the authors extend the vanilla U-net architecture and add losses that explicitly bias the network towards physically meaningful solutions. While the reviewers found the work to be interesting, they raised a few questions/objections which are summarised below: 1) Novelty: The reviewers largely found the idea to be novel. I agree that this is indeed novel and a step in the right direction. 2) Experiments: The main objection was to the experimental methodology. In particular, since most of the experiments were on simulated data the reviewers expected simulations where the test conditions were a bit more different than the training conditions. It is not very clear whether the training and test conditions were different and it would have been useful if the authors had clarified this in the rebuttal. The reviewers have also suggested a more thorough ablation study. 3) Organisation: The authors could have used the space more effectively by providing additional details and ablation studies. Unfortunately, the authors did not engage with the reviewers and respond to their queries. I understand that this could have been because of the poor ratings which would have made the authors believe that a discussion wouldn't help. The reviewers have asked very relevant Qs and made some interesting suggestions about the experimental setup. I strongly recommend the authors to consider these during subsequent submissions. Based on the reviewer comments and lack of response from the authors, I recommend that the paper cannot be accepted. """
498,"""Unifying Graph Convolutional Neural Networks and Label Propagation""","['graph convolutional neural networks', 'label propagation', 'node classification']","""Label Propagation (LPA) and Graph Convolutional Neural Networks (GCN) are both message passing algorithms on graphs. Both solve the task of node classification but LPA propagates node label information across the edges of the graph, while GCN propagates and transforms node feature information. However, while conceptually similar, theoretical relation between LPA and GCN has not yet been investigated. Here we study the relationship between LPA and GCN in terms of two aspects: (1) feature/label smoothing where we analyze how the feature/label of one node are spread over its neighbors; And, (2) feature/label influence of how much the initial feature/label of one node influences the final feature/label of another node. Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification. In our unified model, edge weights are learnable, and the LPA serves as regularization to assist the GCN in learning proper edge weights that lead to improved classification performance. Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models. In a number of experiments on real-world graphs, our model shows superiority over state-of-the-art GCN-based methods in terms of node classification accuracy. ""","""The authors attempt to unify graph convolutional networks and label propagation and propose a model that unifies them. The reviewers liked the idea but felt that more extensive experiments are needed. The impact of labels needs to be specially studied more in-depth."""
499,"""Multi-Agent Hierarchical Reinforcement Learning for Humanoid Navigation""","['Multi-Agent Reinforcement Learning', 'Reinforcement Learning', 'Hierarchical Reinforcement Learning']","""Multi-agent reinforcement learning is a particularly challenging problem. Current methods have made progress on cooperative and competitive environments with particle-based agents. Little progress has been made on solutions that could op- erate in the real world with interaction, dynamics, and humanoid robots. In this work, we make a significant step in multi-agent models on simulated humanoid robot navigation by combining Multi-Agent Reinforcement Learning (MARL) with Hierarchical Reinforcement Learning (HRL). We build on top of founda- tional prior work in learning low-level physical controllers for locomotion and add a layer to learn decentralized policies for multi-agent goal-directed collision avoidance systems. A video of our results on a multi-agent pursuit environment can be seen here ""","""This paper presents an approach combining multi-agent with hierarchical RL in a custom-made simulated humanoid robotics setting. Although it is an interesting premise and has a compelling motivation (multi-agent, real-world interaction, humanoid robotics), the reviewers had some trouble pinpointing what the significant contributions are. Partly this is due to lack of clarity in the presentation, such as with overlong sections (eg 5.2), unclear descriptions, mistakes in the text, etc. Reviewers also remarked that this paper might be trying to do too much, without performing the necessary experiments/comparisons and analyses needed to interpret the contributions of each component. This work is definitely promising and has the potential to make a nice contribution, given some additional care (experiments, analyses) and rewriting/polishing. As it is, its probably a bit premature for publication at ICLR. """
500,"""Variational Hyper RNN for Sequence Modeling""","['variational autoencoder', 'hypernetwork', 'recurrent neural network', 'time series']","""In this work, we propose a novel probabilistic sequence model that excels at capturing high variability in time series data, both across sequences and within an individual sequence. Our method uses temporal latent variables to capture information about the underlying data pattern and dynamically decodes the latent information into modifications of weights of the base decoder and recurrent model. The efficacy of the proposed method is demonstrated on a range of synthetic and real-world sequential data that exhibit large scale variations, regime shifts, and complex dynamics.""","""The paper proposes a neural network architecture that uses a hypernetwork (RNN or feedforward) to generate weights for a network (variational RNN), that models sequential data. An empirical comparison of a large number of configurations on synthetic and real world data show the promise of this method. The authors have been very responsive during the discussion period, and generated many new results to address some reviewer concerns. Apart from one reviewer, the others did not engage in further discussion in response to the authors updating their paper. The paper provides a tweak to the hypernetwork idea for modeling sequential data. There are many strong submissions at ICLR this year on RNNs, and the submission in its current state unfortunately does not pass the threshold."""
501,"""QGAN: Quantize Generative Adversarial Networks to Extreme low-bits""","['generative adversarial networks', 'quantization', 'extreme low bits']","""The intensive computation and memory requirements of generative adversarial neural networks (GANs) hinder its real-world deployment on edge devices such as smartphones. Despite the success in model reduction of convolutional neural networks (CNNs), neural network quantization methods have not yet been studied on GANs, which are mainly faced with the issues of both the effectiveness of quantization algorithms and the instability of training GAN models. In this paper, we start with an extensive study on applying existing successful CNN quantization methods to quantize GAN models to extreme low bits. Our observation reveals that none of them generates samples with reasonable quality because of the underrepresentation of quantized weights in models, and the generator and discriminator networks show different sensitivities upon the quantization precision. Motivated by these observations, we develop a novel quantization method for GANs based on EM algorithms, named as QGAN. We also propose a multi-precision algorithm to help find an appropriate quantization precision of GANs given image qualities requirements. Experiments on CIFAR-10 and CelebA show that QGAN can quantize weights in GANs to even 1-bit or 2-bit representations with results of quality comparable to original models.""","""main summary: method for quantizing GAN discussion: reviewer 1: well-written paper, but reviewer questions novelty reviewer 2: well-written, but some details are missing in the paper as well as comparisons to related work reviewer 3: well-written and interesting topic, related work section and clarity of results could be improved recommendation: all reviewers agree paper could be improved by better comparison to related work and better clarity of presentation. Marking paper as reject."""
502,"""A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms""","['meta-learning', 'transfer learning', 'structure learning', 'modularity', 'causality']","""We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge. In particular, we focus on how to factor a joint distribution into appropriate conditionals, consistent with the causal directions. We explain when this can work, using the assumption that the changes in distributions are localized (e.g. to one of the marginals, for example due to an intervention on one of the variables). We prove that under this assumption of localized changes in causal mechanisms, the correct causal graph will tend to have only a few of its parameters with non-zero gradient, i.e. that need to be adapted (those of the modified variables). We argue and observe experimentally that this leads to faster adaptation, and use this property to define a meta-learning surrogate score which, in addition to a continuous parametrization of graphs, would favour correct causal graphs. Finally, motivated by the AI agent point of view (e.g. of a robot discovering its environment autonomously), we consider how the same objective can discover the causal variables themselves, as a transformation of observed low-level variables with no causal meaning. Experiments in the two-variable case validate the proposed ideas and theoretical results.""","""This paper proposes to discover causal mechanisms through meta-learning, and suggests an approach for doing so. The reviewers raised concerns about the key hypothesis (that the right causal model implies higher expected online likelihood) not being sufficiently backed up through theory or through experiments on real data. The authors pointed to a recent paper that builds upon this work and tests on a more realistic problem setting. However, the newer paper measures not the online likelihood of adaptation, but just the training error during adaptation, suggesting that the approach in this paper may be worse. Despite the concerns, the reviewers generally agreed that the paper included novel and interesting ideas, and addressed a number of the reviewers' other concerns about the clarity, references, and experiments. Hence, it makes a worthwhile contribution to ICLR."""
503,"""SGD Learns One-Layer Networks in WGANs""","['Wasserstein GAN', 'global min-max', 'one-layer network']","""Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax problem to global optimality, but in practice, are successfully trained with stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution in polynomial time and sample complexity.""","""This article studies convergence of WGAN training using SGD and generators of the form pseudo-formula , with results on convergence with polynomial time and sample complexity under the assumption that the target distribution can be expressed by this type of generator. This expands previous work that considered linear generators. An important point of discussion was the choice of the discriminator as a linear or quadratic function. The authors' responses clarified some of the initial criticism, and the scores improved slightly. Following the discussion, the reviewers agreed that the problem being studied is a difficult one and that the paper makes some important contributions. However, they still found that the considered settings are very restrictive, maintaining that quadratic discriminators would work only for the very simple type of generators and targets under consideration. Although the article makes important advances towards understanding convergence of WGAN training with nonlinear models, the relevance of the contribution could be greatly enhanced by addressing / discussing the plausibility or implications of the analysis in a practical setting, in the best case scenario addressing a more practical type of neural networks. """
504,"""Stein Bridging: Enabling Mutual Reinforcement between Explicit and Implicit Generative Models""","['generative models', 'generative adversarial networks', 'energy models']","""Deep generative models are generally categorized into explicit models and implicit models. The former assumes an explicit density form whose normalizing constant is often unknown; while the latter, including generative adversarial networks (GANs), generates samples using a push-forward mapping. In spite of substantial recent advances demonstrating the power of the two classes of generative models in many applications, both of them, when used alone, suffer from respective limitations and drawbacks. To mitigate these issues, we propose Stein Bridging, a novel joint training framework that connects an explicit density estimator and an implicit sample generator with Stein discrepancy. We show that the Stein Bridge induces new regularization schemes for both explicit and implicit models. Convergence analysis and extensive experiments demonstrate that the Stein Bridging i) improves the stability and sample quality of the GAN training, and ii) facilitates the density estimator to seek more modes in data and alleviate the mode-collapse issue. Additionally, we discuss several applications of Stein Bridging and useful tricks in practical implementation used in our experiments.""","""The paper proposes a generative model that jointly trains an implicit generative model and an explicit energy based model using Stein's method. There are concerns about technical correctness of the proofs and the authors are advised to look carefully into the points raised by the reviewers. """
505,"""Removing input features via a generative model to explain their attributions to classifier's decisions""","['attribution maps', 'generative models', 'inpainting', 'counterfactual', 'explanations', 'interpretability', 'explainability']","""Interpretability methods often measure the contribution of an input feature to an image classifier's decisions by heuristically removing it via e.g. blurring, adding noise, or graying out, which often produce unrealistic, out-of-samples. Instead, we propose to integrate a generative inpainter into three representative attribution map methods as a mechanism for removing input features. Compared to the original counterparts, our methods (1) generate more plausible counterfactual samples under the true data generating process; (2) are more robust to hyperparameter settings; and (3) localize objects more accurately. Our findings were consistent across both ImageNet and Places365 datasets and two different pairs of classifiers and inpainters.""","""Perturbation-based methods often produce artefacts that make the perturbed samples less realistic. This paper proposes to corrects this through use of an inpainter. Authors claim that this results in more plausible perturbed samples and produces methods more robust to hyperparameter settings. Reviewers found the work intuitive and well-motivated, well-written, and the experiments comprehensive. However they also had concerns about minimal novelty and unfair experimental comparisons, as well as inconclusive results. Authors response have not sufficiently addressed these concerns. Therefore, we recommend rejection."""
506,"""Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?""","['rl', 'hierarchy', 'reinforcement learning']","""Hierarchical reinforcement learning has demonstrated significant success at solving difficult reinforcement learning (RL) tasks. Previous works have motivated the use of hierarchy by appealing to a number of intuitive benefits, including learning over temporally extended transitions, exploring over temporally extended periods, and training and exploring in a more semantically meaningful action space, among others. However, in fully observed, Markovian settings, it is not immediately clear why hierarchical RL should provide benefits over standard ""shallow"" RL architectures. In this work, we isolate and evaluate the claimed benefits of hierarchical RL on a suite of tasks encompassing locomotion, navigation, and manipulation. Surprisingly, we find that most of the observed benefits of hierarchy can be attributed to improved exploration, as opposed to easier policy learning or imposed hierarchical structures. Given this insight, we present exploration techniques inspired by hierarchy that achieve performance competitive with hierarchical RL while at the same time being much simpler to use and implement. ""","""This paper seeks to analyse the important question around why hierarchical reinforcement learning can be beneficial. The findings show that improved exploration is at the core of this improved performance. Based on these findings, the paper also proposes some simple exploration techniques which are shown to be competitive with hierarchical RL approaches. This is a really interesting paper that could serve to address an oft speculated about result of the relation between HRL and exploration. While the findings of the paper are intuitive, it was agreed by all reviewers that the claims are too general for the evidence presented. The paper should be extended with a wider range of experiments covering more domains and algorithms, and would also benefit from some theoretical results. As it stands this paper should not be accepted."""
507,"""A Data-Efficient Mutual Information Neural Estimator for Statistical Dependency Testing""","['mutual information', 'fMRI', 'inter-subject correation', 'mutual information neural estimation', 'meta-learning', 'statistical test of dependency']","""Measuring Mutual Information (MI) between high-dimensional, continuous, random variables from observed samples has wide theoretical and practical applications. Recent works have developed accurate MI estimators through provably low-bias approximations and tight variational lower bounds assuming abundant supply of samples, but require an unrealistic number of samples to guarantee statistical significance of the estimation. In this work, we focus on improving data efficiency and propose a Data-Efficient MINE Estimator (DEMINE) that can provide a tight lower confident interval of MI under limited data, through adding cross-validation to the MINE lower bound (Belghazi et al., 2018). Hyperparameter search is employed and a novel meta-learning approach with task augmentation is developed to increase robustness to hyperparamters, reduce overfitting and improve accuracy. With improved data-efficiency, our DEMINE estimator enables statistical testing of dependency at practical dataset sizes. We demonstrate the effectiveness of DEMINE on synthetic benchmarks and a real world fMRI dataset, with application of inter-subject correlation analysis.""","""The paper deal with a mutual information based dependency test. The reviewers have provided extensive and constructive feedback on the paper. The authors have in turn given detailed response withsome new experiments and plans for improvement. Overall the reviewers are not convinced the paper is ready for publication. """
508,"""On Mutual Information Maximization for Representation Learning""","['mutual information', 'representation learning', 'unsupervised learning', 'self-supervised learning']","""Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data. This comes with several immediate problems: For example, MI is notoriously hard to estimate, and using it as an objective for representation learning may lead to highly entangled representations due to its invariance under arbitrary invertible transformations. Nevertheless, these methods have been repeatedly shown to excel in practice. In this paper we argue, and provide empirical evidence, that the success of these methods cannot be attributed to the properties of MI alone, and that they strongly depend on the inductive bias in both the choice of feature extractor architectures and the parametrization of the employed MI estimators. Finally, we establish a connection to deep metric learning and argue that this interpretation may be a plausible explanation for the success of the recently introduced methods.""","""This paper exams the role of mutual information (MI) estimation in representation learning. Through experiments, they show that the large MI is not predictive of downstream performance, and the empirical success of methods like InfoMax may be more attributed to the inductive bias in the choice of architectures of discriminators, rather than accurate MI estimation. The work is well appreciated by the reviewers. It forms a strong contribution and may motivate subsequent works in the field. """
509,"""Reformer: The Efficient Transformer""","['attention', 'locality sensitive hashing', 'reversible layers']","""Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O( pseudo-formula ) to O( \log L where pseudo-formula is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.""","""Transformer models have proven to be quite successful when applied to a variety of ML tasks such as NLP. However, the computational and memory requirements can at times be prohibitive, such as when dealing with long sequences. This paper proposes locality-sensitive hashing to reduce the sequence-length complexity, as well as reversible residual layers to reduce storage requirements. Experimental results confirm that the performance of Transformer models can be preserved even with these new efficiencies in place, and hence, this paper will likely have significant impact within the community. Some relatively minor points notwithstanding, all reviewers voted for acceptance which is my recommendation as well. Note that this paper was also vetted by several detailed external commenters. In all cases the authors provided reasonable feedback, and the final revision of the work will surely be even stronger."""
510,"""The Effect of Residual Architecture on the Per-Layer Gradient of Deep Networks""",[],"""A critical part of the training process of neural networks takes place in the very first gradient steps post initialization. In this work, we study the connection between the network's architecture and initialization parameters, to the statistical properties of the gradient in random fully connected ReLU networks, through the study of the the Jacobian. We compare three types of architectures: vanilla networks, ResNets and DenseNets. The later two, as we show, preserve the variance of the gradient norm through arbitrary depths when initialized properly, which prevents exploding or decaying gradients at deeper layers. In addition, we show that the statistics of the per layer gradient norm is a function of the architecture and the layer's size, but surprisingly not the layer's depth. This depth invariant result is surprising in light of the literature results that state that the norm of the layer's activations grows exponentially with the specific layer's depth. Experimental support is given in order to validate our theoretical results and to reintroduce concatenated ReLU blocks, which, as we show, present better initialization properties than ReLU blocks in the case of fully connected networks.""","""This paper studies the statistics of activation norms and Jacobian norms for randomly-initialized ReLU networks in the presence (and absence) of various types of residual connections. Whereas the variance of the gradient norm grows with depth for vanilla networks, it can be depth-independent for residual networks when using the proper initialization. Reviewers were positive about the setup, but also pointed out important shortcomings on the current manuscript, especially related to the lack of significance of the measured gradient norm statistics with regards to generalisation, and with some techinical aspects of the derivations. For these reasons, the AC believes this paper will strongly benefit from an extra iteration. """
511,"""NAS evaluation is frustratingly hard""","['neural architecture search', 'nas', 'benchmark', 'reproducibility', 'harking']","""Neural Architecture Search (NAS) is an exciting new field which promises to be as much as a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a methods relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macrostructure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures. To conclude, we suggest best practices, that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods. The code used is available at pseudo-url.""","""Summary: This paper provides comprehensive empirical evidence for some of the systemic issues in the NAS community, for example showing that several published NAS algorithms do not outperform random sampling on previously unseen data and that the training pipeline is more important in the DARTS space than the exact choice of neural architecture. I very much appreciate that code is available for reproducibility. Reviewer scores and discussion: The reviewers' scores have very high variance: 2/3 reviewers gave clear acceptance scores (8,8), very much liking the paper, whereas one reviewer gave a clear rejection score (1). In the discussion between the reviewers and the AC, despite the positive comments of the other reviewers, AnonReviewer 2 defended his/her position, arguing that the novelty is too low given previous works. The other reviewers argued against this, emphasizing that it is an important contribution to show empirical evidence for the importance of the training protocol (note that the intended contribution is *not* to introduce these training protocols; they are taken from previous work). Due to the high variance, I read the paper myself in detail. Here are my own two cents: - It is not new to compare to a single random sample. 
Sciuto et al. clearly proposed this first; see Figure 1(c) in pseudo-url. - The systematic experiments showing the importance of the training pipeline are very useful, providing proper and much needed empirical evidence for the many existing suggestions that this might be the case. Figure 3 is utterly convincing. - Throughout, it would be good to put the work into perspective a bit more. E.g., correlations have been studied by many authors before. Also, the paper cites the best practice checklist in the beginning, but does not mention it in the section on best practices (my view is that this paper is in line with that checklist and provides important evidence for several points in it; the checklist also contains other points not being discussed in this paper; it would be good to know whether this paper suggests any new points for the checklist). Recommendation: Overall, I firmly believe that this paper is an important contribution to the NAS community. It may be viewed by some as ""just"" running some experiments, but the experiments it shows are very informative and will impact the community and help guide it in the right direction. I therefore recommend acceptance (as a poster)."""
512,"""On Weight-Sharing and Bilevel Optimization in Architecture Search""","['neural architecture search', 'weight-sharing', 'bilevel optimization', 'non-convex optimization', 'hyperparameter optimization', 'model selection']","""Weight-sharingthe simultaneous optimization of multiple neural networks using the same parametershas emerged as a key component of state-of-the-art neural architecture search. However, its success is poorly understood and often found to be surprising. We argue that, rather than just being an optimization trick, the weight-sharing approach is induced by the relaxation of a structured hypothesis space, and introduces new algorithmic and theoretical challenges as well as applications beyond neural architecture search. Algorithmically, we show how the geometry of ERM for weight-sharing requires greater care when designing gradient- based minimization methods and apply tools from non-convex non-Euclidean optimization to give general-purpose algorithms that adapt to the underlying structure. We further analyze the learning-theoretic behavior of the bilevel optimization solved by practical weight-sharing methods. Next, using kernel configuration and NLP feature selection as case studies, we demonstrate how weight-sharing applies to the architecture search generalization of NAS and effectively optimizes the resulting bilevel objective. Finally, we use our optimization analysis to develop a simple exponentiated gradient method for NAS that aligns with the underlying optimization geometry and matches state-of-the-art approaches on CIFAR-10.""","""Since there were only two official reviews submitted, I reviewed the paper to form a third viewpoint. I agree with reviewer 2 on the following points, which support rejection of the paper: 1) Only CIFAR is evaluated without Penn Treebank; 2) The ""faster convergence"" is not empirically justified by better final accuracy with same amount of search cost; and 3) The advantage of the proposed ACSA over SBMD is not clearly demonstrated in the paper. The scores of the two official reviews are insufficient for acceptance, and an additional review did not overturn this view."""
513,"""U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation""","['Image-to-Image Translation', 'Generative Attentional Networks', 'Adaptive Layer-Instance Normalization']","""We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based method which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters. ""","""The paper proposes a new architecture for unsupervised image2image translation. Following the revision/discussion, all reviewers agree that the proposed ideas are reasonable, well described, convincingly validated, and of clear though limited novelty. Accept."""
514,"""Ladder Polynomial Neural Networks""",['polynomial neural networks'],"""The underlying functions of polynomial neural networks are polynomial functions. These networks are shown to have nice theoretical properties by previous analysis, but they are actually hard to train when their polynomial orders are high. In this work, we devise a new type of activations and then create the Ladder Polynomial Neural Network (LPNN). This new network can be trained with generic optimization algorithms. With a feedforward structure, it can also be combined with deep learning techniques such as batch normalization and dropout. Furthermore, an LPNN provides good control of its polynomial order because its polynomial order increases by 1 with each of its hidden layers. In our empirical study, deep LPNN models achieve good performances in a series of regression and classification tasks.""","""This paper proposes a new type of Polynomial NN called Ladder Polynomial NN (LPNN) which is easy to train with general optimization algorithms and can be combined with techniques like batch normalization and dropout. Experiments show it works better than FMs with simple classification and regression tasks, but no experiments are done in more complex tasks. All reviewers agree the paper addresses an interesting question and makes some progress but the contribution is limited and there are still many ways to improve."""
515,"""Generating Dialogue Responses From A Semantic Latent Space""","['dialog', 'chatbot', 'open domain conversation', 'CCA']","""Generic responses are a known issue for open-domain dialog generation. Most current approaches model this one-to-many task as a one-to-one task, hence being unable to integrate information from multiple semantically similar valid responses of a prompt. We propose a novel dialog generation model that learns a semantic latent space, on which representations of semantically related sentences are close to each other. This latent space is learned by maximizing correlation between the features extracted from prompt and responses. Learning the pair relationship between the prompts and responses as a regression task on the latent space, instead of classification on the vocabulary using MLE loss, enables our model to view semantically related responses collectively. An additional autoencoder is trained, for recovering the full sentence from the latent space. Experimental results show that our proposed model eliminates the generic response problem, while achieving comparable or better coherence compared to baselines.""","""This paper proposes a response generation approach that aims to tackle the generic response problem. The approach is learning a latent semantic space by maximizing the correlation between features extracted from prompts and responses. The reviewers were concerned about the lack of comparison with previous papers tackling the same problem, and did not change their decision (i.e., were not convinced) even after the rebuttal. Hence, I suggest a reject for this paper."""
516,"""Truth or backpropaganda? An empirical investigation of deep learning theory""","['Deep learning', 'generalization', 'loss landscape', 'robustness']","""We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike. In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization plays a role; (4) find that rank does not correlate with generalization or robustness in a practical setting.""","""The authors take a closer look at widely held beliefs about neural networks. Using a mix of analysis and experiment, they shed some light on the ways these assumptions break down. The paper contributes to our understanding of various phenomena and their connection to generalization, and should be a useful paper for theoreticians searching for predictive theories."""
517,"""Generative Adversarial Networks For Data Scarcity Industrial Positron Images With Attention""",[],"""In the industrial field, the positron annihilation is not affected by complex environment, and the gamma-ray photon penetration is strong, so the nondestructive detection of industrial parts can be realized. Due to the poor image quality caused by gamma-ray photon scattering, attenuation and short sampling time in positron process, we propose the idea of combining deep learning to generate positron images with good quality and clear details by adversarial nets. The structure of the paper is as follows: firstly, we encode to get the hidden vectors of medical CT images based on transfer Learning, and use PCA to extract positron image features. Secondly, we construct a positron image memory based on attention mechanism as a whole input to the adversarial nets which uses medical hidden variables as a query. Finally, we train the whole model jointly and update the input parameters until convergence. Experiments have proved the possibility of generating rare positron images for industrial non-destructive testing using countermeasure networks, and good imaging results have been achieved.""","""The paper studies Positron Emission Tomography (PET) in medical imaging. The paper focuses on the challenges created by gamma-ray photon scattering, that results in poor image quality. To tackle this problem and enhance the image quality, the paper suggests using generative adversarial networks. Unfortunately due to poor writing and severe language issues, none of the three reviewers were able to properly assess the paper [see the reviews for multiple examples of this]. In addition, in places, some important implementation details were missing. The authors chose not to response to reviewers' concerns. In its current form, the submission cannot be well understood by people interested in reading the paper, so it needs to be improved and resubmitted. """
518,"""MGP-AttTCN: An Interpretable Machine Learning Model for the Prediction of Sepsis""","['time series analysis', 'interpretability', 'Gaussian Processes', 'attention neural networks']","""With a mortality rate of 5.4 million lives worldwide every year and a healthcare cost of more than 16 billion dollars in the USA alone, sepsis is one of the leading causes of hospital mortality and an increasing concern in the ageing western world. Recently, medical and technological advances have helped re-define the illness criteria of this disease, which is otherwise poorly understood by the medical society. Together with the rise of widely accessible Electronic Health Records, the advances in data mining and complex nonlinear algorithms are a promising avenue for the early detection of sepsis. This work contributes to the research effort in the field of automated sepsis detection with an open-access labelling of the medical MIMIC-III data set. Moreover, we propose MGP-AttTCN: a joint multitask Gaussian Process and attention-based deep learning model to early predict the occurrence of sepsis in an interpretable manner. We show that our model outperforms the current state-of-the-art and present evidence that different labelling heuristics lead to discrepancies in task difficulty.""","""The problem of introducing interpretability into sepsis prediction frameworks is one that I find a very important contribution, and I personally like the ideas presented in this paper. However, there are two reviewers, who have experience at the boundary of ML and HC, who are flagging this paper as currently not focusing on the technical novelty, and explaining the HC application enough to be appreciated by the ICLR audience. As such my recommendation is to edit the exposition so that it more appropriate for a general ML audience, or to submit it to an ML for HC meeting. Great work, and I hope it finds the right audience/focus soon. """
519,"""Filter redistribution templates for iteration-lessconvolutional model reduction""","['Model reduction', 'Pruning', 'filter distribution']","""Automatic neural network discovery methods face an enormous challenge caused for the size of the search space. A common practice is to split this space at different levels and to explore only a part of it. Neural architecture search methods look for how to combine a subset of layers, which are the most promising, to create an architecture while keeping a predefined number of filters in each layer. On the other hand, pruning techniques take a well known architecture and look for the appropriate number of filters per layer. In both cases the exploration is made iteratively, training models several times during the search. Inspired by the advantages of the two previous approaches, we proposed a fast option to find models with improved characteristics. We apply a small set of templates, which are considered promising, for make a redistribution of the number of filters in an already existing neural network. When compared to the initial base models, we found that the resulting architectures, trained from scratch, surpass the original accuracy even after been reduced to fit the same amount of resources.""","""This paper examines how different distributions of the layer-wise number of CNN filters, as partitioned into a set of fixed templates, impacts the performance of various baseline deep architectures. Testing is conducting from the viewpoint of balancing accuracy with various resource metrics such as number of parameters, memory footprint, etc. In the end, reviewer scores were partitioned as two accepts and two rejects. However, the actual comments indicate that both nominal accept reviewers expressed borderline opinions regarding this work (e.g., one preferred a score of 4 or 5 if available, while the other explicitly stated that the paper was borderline acceptance-worthy). Consequently in aggregate there was no strong support for acceptance and non-dismissable sentiment towards rejection. For example, consistent with reviewer comments, a primary concern with this paper is that the novelty and technical contribution is rather limited, and hence, to warrant acceptance the empirical component should be especially compelling. However, all the experiments are limited to cifar10/cifar100 data, with the exception of a couple extra tests on tiny ImageNet added after the rebuttal. But these latter experiments are not so convincing since the base architecture has the best accuracy on VGG, and only on a single MobileNet test do we actually see clear-cut improvement. Moreover, these new results appear to be based on just a single trial per data set (this important detail is unclear), and judging from Figure 2 of the revision, MobileNet results on cifar data can have very high variance blurring the distinction between methods. It is therefore hard to draw firm conclusions at this point, and these two additional tiny ImageNet tests notwithstanding, we don't really know how to differentiate phenomena that are intrinsic to cifar data from other potentially relevant factors. Overall then, my view is that far more testing with different data types is warranted to strengthen the conclusions of this paper and compensate for the modest technical contribution. 
Note also that training with all of these different filter templates is likely no less computationally expensive than some state-of-the-art pruning or related compression methods, and therefore it would be worth comparing head-to-head with such approaches. This is especially true given that in many scenarios, test-time computational resources are more critical than marginal differences in training time, etc."""
520,"""Scalable and Order-robust Continual Learning with Additive Parameter Decomposition""","['Continual Learning', 'Lifelong Learning', 'Catastrophic Forgetting', 'Deep Learning']","""While recent continual learning methods largely alleviate the catastrophic problem on toy-sized datasets, there are issues that remain to be tackled in order to apply them to real-world problem domains. First, a continual learning model should effectively handle catastrophic forgetting and be efficient to train even with a large number of tasks. Secondly, it needs to tackle the problem of order-sensitivity, where the performance of the tasks largely varies based on the order of the task arrival sequence, as it may cause serious problems where fairness plays a critical role (e.g. medical diagnosis). To tackle these practical challenges, we propose a novel continual learning method that is scalable as well as order-robust, which instead of learning a completely shared set of weights, represents the parameters for each task as a sum of task-shared and sparse task-adaptive parameters. With our Additive Parameter Decomposition (APD), the task-adaptive parameters for earlier tasks remain mostly unaffected, where we update them only to reflect the changes made to the task-shared parameters. This decomposition of parameters effectively prevents catastrophic forgetting and order-sensitivity, while being computation- and memory-efficient. Further, we can achieve even better scalability with APD using hierarchical knowledge consolidation, which clusters the task-adaptive parameters to obtain hierarchically shared parameters. We validate our network with APD, APD-Net, on multiple benchmark datasets against state-of-the-art continual learning methods, which it largely outperforms in accuracy, scalability, and order-robustness.""","""The submission addresses the problem of continual learning with large numbers of tasks and variable task ordering and proposes a parameter decomposition approach such that part of the parameters are task-adaptive and some are task-shared. The validation is on omniglot and other benchmarks. The reviews were mixed on this paper, but most reviewers were favorably impressed with the problem setup, the scalability of the method, and the results. The baselines were limited but acceptable. The recommendation is to accept this paper, but the authors are advised to address all the points in the reviews in their final revision."""
521,"""A Generative Model for Molecular Distance Geometry""","['graph neural networks', 'variational autoencoders', 'distance geometry', 'molecular conformation']","""Computing equilibrium states for many-body systems, such as molecules, is a long-standing challenge. In the absence of methods for generating statistically independent samples, great computational effort is invested in simulating these systems using, for example, Markov chain Monte Carlo. We present a probabilistic model that generates such samples for molecules from their graph representations. Our model learns a low-dimensional manifold that preserves the geometry of local atomic neighborhoods through a principled learning representation that is based on Euclidean distance geometry. We create a new dataset for molecular conformation generation with which we show experimentally that our generative model achieves state-of-the-art accuracy. Finally, we show how to use our model as a proposal distribution in an importance sampling scheme to compute molecular properties.""","""The paper presents a solution to generating molecule with three dimensional structure by learning a low-dimensional manifold that preserves the geometry of local atomic neighborhoods based on Euclidean distance geometry. The application is interesting and the proposed solution is reasonable. The authors did a good job at addressing most concerns raised in the reviews and updating the draft. Two main concerns were left unresolved: one is the lack of novelty in the proposed model, and the other is that some arguments in the paper are not fully supported. The paper could benefit from one more round of revision before being ready for publication. """
522,"""Imagining the Latent Space of a Variational Auto-Encoders""","['VAE', 'GAN']",""" Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset. As a consequence the information stored in the latent space is seldom sufficient to reconstruct a particular image. To help understand the type of information stored in the latent space we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space. This allows us to ''imagine'' the information captured in the latent space. We argue that this is necessary to make a VAE into a truly generative model. We use our GAN to visualise the latent space of a standard VAE and of a pseudo-formula -VAE.""","""The paper proposes a new method for improving generative properties of VAE model. The reviewers unanimously agree that this paper is not ready to be published, particularly being concerned about the unclear objective and potentially misleading claims of the paper. Multiple reviewers pointed out about incorrect claims and statements without theoretical or empirical justification. The reviewers also mention that the paper does not provide new insights about VAE model as MDL interpretation of VAE it is not new."""
523,"""Influence-Based Multi-Agent Exploration""","['Multi-agent reinforcement learning', 'Exploration']","""Intrinsically motivated reinforcement learning aims to address the exploration challenge for sparse-reward tasks. However, the study of exploration methods in transition-dependent multi-agent settings is largely absent from the literature. We aim to take a step towards solving this problem. We present two exploration methods: exploration via information-theoretic influence (EITI) and exploration via decision-theoretic influence (EDTI), by exploiting the role of interaction in coordinated behaviors of agents. EITI uses mutual information to capture the interdependence between the transition dynamics of agents. EDTI uses a novel intrinsic reward, called Value of Interaction (VoI), to characterize and quantify the influence of one agent's behavior on expected returns of other agents. By optimizing EITI or EDTI objective as a regularizer, agents are encouraged to coordinate their exploration and learn policies to optimize the team performance. We show how to optimize these regularizers so that they can be easily integrated with policy gradient reinforcement learning. The resulting update rule draws a connection between coordinated exploration and intrinsic reward distribution. Finally, we empirically demonstrate the significant strength of our methods in a variety of multi-agent scenarios.""","""The paper presents a new take on exploration in multi-agent reinforcement learning settings, and presents two approaches, one motivated by information theoretic, the other by decision theoretic influence on other agents. Reviewers consider the proposed approach ""pretty elegant, and in a sense seem fundamental"", the experimental section ""thorough"", and expect the work to ""encourage future work to explore more problems in this area"". Several questions were raised, especially regarding related work, comparison to single agent exploration approaches, and several clarifying questions. These were largely addressed by the authors, resulting in a strong submission with valuable contributions."""
524,"""Tensorized Embedding Layers for Efficient Model Compression""","['Embedding layers compression', 'tensor networks', 'low-rank factorization']","""The embedding layers transforming input words into real vectors are the key components of deep neural networks used in natural language processing. However, when the vocabulary is large, the corresponding weight matrices can be enormous, which precludes their deployment in a limited resource setting. We introduce a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance. We evaluate our method on a wide range of benchmarks in natural language processing and analyze the trade-off between performance and compression ratios for a wide range of architectures, from MLPs to LSTMs and Transformers.""","""This paper has been reviewed by three reviewers and received scores: 6/3/8. While two reviewers were reasonably positive, they also did not provide a very compelling reviews (e.g. one rev. just reiterated the rationale behind tensor model compression and the other admitted the paper is of limited novelty). Perhaps the shortest review (and perhaps the most telling) prompts authors to the fact that the model compression with tensor decompositions is quite common in the literature these days. One example could be T-Net: Parametrizing Fully Convolutional Nets with a Single High-Order Tensor by Kossaifi et al. Very likely the authors will find many more recent developments on model compression with/without tensor decomp. For a good paper in this topic, authors should carefully consider various tensor factorizations (Tucker, TT, tensor rings, t-product and many more) and consider theoretical contributions and guarantees. Taking into account all pros and cons, this submissions falls marginally short of the ICLR 2020 threshold but the authors are encouraged to work on further developments."""
525,"""SGD with Hardness Weighted Sampling for Distributionally Robust Deep Learning""","['distributionally robust optimization', 'distributionally robust deep learning', 'over-parameterized deep neural networks', 'deep neural networks', 'AI safety', 'hard example mining']","""Distributionally Robust Optimization (DRO) has been proposed as an alternative to Empirical Risk Minimization (ERM) in order to account for potential biases in the training data distribution. However, its use in deep learning has been severely restricted due to the relative inefficiency of the optimizers available for DRO compared to the wide-spread Stochastic Gradient Descent (SGD) based optimizers for deep learning with ERM. In this work, we demonstrate that SGD with hardness weighted sampling is a principled and efficient optimization method for DRO in machine learning and is particularly suited in the context of deep learning. Similar to a hard example mining strategy in essence and in practice, the proposed algorithm is straightforward to implement and computationally as efficient as SGD-based optimizers used for deep learning. It only requires adding a softmax layer and maintaining an history of the loss values for each training example to compute adaptive sampling probabilities. In contrast to typical ad hoc hard mining approaches, and exploiting recent theoretical results in deep learning optimization, we prove the convergence of our DRO algorithm for over-parameterized deep learning networks with ReLU activation and finite number of layers and parameters. Preliminary results demonstrate the feasibility and usefulness of our approach.""","""This paper proposes a modification of SGD to do distributionally-robust optimization of deep networks. The main idea is sensible enough, however, the inadequate handling of baselines and relatively toy nature of the experiments means that this paper needs more work to be accepted."""
526,"""To Relieve Your Headache of Training an MRF, Take AdVIL""","['Markov Random Fields', 'Undirected Graphical Models', 'Variational Inference', 'Black-box Infernece']","""We propose a black-box algorithm called {\it Adversarial Variational Inference and Learning} (AdVIL) to perform inference and learning on a general Markov random field (MRF). AdVIL employs two variational distributions to approximately infer the latent variables and estimate the partition function of an MRF, respectively. The two variational distributions provide an estimate of the negative log-likelihood of the MRF as a minimax optimization problem, which is solved by stochastic gradient descent. AdVIL is proven convergent under certain conditions. On one hand, compared with contrastive divergence, AdVIL requires a minimal assumption about the model structure and can deal with a broader family of MRFs. On the other hand, compared with existing black-box methods, AdVIL provides a tighter estimate of the log partition function and achieves much better empirical results. ""","""The paper proposes a black box algorithm for MRF training, utilizing a novel approach based on variational approximations of both the positive and negative phase terms of the log likelihood gradient (as R2 puts it, ""a fairly creative combination of existing approaches""). Several technical and rhetorical points were raised by the reviewers, most of which seem to have been satisfactorily addressed, but all reviewers agreed that this was a good direction. The main weakness of the work is that the empirical work is very small scale, mainly due to the bottleneck imposed by an inner loop optimization of the variational distribution q(v, h). I believe it's important to note that most truly large scale results in the literature revolve around purely feedforward models that don't require expensive to compute approximations; that said, MNIST experiments would have been nice. Nevertheless, this work seems like a promising step on a difficult problem, and it seems that the ideas herein are worth disseminating, hopefully stimulating future work on rendering this procedure less expensive and more scalable."""
527,"""Lazy-CFR: fast and near-optimal regret minimization for extensive games with imperfect information""",[],"""Counterfactual regret minimization (CFR) methods are effective for solving two-player zero-sum extensive games with imperfect information with state-of-the-art results. However, the vanilla CFR has to traverse the whole game tree in each round, which is time-consuming in large-scale games. In this paper, we present Lazy-CFR, a CFR algorithm that adopts a lazy update strategy to avoid traversing the whole game tree in each round. We prove that the regret of Lazy-CFR is almost the same to the regret of the vanilla CFR and only needs to visit a small portion of the game tree. Thus, Lazy-CFR is provably faster than CFR. Empirical results consistently show that Lazy-CFR is significantly faster than the vanilla CFR.""","""The paper proposed an regret based approach to speed up counterfactural regret minimization. The reviewers find the proposed approach interesting. However, the method require large memory. More experimental comparisons and comparisons pointed out by reviewers and public comments will help improve the paper. """
528,"""Estimating Gradients for Discrete Random Variables by Sampling without Replacement""","['gradient', 'estimator', 'discrete', 'categorical', 'sampling', 'without replacement', 'reinforce', 'baseline', 'variance', 'gumbel', 'vae', 'structured prediction']","""We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance as it avoids duplicate samples. We show that our estimator can be derived as the Rao-Blackwellization of three different estimators. Combining our estimator with REINFORCE, we obtain a policy gradient estimator and we reduce its variance using a built-in control variate which is obtained without additional model evaluations. The resulting estimator is closely related to other gradient estimators. Experiments with a toy problem, a categorical Variational Auto-Encoder and a structured prediction problem show that our estimator is the only estimator that is consistently among the best estimators in both high and low entropy settings.""","""The authors derive a novel, unbiased gradient estimator for discrete random variables based on sampling without replacement. They relate their estimator to existing multi-sample estimators and motivate why we would expect reduced variance. Finally, they evaluate their estimator across several tasks and show that is performs well in all of them. The reviewers agree that the revised paper is well-written and well-executed. There was some concern about that effectiveness of the estimator, however, the authors clarified that ""it is the only estimator that performs well across different settings (high and low entropy). Therefore it is more robust and a strict improvement to any of these estimators which only have good performance in either high or low entropy settings."" Reviewer 2 was still not convinced about the strength of the analysis of the estimator, and this is indeed quantifying the variance reduction theoretically would be an improvement. Overall, the paper is a nice addition to the set of tools for computing gradients of expectations of discrete random variables. I recommend acceptance. """
529,"""GQ-Net: Training Quantization-Friendly Deep Networks""","['Network quantization', 'Efficient deep learning']","""Network quantization is a model compression and acceleration technique that has become essential to neural network deployment. Most quantization methods per- form fine-tuning on a pretrained network, but this sometimes results in a large loss in accuracy compared to the original network. We introduce a new technique to train quantization-friendly networks, which can be directly converted to an accurate quantized network without the need for additional fine-tuning. Our technique allows quantizing the weights and activations of all network layers down to 4 bits, achieving high efficiency and facilitating deployment in practical settings. Com- pared to other fully quantized networks operating at 4 bits, we show substantial improvements in accuracy, for example 66.68% top-1 accuracy on ImageNet using ResNet-18, compared to the previous state-of-the-art accuracy of 61.52% Louizos et al. (2019) and a full precision reference accuracy of 69.76%. We performed a thorough set of experiments to test the efficacy of our method and also conducted ablation studies on different aspects of the method and techniques to improve training stability and accuracy. Our codebase and trained models are available on GitHub.""","""The paper propose a new quantization-friendly network training algorithm called GQ (or DQ) net. The paper is well-written, and the proposed idea is interesting. Empirical results are also good. However, the major performance improvement comes from the combination of different incremental improvements. Some of these additional steps do seem orthogonal to the proposed idea. Also, it is not clear how robust the method is to the various hyperparameters / schedules. For example, it seems that some of the suggested training options are conflicting each other. More in-depth discussions and analysis on the setting of the regularization parameter and schedule for the loss term blending parameters will be useful."""
530,"""Gradient-Based Neural DAG Learning""","['Structure Learning', 'Causality', 'Density estimation']","""We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows to model complex interactions while avoiding the combinatorial nature of the problem. In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods. On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks while being competitive with existing greedy search methods on important metrics for causal inference.""","""In this paper, the authors propose a novel approach for learning the structure of a directed acyclic graph from observational data that allows to flexibly model nonlinear relationships between variables using neural networks. While the reviewers initially had concerns with respect to the positioning of the paper and various questions regarding theoretical results and experiments, these concerns have been addressed satisfactorily during the discussion period. The paper is now acceptable for publication in ICLR-2020. """
531,"""Differentiable learning of numerical rules in knowledge graphs""","['knowledge graphs', 'rule learning', 'differentiable neural logic']","""Rules over a knowledge graph (KG) capture interpretable patterns in data and can be used for KG cleaning and completion. Inspired by the TensorLog differentiable logic framework, which compiles rule inference into a sequence of differentiable operations, recently a method called Neural LP has been proposed for learning the parameters as well as the structure of rules. However, it is limited with respect to the treatment of numerical features like age, weight or scientific measurements. We address this limitation by extending Neural LP to learn rules with numerical values, e.g., People younger than 18 typically live with their parents. We demonstrate how dynamic programming and cumulative sum operations can be exploited to ensure efficiency of such extension. Our novel approach allows us to extract more expressive rules with aggregates, which are of higher quality and yield more accurate predictions compared to rules learned by the state-of-the-art methods, as shown by our experiments on synthetic and real-world datasets.""","""This paper presents a number of improvements on existing approaches to neural logic programming. The reviews are generally positive: two weak accepts, one weak reject. Reviewer 2 seems wholly in favour of acceptance at the end of discussion, and did not clarify why they were sticking to their score of weak accept. The main reason Reviewer 1 sticks to 6 rather than 8 is that the work extends existing work rather than offering a ""fundamental contribution"", but otherwise is very positive. I personally feel that a) most work extends existing work b) there is room in our conferences for such well executed extensions (standing on the shoulders of giants etc). Reviewer 3 is somewhat unconvinced by the nature of the evaluation. While I understand their reservations, they state that they would not be offended by the paper being accepted in spite of their reservations. Overall, I find that the review group leans more in favour of acceptance, and an happy to recommend acceptance for the paper as it makes progress in an interesting area at the intersection of differentiable programming and logic-based programming."""
532,"""Learning DNA folding patterns with Recurrent Neural Networks ""","['Machine Learning', 'Recurrent Neural Networks', '3D chromatin structure', 'topologically associating domains', 'computational biology.']",""" The recent expansion of machine learning applications to molecular biology proved to have a significant contribution to our understanding of biological systems, and genome functioning in particular. Technological advances enabled the collection of large epigenetic datasets, including information about various DNA binding factors (ChIP-Seq) and DNA spatial structure (Hi-C). Several studies have confirmed the correlation between DNA binding factors and Topologically Associating Domains (TADs) in DNA structure. However, the information about physical proximity represented by genomic coordinate was not yet used for the improvement of the prediction models. In this research, we focus on Machine Learning methods for prediction of folding patterns of DNA in a classical model organism Drosophila melanogaster. The paper considers linear models with four types of regularization, Gradient Boosting and Recurrent Neural Networks for the prediction of chromatin folding patterns from epigenetic marks. The bidirectional LSTM RNN model outperformed all the models and gained the best prediction scores. This demonstrates the utilization of complex models and the importance of memory of sequential DNA states for the chromatin folding. We identify informative epigenetic features that lead to the further conclusion of their biological significance.""","""The authors consider the problem of predicting DNA folding patterns. They use a range of simple, linear models and find that a bi-LSTM architecture yielded best performance. This paper is below acceptance. Reviewers pointed out strong similarity to previously published work. Furthermore the manuscript lacked in clarity, leaving uncertain eg details about experimental details. """
533,"""VIMPNN: A physics informed neural network for estimating potential energies of out-of-equilibrium systems""","['neural network', 'chemical energy estimation', 'density functional theory']","""Simulation of molecular and crystal systems enables insight into interesting chemical properties that benefit processes ranging from drug discovery to material synthesis. However these simulations can be computationally expensive and time consuming despite the approximations through Density Functional Theory (DFT). We propose the Valence Interaction Message Passing Neural Network (VIMPNN) to approximate DFT's ground-state energy calculations. VIMPNN integrates physics prior knowledge such as the existence of different interatomic bounds to estimate more accurate energies. Furthermore, while many previous machine learning methods consider only stable systems, our proposed method is demonstrated on unstable systems at different atomic distances. VIMPNN predictions can be used to determine the stable configurations of systems, i.e. stable distance for atoms -- a necessary step for the future simulation of crystal growth for example. Our method is extensively evaluated on a augmented version of the QM9 dataset that includes unstable molecules, as well as a new dataset of infinite- and finite-size crystals, and is compared with the Message Passing Neural Network (MPNN). VIMPNN has comparable accuracy with DFT, while allowing for 5 orders of magnitude in computational speed up compared to DFT simulations, and produces more accurate and informative potential energy curves than MPNN for estimating stable configurations.""","""The paper considers the problem of estimating the electronic structure's ground state energy of a given atomic system by means of supervised machine learning, as a fast alternative to conventional explicit methods (DFT). For this purpose, it modifies the neural message-passing architecture to account for further physical properties, and it extends the empirical validation to also include unstable molecules. Reviewers acknowledged the valuable experimental setup of this work and the significance of the results in the application domain, but were generally skeptical about the novelty of the machine learning model under study. Ultimately, and given that the main focus of this conference is on Machine Learning methodology, this AC believes this work could be more suitable in a more specialized venue in computational/quantum chemistry. """
534,"""Deep Reasoning Networks: Thinking Fast and Slow, for Pattern De-mixing""","['Deep Reasoning Network', 'Pattern De-mixing']","""We introduce Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with reasoning for solving pattern de-mixing problems, typically in an unsupervised or weakly-supervised setting. DRNets exploit problem structure and prior knowledge by tightly combining logic and constraint reasoning with stochastic-gradient-based neural network optimization. We illustrate the power of DRNets on de-mixing overlapping hand-written Sudokus (Multi-MNIST-Sudoku) and on a substantially more complex task in scientific discovery that concerns inferring crystal structures of materials from X-ray diffraction data (Crystal-Structure-Phase-Mapping). DRNets significantly outperform the state of the art and experts' capabilities on Crystal-Structure-Phase-Mapping, recovering more precise and physically meaningful crystal structures. On Multi-MNIST-Sudoku, DRNets perfectly recovered the mixed Sudokus' digits, with 100% digit accuracy, outperforming the supervised state-of-the-art MNIST de-mixing models.""","""The paper received mixed reviews of WR (R1), WR (R2) and WA (R3). AC has carefully read all the reviews/rebuttal/comments and examined the paper. AC agrees with R1 and R2's concerns, specifically around overclaiming around reasoning. Also AC was unnerved, as was R2 and R3, by the notion of continuing to train on the test set (and found the rebuttal unconvincing on this point). Overall, the AC feels this paper cannot be accepted. The authors should remove the unsupported/overly bold claims in their paper and incorporate the constructive suggestions from the reviewers in a revised version of the paper."""
535,"""Neural Video Encoding""","['Kolmogorov complexity', 'differentiable programming', 'convolutional neural networks']","""Deep neural networks have had unprecedented success in computer vision, natural language processing, and speech largely due to the ability to search for suitable task algorithms via differentiable programming. In this paper, we borrow ideas from Kolmogorov complexity theory and normalizing flows to explore the possibilities of finding arbitrary algorithms that represent data. In particular, algorithms which encode sequences of video image frames. Ultimately, we demonstrate neural video encoded using convolutional neural networks to transform autoregressive noise processes and show that this method has surprising cryptographic analogs for information security.""","""The paper has several clarity and novelty issues."""
536,"""Wasserstein-Bounded Generative Adversarial Networks""","['GAN', 'WGAN', 'GENERATIVE ADVERSARIAL NETWORKS']","""In the field of Generative Adversarial Networks (GANs), how to design a stable training strategy remains an open problem. Wasserstein GANs have largely promoted the stability over the original GANs by introducing Wasserstein distance, but still remain unstable and are prone to a variety of failure modes. In this paper, we present a general framework named Wasserstein-Bounded GAN (WBGAN), which improves a large family of WGAN-based approaches by simply adding an upper-bound constraint to the Wasserstein term. Furthermore, we show that WBGAN can reasonably measure the difference of distributions which almost have no intersection. Experiments demonstrate that WBGAN can stabilize as well as accelerate convergence in the training processes of a series of WGAN-based variants.""","""The paper presents a framework named Wasserstein-bounded GANs which generalizes WGAN. The paper shows that WBGAN can improve stability. The reviewers raised several questions about the method and the experiments, but these were not addressed. I encourage the authors to revise the draft and resubmit to a different venue."""
537,"""Generative Cleaning Networks with Quantized Nonlinear Transform for Deep Neural Network Defense""","['Adversarial Defense', 'Adversarial Attack']","""Effective defense of deep neural networks against adversarial attacks remains a challenging problem, especially under white-box attacks. In this paper, we develop a new generative cleaning network with quantized nonlinear transform for effective defense of deep neural networks. The generative cleaning network, equipped with a trainable quantized nonlinear transform block, is able to destroy the sophisticated noise pattern of adversarial attacks and recover the original image content. The generative cleaning network and attack detector network are jointly trained using adversarial learning to minimize both perceptual loss and adversarial loss. Our extensive experimental results demonstrate that our approach outperforms the state-of-art methods by large margins in both white-box and black-box attacks. For example, it improves the classification accuracy for white-box attacks upon the second best method by more than 40\% on the SVHN dataset and more than 20\% on the challenging CIFAR-10 dataset. ""","""This paper presents a method to defend neural networks from adversarial attack. The proposed generative cleaning network has a trainable quantization module which is claimed to be able to eliminate adversarial noise and recover the original image. After the intensive interaction with authors and discussion, one expert reviewer (R3) admitted that the experimental procedure basically makes sense and increased the score to Weak Reject. Yet, R3 is still not satisfied with some details such as the number of BPDA iterations, and more importantly, concludes that the meaningful numbers reported in the paper show only small gains, making the claim of the paper less convincing. As authors seem to have less interest in providing theoretical analysis and support, this issue is critical for decision, and there was no objection from other reviewers. After carefully reading the paper myself, I decided to support the opinion and therefore would like to recommend rejection. """
538,"""Convolutional Conditional Neural Processes""","['Neural Processes', 'Deep Sets', 'Translation Equivariance']","""We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data. Translation equivariance is an important inductive bias for many learning problems including time series modelling, spatial data, and images. The model embeds data sets into an infinite-dimensional function space, as opposed to finite-dimensional vector spaces. To formalize this notion, we extend the theory of neural representations of sets to include functional representations, and demonstrate that any translation-equivariant embedding can be represented using a convolutional deep-set. We evaluate ConvCNPs in several settings, demonstrating that they achieve state-of-the-art performance compared to existing NPs. We demonstrate that building in translation equivariance enables zero-shot generalization to challenging, out-of-domain tasks.""","""This paper presents Convolutional Conditional Neural Process (ConvCNP), a new member of the neural process family that models translation equivariance. Current models must learn translation equivariance from the data, and the authors show that ConvCNP can learn this as part of the model, which is much more generalisable and efficient. They evaluate the ConvCNP on several benchmarks, including an astronomical time-series modelling experiment, a sim2real experiment, and several image completion experiments and show excellent results. The authors wrote extensive responses the the reviewers, uploading a revised version of the paper, and there was some further discussion. This is a strong paper worthy of inclusion in ICLR and could have a large impact on many fields in ML/AI. """
539,"""Neural tangent kernels, transportation mappings, and universal approximation""","['Neural Tangent Kernel', 'universal approximation', 'Barron', 'transport mapping']","""This paper establishes rates of universal approximation for the shallow neural tangent kernel (NTK): network weights are only allowed microscopic changes from random initialization, which entails that activations are mostly unchanged, and the network is nearly equivalent to its linearization. Concretely, the paper has two main contributions: a generic scheme to approximate functions with the NTK by sampling from transport mappings between the initial weights and their desired values, and the construction of transport mappings via Fourier transforms. Regarding the first contribution, the proof scheme provides another perspective on how the NTK regime arises from rescaling: redundancy in the weights due to resampling allows individual weights to be scaled down. Regarding the second contribution, the most notable transport mapping asserts that roughly / \delta^{10d} nodes are sufficient to approximate continuous functions, where pseudo-formula depends on the continuity properties of the target function. By contrast, nearly the same proof yields a bound of / \delta^{2d}$ for shallow ReLU networks; this gap suggests a tantalizing direction for future work, separating shallow ReLU networks and their linearization. ""","""The paper considers representational aspects of neural tangent kernels (NTKs). More precisely, recent literature on overparametrized neural networks has identified NTKs as a way to characterize the behavior of gradient descent on wide neural networks as fitting these types of kernels. This paper focuses on the representational aspect: namely that functions of appropriate ""complexity"" can be written as an NTK with parameters close to initialization (comparably close to what results on gradient descent get). The reviewers agree this content is of general interest to the community and with the proposed revisions there is general agreement that the paper has merits to recommend acceptance."""
540,"""Scaling Laws for the Principled Design, Initialization, and Preconditioning of ReLU Networks""","['initialization', 'mlp', 'relu']","""Abstract In this work, we describe a set of rules for the design and initialization of well-conditioned neural networks, guided by the goal of naturally balancing the diagonal blocks of the Hessian at the start of training. We show how our measure of conditioning of a block relates to another natural measure of conditioning, the ratio of weight gradients to the weights. We prove that for a ReLU-based deep multilayer perceptron, a simple initialization scheme using the geometric mean of the fan-in and fan-out satisfies our scaling rule. For more sophisticated architectures, we show how our scaling principle can be used to guide design choices to produce well-conditioned neural networks, reducing guess-work.""","""This paper proposes a new design space for initialization of neural networks motivated by balancing the singular values of the Hessian. Reviewers found the problem well motivated and agreed that the proposed method has merit, however more rigorous experiments are required to demonstrate that the ideas in this work are significant progress over current known techniques. As noted by Reviewer 2, there has been substantial prior work on initialization and conditioning that needs to be discussed as they relate to the proposed method. The AC notes two additional, closely related initialization schemes that should be discussed [1,2]. Comparing with stronger baselines on more recent modern architectures would improve this work significantly. [1]: pseudo-url [2]: pseudo-url."""
541,"""Coresets for Accelerating Incremental Gradient Methods""",[],"""Many machine learning problems reduce to the problem of minimizing an expected risk. Incremental gradient (IG) methods, such as stochastic gradient descent and its variants, have been successfully used to train the largest of machine learning models. IG methods, however, are in general slow to converge and sensitive to stepsize choices. Therefore, much work has focused on speeding them up by reducing the variance of the estimated gradient or choosing better stepsizes. An alternative strategy would be to select a carefully chosen subset of training data, train only on that subset, and hence speed up optimization. However, it remains an open question how to achieve this, both theoretically as well as practically, while not compromising on the quality of the final model. Here we develop CRAIG, a method for selecting a weighted subset (or coreset) of training data in order to speed up IG methods. We prove that by greedily selecting a subset S of training data that minimizes the upper-bound on the estimation error of the full gradient, running IG on this subset will converge to the (near)optimal solution in the same number of epochs as running IG on the full data. But because at each epoch the gradients are computed only on the subset S, we obtain a speedup that is inversely proportional to the size of S. Our subset selection algorithm is fully general and can be applied to most IG methods. We further demonstrate practical effectiveness of our algorithm, CRAIG, through an extensive set of experiments on several applications, including logistic regression and deep neural networks. Experiments show that CRAIG, while achieving practically the same loss, speeds up IG methods by up to 10x for convex and 3x for non-convex (deep learning) problems.""","""This paper investigates the practical and theoretical consequences of speeding up training using incremental gradient methods (such as stochastic descent) by calculating the gradients with respect to a specifically chosen sparse subset of data. The reviewers were quite split on the paper. On the one hand, there was a general excitement about the direction of the paper. The idea of speeding up gradient descent is of course hugely relevant to the current machine learning landscape. The approach was also considered novel, and the paper well-written. However, the reviewers also pointed out multiple shortcomings. The experimental section was deemed to lack clarity and baselines. The results on standard dataset were very different from expected, causing worry about the reliability, although this has partially been addressed in additional experiments. The applicability to deep learning and large dataset, as well as the significance of time saved by using this method, were other worries. Unfortunately, I have to agree with the majority of the reviewers that the idea is fascinating, but that more work is required for acceptance to ICLR. """
542,"""Deep Generative Classifier for Out-of-distribution Sample Detection""","['Out-of-distribution Detection', 'Generative Classifier', 'Deep Neural Networks', 'Multi-class Classification', 'Gaussian Discriminant Analysis']","""The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution always does not match with the training distribution in most real-world applications. In this work, we propose a deep generative classifier which is effective to detect out-of-distribution samples as well as classify in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks. Unlike the discriminative (or softmax) classifier that only focuses on the decision boundary partitioning its latent space into multiple regions, our generative classifier aims to explicitly model class-conditional distributions as separable Gaussian distributions. Thereby, we can define the confidence score by the distance between a test sample and the center of each distribution. Our empirical evaluation on multi-class images and tabular data demonstrate that the generative classifier achieves the best performances in distinguishing out-of-distribution samples, and also it can be generalized well for various types of deep neural networks.""","""The paper presents a training method for deep neural networks to detect out-of-distribution samples under perspective of Gaussian discriminant analysis. Reviewers and AC agree that some idea is given in the previous work (although it does not focus on training), and additional ideas in the paper are not super novel. Furthermore, experimental results are weak, e.g., comparison with other deep generative classifiers are desirable, as the paper focuses on training such deep models. Hence, I recommend rejection."""
543,"""Reject Illegal Inputs: Scaling Generative Classifiers with Supervised Deep Infomax""","['generative classifiers', 'selective classification', 'classification with rejection']","""Deep Infomax~(DIM) is an unsupervised representation learning framework by maximizing the mutual information between the inputs and the outputs of an encoder, while probabilistic constraints are imposed on the outputs. In this paper, we propose Supervised Deep InfoMax~(SDIM), which introduces supervised probabilistic constraints to the encoder outputs. The supervised probabilistic constraints are equivalent to a generative classifier on high-level data representations, where class conditional log-likelihoods of samples can be evaluated. Unlike other works building generative classifiers with conditional generative models, SDIMs scale on complex datasets, and can achieve comparable performance with discriminative counterparts. With SDIM, we could perform \emph{classification with rejection}. Instead of always reporting a class label, SDIM only makes predictions when test samples' largest logits surpass some pre-chosen thresholds, otherwise they will be deemed as out of the data distributions, and be rejected. Our experiments show that SDIM with rejection policy can effectively reject illegal inputs including out-of-distribution samples and adversarial examples.""","""This paper combines a well-known, recently proposed unsupervised representation learning technique technique with a class-conditional negative log likelihood and a squared hinge loss on the class-wise conditional likelihoods, and proposes to use the resulting conditional density model for generative classification. The empirical work appears to validate the claim that their method leads to good out of distribution detection, and better performance using a rejection option. The adversarial defense results are less clear. Reporting raw logits is a strange choice, and difficult to interpret; the table is also difficult to read, and this method of reporting makes it difficult to compare against existing methods. The reviewers generally remarked on presentation issues. R1 asked about the contribution of various loss terms, a matter I feel is underexplored in this work, and the authors mainly replied with a qualitative description of loss behaviour in the joint system, which I don't believe was the question. R1 also asked about the choice of thresholds and the issues of fairness of comparison regarding model capacity, neither of which seemed adequately addressed. R3 remarked on the clarity being lacking, and also that ""Generative modeling of representations is novel, afaik."" (It is not; see, for example, the VQ-VAE line of work where PixelCNN priors are fit on top of representations, and layer-wise pre-training works of the mid 2000s, where generative models were frequently fit on greedily trained feature representations, sometimes in conjunction with a joint generative model of class labels). R2's review was very brief, and with a self-reported low confidence, but their concerns were addressed in a subsequent update. There are three weaknesses which are my grounds for recommending rejection. First, this paper does a poor job of situating itself in the wider body of literature on classification with rejection, which dates to at least the 1970s (see Bartlett & Wengkamp, 2006 and the references therein). 
Second, the empirical work makes little comparison to other methods in the literature; baselines on clean data are self-generated, and the paper compares to no other adversarial defense proposals. In a minor drawback, ImageNet results are also missing; given that one of the purported advantages of the method is scalability, a large scale benchmark would have strengthened this claim. Third, no ablation study is undertaken that might give us insight into the role of each term of the loss. Given that this is a straightforward combination of well-understood techniques, a fully empirical paper ought to deliver more insight into the combination than this manuscript has."""
544,"""Defense against Adversarial Examples by Encoder-Assisted Search in the Latent Coding Space""","['Adversarial Defense', 'Auto-encoder', 'Adversarial Attack', 'GAN']","""Deep neural networks were shown to be vulnerable to crafted adversarial perturbations, and thus bring serious safety problems. To solve this problem, we proposed pseudo-formula , a framework for purifying input images by searching a closest natural reconstruction with little computation. We first build a reconstruction network AE-GAN, which adapted auto-encoder by introducing adversarial loss to the objective function. In this way, we can enhance the generative ability of decoder and preserve the abstraction ability of encoder to form a self-organized latent space. In the inference time, when given an input, we will start a search process in the latent space which aims to find the closest reconstruction to the given image on the distribution of normal data. The encoder can provide a good start point for the searching process, which saves much computation cost. Experiments show that our method is robust against various attacks and can reach comparable even better performance to similar methods with much fewer computations.""","""The paper proposes a defense for adversarial attacks based on autoencoders that tries to find the closest point to the natural image in the output span of the decoder and ""purify"" the adversarial example. There were concerns about the work being too incremental over DefenseGAN and about empirical evaluation of the defense. It is crucial to test the defense methods against best available attacks to establish the effectiveness. Authors should also discuss and consider evaluating their method against the attack proposed in pseudo-url that claims to greatly reduce the defense accuracy of DefenseGAN. """
545,"""DiffTaichi: Differentiable Programming for Physical Simulation""","['Differentiable programming', 'robotics', 'optimal control', 'physical simulation', 'machine learning system']","""We present DiffTaichi, a new differentiable programming language tailored for building high-performance differentiable physical simulators. Based on an imperative programming language, DiffTaichi generates gradients of simulation steps using source code transformations that preserve arithmetic intensity and parallelism. A light-weight tape is used to record the whole simulation program structure and replay the gradient kernels in a reversed order, for end-to-end backpropagation. We demonstrate the performance and productivity of our language in gradient-based learning and optimization tasks on 10 different physical simulators. For example, a differentiable elastic object simulator written in our language is 4.2x shorter than the hand-engineered CUDA version yet runs as fast, and is 188x faster than the TensorFlow implementation. Using our differentiable programs, neural network controllers are typically optimized within only tens of iterations.""","""The paper provides a language for optimizing through physical simulations. The reviewers had a number of concerns related to paper organization and insufficient comparisons to related work (jax). During the discussion phase, the authors significantly updated their paper and ran additional experiments, leading to a much stronger paper."""
546,"""Structural Language Models for Any-Code Generation""","['Program Generation', 'Structural Language Model', 'SLM', 'Generative Model', 'Code Generation']","""We address the problem of Any-Code Generation (AnyGen) - generating code without any restriction on the vocabulary or structure. The state-of-the-art in this problem is the sequence-to-sequence (seq2seq) approach, which treats code as a sequence and does not leverage any structural information. We introduce a new approach to AnyGen that leverages the strict syntax of programming languages to model a code snippet as tree structural language modeling (SLM). SLM estimates the probability of the program's abstract syntax tree (AST) by decomposing it into a product of conditional probabilities over its nodes. We present a neural model that computes these conditional probabilities by considering all AST paths leading to a target node. Unlike previous structural techniques that have severely restricted the kinds of expressions that can be generated, our approach can generate arbitrary expressions in any programming language. Our model significantly outperforms both seq2seq and a variety of existing structured approaches in generating Java and C# code. We make our code, datasets, and models available online.""","""This paper proposes a new method for code generation based on structured language models. After viewing the paper, reviews, and author response my assessment is that I basically agree with Reviewer 4. (Now, after revision) This work seems to be (1) a bit incremental over other works such as Brockschmidt et al. (2019), and (2) a bit of a niche topic for ICLR. At the same time it has (3) good engineering effort resulting in good scores, and (4) relatively detailed conceptual comparison with other work in the area. Also, (5) the title of ""Structural Language Models for Code Generation"" is clearly over-claiming the contribution of the work -- as cited in the paper there are many language models, unconditional or conditional, that have been used in code generation in the past. In order to be accurate, the title would need to be modified to something that more accurately describes the (somewhat limited) contribution of the work. In general, I found this paper borderline. ICLR, as you know is quite competitive so while this is a reasonably good contribution, I'm not sure whether it checks the box of either high quality or high general interest to warrant acceptance. Because of this, I'm not recommending it for acceptance at this time, but definitely encourage the authors to continue to polish for submission to a different venue (perhaps a domain conference that would be more focused on the underlying task of code generation?)"""
547,"""Skew-Fit: State-Covering Self-Supervised Reinforcement Learning""","['deep reinforcement learning', 'goal space', 'goal conditioned reinforcement learning', 'self-supervised reinforcement learning', 'goal sampling', 'reinforcement learning']","""Autonomous agents that must exhibit flexible and broad capabilities will need to be equipped with large repertoires of skills. Defining each skill with a manually-designed reward function limits this repertoire and imposes a manual engineering burden. Self-supervised agents that set their own goals can automate this process, but designing appropriate goal setting objectives can be difficult, and often involves heuristic design decisions. In this paper, we propose a formal exploration objective for goal-reaching policies that maximizes state coverage. We show that this objective is equivalent to maximizing the entropy of the goal distribution together with goal reaching performance, where goals correspond to full state observations. To instantiate this principle, we present an algorithm called Skew-Fit for learning a maximum-entropy goal distributions. Skew-Fit enables self-supervised agents to autonomously choose and practice reaching diverse goals. We show that, under certain regularity conditions, our method converges to a uniform distribution over the set of valid states, even when we do not know this set beforehand. Our experiments show that it can learn a variety of manipulation tasks from images, including opening a door with a real robot, entirely from scratch and without any manually-designed reward function.""","""This paper tackles the problem of exploration in RL. In order to maximize coverage of the state space, the authors introduce an approach where the agent attempts to reach some self-set goals. The empirically show that agents using this method uniformly visit all valid states under certain conditions. They also show that these agents are able to learn behaviours without providing a manually-defined reward function. The drawback of this work is the combined lack of theoretical justification and limited (marginal) algorithmic novelty given other existing goal-directed techniques. Although they highlight the performance of the proposed approach, the current experiments do not convey a good enough understanding of why this approach works where other existing goal-directed techniques do not, which would be expected from a purely empirical paper. This dampers the contribution, hence I recommend to reject this paper."""
548,"""Differentiable Bayesian Neural Network Inference for Data Streams""","['Bayesian neural network', 'approximate predictive inference', 'data stream', 'histogram']","""While deep neural networks (NNs) do not provide the confidence of its prediction, Bayesian neural network (BNN) can estimate the uncertainty of the prediction. However, BNNs have not been widely used in practice due to the computational cost of predictive inference. This prohibitive computational cost is a hindrance especially when processing stream data with low-latency. To address this problem, we propose a novel model which approximate BNNs for data streams. Instead of generating separate prediction for each data sample independently, this model estimates the increments of prediction for a new data sample from the previous predictions. The computational cost of this model is almost the same as that of non-Bayesian deep NNs. Experiments including semantic segmentation on real-world data show that this model performs significantly faster than BNNs, estimating uncertainty comparable to the results of BNNs. ""","""The main contribution is a Bayesian neural net algorithm which saves computation at test time using a vector quantization approximation. The reviewers are on the fence about the paper. I find the exposition somewhat hard to follow. In terms of evaluation, they demonstrate similar performance to various BNN architectures which require Monte Carlo sampling. But there have been lots of BNN algorithms that don't require sampling (e.g. PBP, Bayesian dark knowledge, MacKay's delta approximation), so it seems important to compare to these. I think there may be promising ideas here, but the paper needs a bit more work before it is to be published at a venue such as ICLR. """
549,"""Contrastive Multiview Coding""","['Representation Learning', 'Unsupervised Learning', 'Self-supervsied Learning', 'Multiview Learning']","""Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibrations channel, heard by the right ear. Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a ""dog"" can be seen, heard, and felt). We hypothesize that a powerful representation is one that models view-invariant factors. Based on this hypothesis, we investigate a contrastive coding scheme, in which a representation is learned that aims to maximize mutual information between different views but is otherwise compact. Our approach scales to any number of views, and is view-agnostic. The resulting learned representations perform above the state of the art for downstream tasks such as object classification, compared to formulations based on predictive learning or single view reconstruction, and improve as more views are added. On the Imagenet linear readoff benchmark, we achieve 68.4% top-1 accuracy. ""","""This paper proposes to use contrastive predictive coding for self-supervised learning. The proposed approach is shown empirically to be more effective than existing self-supervised learning algorithms. While the reviewers found the experimental results encouraging, there were some questions about the contribution as a whole, in particular the lack of theoretical justification."""
550,"""Empowering Graph Representation Learning with Paired Training and Graph Co-Attention""","['graph neural networks', 'graph co-attention', 'paired graphs', 'molecular properties', 'drug-drug interaction']","""Through many recent advances in graph representation learning, performance achieved on tasks involving graph-structured data has substantially increased in recent years---mostly on tasks involving node-level predictions. The setup of prediction tasks over entire graphs (such as property prediction for a molecule, or side-effect prediction for a drug), however, proves to be more challenging, as the algorithm must combine evidence about several structurally relevant patches of the graph into a single prediction. Most prior work attempts to predict these graph-level properties while considering only one graph at a time---not allowing the learner to directly leverage structural similarities and motifs across graphs. Here we propose a setup in which a graph neural network receives pairs of graphs at once, and extend it with a co-attentional layer that allows node representations to easily exchange structural information across them. We first show that such a setup provides natural benefits on a pairwise graph classification task (drug-drug interaction prediction), and then expand to a more generic graph regression setup: enhancing predictions over QM9, a standard molecular prediction benchmark. Our setup is flexible, powerful and makes no assumptions about the underlying dataset properties, beyond anticipating the existence of multiple training graphs.""","""The paper proposes combining paired attention with co-attention. The reviewers have remarked that the paper is will written and that the experiments provide some new insights into this combination. Initially, some additional experiments were proposed, which were addressed by the authors in the rebuttal and the new version of the paper. However, ICLR is becoming a very competitive conference where novelty is an important criteria for acceptance, and unfortunately the paper was considered to lack the novelty to be presented at ICLR."""
551,"""Confidence Scores Make Instance-dependent Label-noise Learning Possible""","['Instance-dependent label noise', 'Deep learning']","""Learning with noisy labels has drawn a lot of attention. In this area, most of recent works only consider class-conditional noise, where the label noise is independent of its input features. This noise model may not be faithful to many real-world applications. Instead, few pioneer works have studied instance-dependent noise, but these methods are limited to strong assumptions on noise models. To alleviate this issue, we introduce confidence-scored instance-dependent noise (CSIDN), where each instance-label pair is associated with a confidence score. The confidence scores are sufficient to estimate the noise functions of each instance with minimal assumptions. Moreover, such scores can be easily and cheaply derived during the construction of the dataset through crowdsourcing or automatic annotation. To handle CSIDN, we design a benchmark algorithm termed instance-level forward correction. Empirical results on synthetic and real-world datasets demonstrate the utility of our proposed method.""","""While two reviewers rated this paper as an accept, reviewer 3 strongly believes there are unresolved issues with the work as summarized in their post-rebuttal review. This work seems very promising and while the AC will recommend rejection at this time, the authors are strongly encouraged to resubmit this work."""
552,"""Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier""","['adversarial attack', 'transformation defenses', 'distribution classifier']","""Adversarial attacks on convolutional neural networks (CNN) have gained significant attention and there have been active research efforts on defense mechanisms. Stochastic input transformation methods have been proposed, where the idea is to recover the image from adversarial attack by random transformation, and to take the majority vote as consensus among the random samples. However, the transformation improves the accuracy on adversarial images at the expense of the accuracy on clean images. While it is intuitive that the accuracy on clean images would deteriorate, the exact mechanism in which how this occurs is unclear. In this paper, we study the distribution of softmax induced by stochastic transformations. We observe that with random transformations on the clean images, although the mass of the softmax distribution could shift to the wrong class, the resulting distribution of softmax could be used to correct the prediction. Furthermore, on the adversarial counterparts, with the image transformation, the resulting shapes of the distribution of softmax are similar to the distributions from the clean images. With these observations, we propose a method to improve existing transformation-based defenses. We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed images. Our empirical studies show that our distribution classifier, by training on distributions obtained from clean images only, outperforms majority voting for both clean and adversarial images. Our method is generic and can be integrated with existing transformation-based defenses.""","""This paper investigates tradeoffs between preserving accuracy on clean samples and increasing robustness on adversarial samples by using transformations and majority votes. Observations on the distribution of the induced softmax show that existing methods could be improved by leveraging information from that distribution to correct predictions, as confirmed by experiments. The problem space is important and reviewers find the approach interesting. Authors have provided some necessary clarifications during rebuttal and additional experiments. While some reservations remain, this paper's premise and its experimental results appear sufficiently interesting to justify an acceptance recommendation."""
553,"""Network Deconvolution""","['convolutional networks', 'network deconvolution', 'whitening']","""Convolution is a central operation in Convolutional Neural Networks (CNNs), which applies a kernel to overlapping regions shifted across the image. However, because of the strong correlations in real-world image data, convolutional kernels are in effect re-learning redundant data. In this work, we show that this redundancy has made neural network training challenging, and propose network deconvolution, a procedure which optimally removes pixel-wise and channel-wise correlations before the data is fed into each layer. Network deconvolution can be efficiently calculated at a fraction of the computational cost of a convolution layer. We also show that the deconvolution filters in the first layer of the network resemble the center-surround structure found in biological neurons in the visual regions of the brain. Filtering with such kernels results in a sparse representation, a desired property that has been missing in the training of neural networks. Learning from the sparse representation promotes faster convergence and superior results without the use of batch normalization. We apply our network deconvolution operation to 10 modern neural network models by replacing batch normalization within each. Extensive experiments show that the network deconvolution operation is able to deliver performance improvement in all cases on the CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, Cityscapes, and ImageNet datasets.""","""This paper presents a feature normalization method for CNNs by decorrelating channel-wise and spatial correlation simultaneously. Overall all reviewers are positive to the acceptance and I support their opinions. The idea and implementation is relatively straightforward but well-motivated and reasonable. Experiments are well-organized and intensive, providing enough evidence to convince its effectiveness in terms of final accuracy and convergence speed. Also, its analogy to biological center-surrounded structure is thought provoking. The novelty of the method seems somewhat incremental considering that there already exists a channel-wise decorrelation method, but I think the findings of the paper are interesting and valuable enough for ICLR community and would like to recommend acceptance. Minor comments: I recommend authors to mention about zero-component analysis (ZCA) normalization, which has been a standard input normalization method for CIFAR datasets. I guess it is quite similar to the proposed method considering 1x1 convolution. Also, comparison with other recent normalization methods (e.g., Group Norm) would be useful. """
554,"""Context-aware Attention Model for Coreference Resolution""","['Coreference resolution', 'Feature Attention']","""Coreference resolution is an important task for gaining more complete understanding about texts by artificial intelligence. The state-of-the-art end-to-end neural coreference model considers all spans in a document as potential mentions and learns to link an antecedent with each possible mention. However, for the verbatim same mentions, the model tends to get similar or even identical representations based on the features, and this leads to wrongful predictions. In this paper, we propose to improve the end-to-end system by building an attention model to reweigh features around different contexts. The proposed model substantially outperforms the state-of-the-art on the English dataset of the CoNLL 2012 Shared Task with 73.45% F1 score on development data and 72.84% F1 score on test data.""","""Main content: Blind review #2 summarizes it well: This paper extends the neural coreference resolution model in Lee et al. (2018) by 1) introducing an additional mention-level feature (grammatical numbers), and 2) letting the mention/pair scoring functions attend over multiple mention-level features. The proposed model achieves marginal improvement (0.2 avg. F1 points) over Lee et al., 2018, on the CoNLL 2012 English test set. -- Discussion: All reviewers rejected. -- Recommendation and justification: The paper must be rejected due to its violation of blind submission (the authors reveal themselves in the Acknowledgments). For information, blind review #2 also summarized well the following justifications for rejection: I recommend rejection for this paper due to the following reasons: - The technical contribution is very incremental (introducing one more features, and adding an attention layer over the feature vectors). - The experiment results aren't strong enough. And the experiments are done on only one dataset. - I am not convinced that adding the grammatical numbers features and the attention mechanism makes the model more context-aware."""
555,"""Rethinking Curriculum Learning With Incremental Labels And Adaptive Compensation""","['Curriculum Learning', 'Incremental Label Learning', 'Label Smoothing', 'Deep Learning']","""Like humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum. While conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, it forces networks to learn from small subsets of data while introducing pre-computation overheads. In this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), which introduces a novel approach to curriculum learning. LILAC emphasizes incrementally learning labels instead of incrementally learning difficult samples. It works in two distinct phases: first, in the incremental label introduction phase, we unmask ground-truth labels in fixed increments during training, to improve the starting point from which networks learn. In the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution. We evaluate LILAC against the closest comparable methods in batch and curriculum learning and label smoothing, across three standard image benchmarks, CIFAR-10, CIFAR-100, and STL-10. We show that our method outperforms batch learning with higher mean recognition accuracy as well as lower standard deviation in performance consistently across all benchmarks. We further extend LILAC to state-of-the-art performance across CIFAR-10 using simple data augmentation while exhibiting label order invariance among other important properties.""","""While the reviewers appreciated the ideas presented in the paper and their novelty, there were major concerns raised about the experimental evaluation. Due to the serious doubts that the reviewers raised about the effectiveness of the proposed approach, I do not think that the paper is quite ready for publication at this time, though I would encourage the authors to revise and resubmit the work at the next opportunity."""
556,"""Deep Orientation Uncertainty Learning based on a Bingham Loss""","['Orientation Estimation', 'Directional Statistics', 'Bingham Distribution']","""Reasoning about uncertain orientations is one of the core problems in many perception tasks such as object pose estimation or motion estimation. In these scenarios, poor illumination conditions, sensor limitations, or appearance invariance may result in highly uncertain estimates. In this work, we propose a novel learning-based representation for orientation uncertainty. By characterizing uncertainty over unit quaternions with the Bingham distribution, we formulate a loss that naturally captures the antipodal symmetry of the representation. We discuss the interpretability of the learned distribution parameters and demonstrate the feasibility of our approach on several challenging real-world pose estimation tasks involving uncertain orientations.""","""This paper considers the problem of reasoning about uncertain poses of objects in images. The reviewers agree that this is an interesting direction, and that the paper has interesting technical merit. """
557,"""Attention Interpretability Across NLP Tasks""","['Attention', 'NLP', 'Interpretability']","""The attention layer in a neural network model provides insights into the models reasoning behind its prediction, which are usually criticized for being opaque. Recently, seemingly contradictory viewpoints have emerged about the interpretability of attention weights (Jain & Wallace, 2019; Vig & Belinkov, 2019). Amid such confusion arises the need to understand attention mechanism more systematically. In this work, we attempt to fill this gap by giving a comprehensive explanation which justifies both kinds of observations (i.e., when is attention interpretable and when it is not). Through a series of experiments on diverse NLP tasks, we validate our observations and reinforce our claim of interpretability of attention through manual evaluation.""","""This paper investigates the degree to which we might view attention weights as explanatory across NLP tasks and architectures. Notably, the authors distinguish between single and ""pair"" sequence tasks, the latter including NLI, and generation tasks (e.g., translation). The argument here is that attention weights do not provide explanatory power for single sequence tasks like classification, but do for NLI and generation. Another notable distinction from most (although not all; see the references below) prior work on the explainability of attention mechanisms in NLP is the inclusion of transformer/self-attentive architectures. Unfortunately, the paper needs work in presentation (in particular, in Section 3) before it is ready to be published."""
558,"""Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions""","['Adversarial Examples', 'Detection of adversarial attacks']","""Adversarial examples raise questions about whether neural network models are sensitive to the same visual features as humans. In this paper, we first detect adversarial examples or otherwise corrupted images based on a class-conditional reconstruction of the input. To specifically attack our detection mechanism, we propose the Reconstructive Attack which seeks both to cause a misclassification and a low reconstruction error. This reconstructive attack produces undetected adversarial examples but with much smaller success rate. Among all these attacks, we find that CapsNets always perform better than convolutional networks. Then, we diagnose the adversarial examples for CapsNets and find that the success of the reconstructive attack is highly related to the visual similarity between the source and target class. Additionally, the resulting perturbations can cause the input image to appear visually more like the target class and hence become non-adversarial. This suggests that CapsNets use features that are more aligned with human perception and have the potential to address the central issue raised by adversarial examples.""","""This paper presents a mechanism for capsule networks to defend against adversarial examples, and a new attack, the reconstruction attack. The differing success of this attacks on capsnets and convnets is used to argue that capsnets find features that are more similar to what humans use. Reviewers generally like the paper, but took instance with the strength of the claim (about the usefulness of the examples) and argued that the paper might not be as novel as it claims. Still, this seems like a valuable contribution that should be published."""
559,"""RGTI:Response generation via templates integration for End to End dialog""","['End-to-end dialogue systems', 'transformer', 'pointer-generate network']","""End-to-end models have achieved considerable success in task-oriented dialogue area, but suffer from the challenges of (a) poor semantic control, and (b) little interaction with auxiliary information. In this paper, we propose a novel yet simple end-to-end model for response generation via mixed templates, which can address above challenges. In our model, we retrieval candidate responses which contain abundant syntactic and sequence information by dialogue semantic information related to dialogue history. Then, we exploit candidate response attention to get templates which should be mentioned in response. Our model can integrate multi template information to guide the decoder module how to generate response better. We show that our proposed model learns useful templates information, which improves the performance of ""how to say"" and ""what to say"" in response generation. Experiments on the large-scale Multiwoz dataset demonstrate the effectiveness of our proposed model, which attain the state-of-the-art performance.""","""This paper describes a method to incorporate multiple candidate templates to aid in response generation for an end-to-end dialog system. Reviewers thought the basic idea is novel and interesting. However, they also agree that the paper is far from complete, results are missing, further experiments are needed as justification, and the presentation of the paper is not very clear. Given the these feedback from the reviews, I suggest rejecting the paper."""
560,"""Lean Images for Geo-Localization""","['Geo Localization', 'Deep Learning', 'Computer Vision', 'Camera Localization']","""Most computer vision tasks use textured images. In this paper we consider the geo-localization task - finding the pose of a camera in a large 3D scene from a single lean image, i.e. an image with no texture. We aim to experimentally explore whether texture and correlation between nearby images are necessary in a CNN-based solution for this task. Our results may give insight to the role of geometry (as opposed to textures) in a CNN-based geo-localization solution. Lean images are projections of a simple 3D model of a city. They contain solely information that relates to the geometry of the scene viewed (edges, faces, or relative depth). We find that the network is capable of estimating the camera pose from lean images for a relatively large number of locations (order of hundreds of thousands of images). The main contributions of this paper are: (i) demonstrating the power of CNNs for recovering camera pose using lean images; and (ii) providing insight into the role of geometry in the CNN learning process;""","""The submission studies the problem of geolocalizing a city based on geometric information encoded in so called ""lean"" images. The reviewers were unanimous in their opinion that the submission does not meet the threshold for publication at ICLR. Concerns included quality of writing, novelty with respect to existing literature (in particular see Review #2), and limited validation on one geographic area. No rebuttal was provided."""
561,"""Sample Efficient Policy Gradient Methods with Recursive Variance Reduction""","['Policy Gradient', 'Reinforcement Learning', 'Sample Efficiency']","""Improving the sample efficiency in reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which only requires pseudo-formula \footnote{ pseudo-formula notation hides constant factors.} episodes to find an pseudo-formula -approximate stationary point of the nonconcave performance function pseudo-formula (i.e., pseudo-formula such that J(\boldsymbol{\theta})\|_2^2\leq\epsilon This sample complexity improves the existing result pseudo-formula for stochastic variance reduced policy gradient algorithms by a factor of pseudo-formula . In addition, we also propose a variant of SRVR-PG with parameter exploration, which explores the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms.""","""The paper introduces a policy gradient estimator that is based on stochastic recursive gradient estimator. It provides a sample complexity result of O(eps^{-3/2}) trajectories for estimating the gradient with the accuracy of eps. This paper generated a lot of discussions among reviewers. The discussions were around the novelty of this work in relation to SARAH (Nguyen et al., ICML2017), SPIDER (Fang et al., NeurIPS2018) and the work of Papini et al. (ICML 2018). SARAH/SPIDER are stochastic variance reduced gradient estimators for convex/non-convex problems and have been studied in the optimization literature. To bring it to the RL literature, some adjustments are needed, for example the use of importance sampling (IS) estimator. The work of Papini et al. uses IS, but does not use SARAH/SPIDEH, and it does not use step-wise IS. Overall, I believe that even though the key algorithmic components of this work have been around, it is still a valuable contribution to the RL literature. """
562,"""Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing""","['Adversarial Robustness', 'Label Flipping Attack', 'Data Poisoning Attack']","""This paper considers label-flipping attacks, a type of data poisoning attack where an adversary relabels a small number of examples in a training set in order to degrade the performance of the resulting classifier. In this work, we propose a strategy to build classifiers that are certifiably robust against a strong variant of label-flipping, where the adversary can target each test example independently. In other words, for each test point, our classifier makes a prediction and includes a certification that its prediction would be the same had some number of training labels been changed adversarially. Our approach leverages randomized smoothing, a technique that has previously been used to guarantee test-time robustness to adversarial manipulation of the input to a classifier. Further, we obtain these certified bounds with no additional runtime cost over standard classification. On the Dogfish binary classification task from ImageNet, in the face of an adversary who is allowed to flip 10 labels to individually target each test point, the baseline undefended classifier achieves no more than 29.3% accuracy; we obtain a classifier that maintains 64.2% certified accuracy against the same adversary.""","""The authors develop a certified defense for label-flipping attacks (where an adversary can flip labels of a small number of training set samples) based on the randomized smoothing technique developed for certified defenses to adversarial perturbations of the input. The framework applies to least-squares classifiers acting on pretrained features learned by a deep network. The authors show that the resulting framework can obtain significant improvements in certified accuracy against targeted label flipping attacks for each test example. While the paper makes some interesting contributions, the reviewers had the following shared concerns regarding the paper: 1) Reality of threat model: The threat model assumes that the adversary has access to the model and all of the training data (so as to choose which labels to flip), which is very unlikely in practice. 2) Limitation to least squares on pre-trained features: The only practical instantiation of the framework presented in the paper is on least squares classifiers acting on pre-trained features learned by a deep network. In the rebuttal phase, the authors clarified some of the more minor concerns raised by the reviewers, but the above concerns remained. Overall, I feel that this paper is borderline - If the authors extend the applicability of the framework (for example relaxing the restriction on pre-training the deep features) and motivating the threat model more strongly, this could be an interesting paper."""
563,"""Group-Transformer: Towards A Lightweight Character-level Language Model""","['Transformer', 'Lightweight model', 'Language Modeling', 'Character-level language modeling']","""Character-level language modeling is an essential but challenging task in Natural Language Processing. Prior works have focused on identifying long-term dependencies between characters and have built deeper and wider networks for better performance. However, their models require substantial computational resources, which hinders the usability of character-level language models in applications with limited resources. In this paper, we propose a lightweight model, called Group-Transformer, that reduces the resource requirements for a Transformer, a promising method for modeling sequence with long-term dependencies. Specifically, the proposed method partitions linear operations to reduce the number of parameters and computational cost. As a result, Group-Transformer only uses 18.2\% of parameters compared to the best performing LSTM-based model, while providing better performance on two benchmark tasks, enwik8 and text8. When compared to Transformers with a comparable number of parameters and time complexity, the proposed model shows better performance. The implementation code will be available.""","""This paper proposes using a lightweight alternative to Transformer self-attention called Group-Transformer. This is proposed in order to overcome difficulties in modelling long-distance dependencies in character level language modelling. They take inspiration from work on group convolutions. They experiment on two large-scale char-level LM datasets which show positive results, but experiments on word level tasks fail to show benefits. I think that this work, though promising, is still somewhat incremental and has not shown to be widely applicable, and therefore I recommend that it is not accepted. """
564,"""Scaling Up Neural Architecture Search with Big Single-Stage Models""",['Single-Stage Neural Architecture Search'],"""Neural architecture search (NAS) methods have shown promising results discovering models that are both accurate and fast. For NAS, training a one-shot model has became a popular strategy to approximate the quality of multiple architectures (child models) using a single set of shared weights. To avoid performance degradation due to parameter sharing, most existing methods have a two-stage workflow where the best child model induced from the one-shot model has to be retrained or finetuned. In this work, we propose BigNAS, an approach that simplifies this workflow and scales up neural architecture search to target a wide range of model sizes simultaneously. We propose several techniques to bridge the gap between the distinct initialization and learning dynamics across small and big models with shared parameters, which enable us to train a single-stage model: a single model from which we can directly slice high-quality child models without retraining or finetuning. With BigNAS we are able to train a single set of shared weights on ImageNet and use these weights to obtain child models whose sizes range from 200 to 1000 MFLOPs. Our discovered model family, BigNASModels, achieve top-1 accuracies ranging from 76.5% to 80.9%, surpassing all state-of-the-art models in this range including EfficientNets.""","""This paper presents a NAS method that avoids having to retrain models from scratch and targets a range of model sizes at once. The work builds on Yu & Huang (2019) and studies a combination of many different techniques. Several baselines use a weaker training method, and no code is made available, raising doubts concerning reproducibility. The reviewers asked various questions, but for several of these questions (e.g., running experiments on MNIST and CIFAR) the authors did not answer satisfactorily. Therefore, the reviewer asking these questions also refuses to change his/her rating. Overall, as AnonReviewer #1 points out, the paper is very empirical. This is not necessarily a bad thing if the experiments yield a lot of insight, but this insight also appears limited. Therefore, I agree with the reviewers and recommend rejection."""
565,"""GRAPHS, ENTITIES, AND STEP MIXTURE""","['Graph Neural Network', 'Random Walk', 'Attention']","""Graph neural networks have shown promising results on representing and analyzing diverse graph-structured data such as social, citation, and protein interaction networks. Existing approaches commonly suffer from the oversmoothing issue, regardless of whether policies are edge-based or node-based for neighborhood aggregation. Most methods also focus on transductive scenarios for fixed graphs, leading to poor generalization performance for unseen graphs. To address these issues, we propose a new graph neural network model that considers both edge-based neighborhood relationships and node-based entity features, i.e. Graph Entities with Step Mixture via random walk (GESM). GESM employs a mixture of various steps through random walk to alleviate the oversmoothing problem and attention to use node information explicitly. These two mechanisms allow for a weighted neighborhood aggregation which considers the properties of entities and relations. With intensive experiments, we show that the proposed GESM achieves state-of-the-art or comparable performances on four benchmark graph datasets comprising transductive and inductive learning tasks. Furthermore, we empirically demonstrate the significance of considering global information. The source code will be publicly available in the near future.""","""Two reviewers are concerned about this paper while the other one is slightly positive. A reject is recommended."""
566,"""Editable Neural Networks""","['editing', 'editable', 'meta-learning', 'maml']","""These days deep neural networks are ubiquitously used in a wide range of tasks, from image classification and machine translation to face identification and self-driving cars. In many applications, a single model error can lead to devastating financial, reputational and even life-threatening consequences. Therefore, it is crucially important to correct model mistakes quickly as they appear. In this work, we investigate the problem of neural network editing - how one can efficiently patch a mistake of the model on a particular sample, without influencing the model behavior on other samples. Namely, we propose Editable Training, a model-agnostic training technique that encourages fast editing of the trained model. We empirically demonstrate the effectiveness of this method on large-scale image classification and machine translation tasks.""","""This paper proposes a method which patches/edits a pre-trained neural network's predictions on problematic data points. They do this without the need for retraining the network on the entire data, by only using a few steps of stochastic gradient descent, and thereby avoiding influencing model behaviour on other samples. The post patching training can encourage reliability, locality and efficiency by using a loss function which incorporates these three criteria weighted by hyperparameters. Experiments are done on CIFAR-10 toy experiments, large-scale image classification with adversarial examples, and machine translation. The reviews are generally positive, with significant author response, a new improved version of the paper, and further discussion. This is a well written paper with convincing results, and it addresses a serious problem for production models, I therefore recommend that it is accepted. """
567,"""Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets""","['Generative Adversarial Nets', 'Adaptive Gradient Algorithms']","""Adaptive gradient algorithms perform gradient-based updates using the history of gradients and are ubiquitous in training deep neural networks. While adaptive gradient methods theory is well understood for minimization problems, the underlying factors driving their empirical success in min-max problems such as GANs remain unclear. In this paper, we aim at bridging this gap from both theoretical and empirical perspectives. First, we analyze a variant of Optimistic Stochastic Gradient (OSG) proposed in~\citep{daskalakis2017training} for solving a class of non-convex non-concave min-max problem and establish pseudo-formula complexity for finding pseudo-formula -first-order stationary point, in which the algorithm only requires invoking one stochastic first-order oracle while enjoying state-of-the-art iteration complexity achieved by stochastic extragradient method by~\citep{iusem2017extragradient}. Then we propose an adaptive variant of OSG named Optimistic Adagrad (OAdagrad) and reveal an \emph{improved} adaptive complexity pseudo-formula ~\footnote{Here pseudo-formula compresses a logarithmic factor of pseudo-formula .}, where pseudo-formula characterizes the growth rate of the cumulative stochastic gradient and \alpha\leq 1/2$. To the best of our knowledge, this is the first work for establishing adaptive complexity in non-convex non-concave min-max optimization. Empirically, our experiments show that indeed adaptive gradient algorithms outperform their non-adaptive counterparts in GAN training. Moreover, this observation can be explained by the slow growth rate of the cumulative stochastic gradient, as observed empirically.""","""This work proposes a new adaptive method for solving certain min-max problems. The reviewers all appreciated the work and most of their concerns were addressed in the rebuttal. Given the current interest in both adaptive methods and min-max problems, this work is suited for publication at ICLR."""
568,"""DeepSphere: a graph-based spherical CNN""","['spherical cnns', 'graph neural networks', 'geometric deep learning']","""Designing a convolution for a spherical neural network requires a delicate tradeoff between efficiency and rotation equivariance. DeepSphere, a method based on a graph representation of the discretized sphere, strikes a controllable balance between these two desiderata. This contribution is twofold. First, we study both theoretically and empirically how equivariance is affected by the underlying graph with respect to the number of pixels and neighbors. Second, we evaluate DeepSphere on relevant problems. Experiments show state-of-the-art performance and demonstrates the efficiency and flexibility of this formulation. Perhaps surprisingly, comparison with previous work suggests that anisotropic filters might be an unnecessary price to pay. Our code is available at pseudo-url.""","""This paper proposes a novel methodology for applying convolutional networks to spherical data through a graph-based discretization. The reviewers all found the methodology sensible and the experiments convincing. A common concern of the reviewers was the amount of novelty in the approach, as in it involves the combination of established methods, but ultimately they found that the empirical performance compared to baselines outweighed this."""
569,"""Sub-policy Adaptation for Hierarchical Reinforcement Learning""","['Hierarchical Reinforcement Learning', 'Transfer', 'Skill Discovery']","""Hierarchical reinforcement learning is a promising approach to tackle long-horizon decision-making problems with sparse rewards. Unfortunately, most methods still decouple the lower-level skill acquisition process and the training of a higher level that controls the skills in a new task. Leaving the skills fixed can lead to significant sub-optimality in the transfer setting. In this work, we propose a novel algorithm to discover a set of skills, and continuously adapt them along with the higher level even when training on a new task. Our main contributions are two-fold. First, we derive a new hierarchical policy gradient with an unbiased latent-dependent baseline, and we introduce Hierarchical Proximal Policy Optimization (HiPPO), an on-policy method to efficiently train all levels of the hierarchy jointly. Second, we propose a method of training time-abstractions that improves the robustness of the obtained skills to environment changes. Code and videos are available at pseudo-url.""","""This paper considers hierarchical reinforcement learning, and specifically the case where the learning and use of lower-level skills should not be decoupled. To this end the paper proposes Hierarchical Proximal Policy Optimization (HiPPO) to jointly learn the different layers of the hierarchy. This is compared against other hierarchical RL schemes on several Mujoco domains. The reviewers raised three main issues with this paper. The first concerns an excluded baseline, which was included in the rebuttal. The other issues involve the motivation for the paper (in that there exist other methods that try and learn different levels of hierarchy together) and justification for some design choices. These were addressed to some extent in the rebuttal, but I believe this to still be an interesting contribution to the literature, and should be accepted. """
570,"""Curriculum Loss: Robust Learning and Generalization against Label Corruption""","['Curriculum Learning', 'deep learning']","""Deep neural networks (DNNs) have great expressive power, which can even memorize samples with wrong labels. It is vitally important to reiterate robustness and generalization in DNNs against label corruption. To this end, this paper studies the 0-1 loss, which has a monotonic relationship between empirical adversary (reweighted) risk (Hu et al. 2018). Although the 0-1 loss is robust to outliers, it is also difficult to optimize. To efficiently optimize the 0-1 loss while keeping its robust properties, we propose a very simple and efficient loss, i.e. curriculum loss (CL). Our CL is a tighter upper bound of the 0-1 loss compared with conventional summation based surrogate losses. Moreover, CL can adaptively select samples for stagewise training. As a result, our loss can be deemed as a novel perspective of curriculum sample selection strategy, which bridges a connection between curriculum learning and robust learning. Experimental results on noisy MNIST, CIFAR10 and CIFAR100 dataset validate the robustness of the proposed loss.""","""This paper studies learning with noisy labels by integrating the idea of curriculum learning. All reviewers and AC are happy with novelty, clear write-up and experimental results. I recommend acceptance. """
571,"""Multi-Step Decentralized Domain Adaptation""","['domain adaptation', 'decentralization']","""Despite the recent breakthroughs in unsupervised domain adaptation (uDA), no prior work has studied the challenges of applying these methods in practical machine learning scenarios. In this paper, we highlight two significant bottlenecks for uDA, namely excessive centralization and poor support for distributed domain datasets. Our proposed framework, MDDA, is powered by a novel collaborator selection algorithm and an effective distributed adversarial training method, and allows for uDA methods to work in a decentralized and privacy-preserving way. ""","""This paper proposes a solution to the decentralized privacy preserving domain adaptation problem. In other words, how to adapt to a target domain without explicit data access to other existing domains. In this scenario the authors propose MDDA which consists of both a collaborator selection algorithm based on minimal Wasserstein distance as well as a technique for adapting through sharing discriminator gradients across domains. The reviewers has split scores for this work with two recommending weak accept and two recommending weak reject. However, both reviewers who recommended weak accept explicitly mentioned that their recommendation was borderline (an option not available for ICLR 2020). The main issues raised by the reviewers was lack of algorithmic novelty and lack of comparison to prior privacy preserving work. The authors agreed that their goal was not to introduce a new domain adaptation algorithm, but rather to propose a generic solution to extend existing algorithms to the case of privacy preserving and decentralized DA. The authors also provided extensive revisions in response to the reviewers comments. Though the reviewers were convinced on some points (like privacy preserving arguments), there still remained key outstanding issues that were significant enough to cause the reviewers not to update their recommendations. Therefore, this paper is not recommended for acceptance in its current form. We encourage the authors to build off the revisions completed during the rebuttal phase and any outstanding comments from the reviewers. """
572,"""Dimensional Reweighting Graph Convolution Networks""","['graph convolutional networks', 'representation learning', 'mean field theory', 'variance reduction', 'node classification']","""In this paper, we propose a method named Dimensional reweighting Graph Convolutional Networks (DrGCNs), to tackle the problem of variance between dimensional information in the node representations of GCNs. We prove that DrGCNs can reduce the variance of the node representations by connecting our problem to the theory of the mean field. However, practically, we find that the degrees DrGCNs help vary severely on different datasets. We revisit the problem and develop a new measure K to quantify the effect. This measure guides when we should use dimensional reweighting in GCNs and how much it can help. Moreover, it offers insights to explain the improvement obtained by the proposed DrGCNs. The dimensional reweighting block is light-weighted and highly flexible to be built on most of the GCN variants. Carefully designed experiments, including several fixes on duplicates, information leaks, and wrong labels of the well-known node classification benchmark datasets, demonstrate the superior performances of DrGCNs over the existing state-of-the-art approaches. Significant improvements can also be observed on a large scale industrial dataset.""","""As Reviewer 2 pointed out in his/her response to the authors' rebuttal, this paper (at least in current state) has significant shortcomings that need to be addressed before this paper merits acceptance."""
573,"""Learning Compact Embedding Layers via Differentiable Product Quantization""","['efficient modeling', 'compact embedding', 'embedding table compression', 'differentiable product quantization']","""Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings. Despite their effectiveness, the number of parameters in an embedding layer increases linearly with the number of symbols and poses a critical challenge on memory and storage constraints. In this work, we propose a generic and end-to-end learnable compression framework termed differentiable product quantization (DPQ). We present two instantiations of DPQ that leverage different approximation techniques to enable differentiability in end-to-end learning. Our method can readily serve as a drop-in alternative for any existing embedding layer. Empirically, DPQ offers significant compression ratios (14-238x) at negligible or no performance cost on 10 datasets across three different language tasks.""","""The presented paper gives a differentiable product quantization framework to compress embedding and support the claim by experiments (the supporting materials are as large as the paper itself). Reviewers agreed that the idea is simple is interesting, and also nice and positive discussion appeared. However, the main limiting factor is the small novelty over Chen 2018b, and I agree with that. Also, the comparison with low rank is rather formal: of course it would be of full rank , as the authors claim in the answer, but looking at singular values is needed to make this claim. Also, one can use low-rank tensor factorization to compress embeddings, and this can be compared. To summarize, I think the contribution is not enough to be accepted."""
574,"""Simple is Better: Training an End-to-end Contract Bridge Bidding Agent without Human Knowledge""","['Contract Bridge', 'Bidding', 'Selfplay', 'AlphaZero']","""Contract bridge is a multi-player imperfect-information game where one partnership collaborate with each other to compete against the other partnership. The game consists of two phases: bidding and playing. While playing is relatively easy for modern software, bidding is challenging and requires agents to learn a communication protocol to reach the optimal contract jointly, with their own private information. The agents need to exchange information to their partners, and interfere opponents, through a sequence of actions. In this work, we train a strong agent to bid competitive bridge purely through selfplay, outperforming WBridge5, a championship-winning software. Furthermore, we show that explicitly modeling belief is not necessary in boosting the performance. To our knowledge, this is the first competitive bridge agent that is trained with no domain knowledge. It outperforms previous state-of-the-art that use human replays with 70x fewer number of parameters.""","""This paper proposes a new training method for an end-to-end contract bridge bidding agent. Reviewers R2 and R3 raised concerns regarding limited novelty and also experimental results not being convincing. R2's main objection is that the paper has ""strong SOTA performance with a simple model, but empirical study are rather shallow."" Based on their recommendations, I recommend to reject this paper. """
575,"""Exploration in Reinforcement Learning with Deep Covering Options""","['Reinforcement learning', 'temporal abstraction', 'exploration']","""While many option discovery methods have been proposed to accelerate exploration in reinforcement learning, they are often heuristic. Recently, covering options was proposed to discover a set of options that provably reduce the upper bound of the environment's cover time, a measure of the difficulty of exploration. Covering options are computed using the eigenvectors of the graph Laplacian, but they are constrained to tabular tasks and are not applicable to tasks with large or continuous state-spaces. We introduce deep covering options, an online method that extends covering options to large state spaces, automatically discovering task-agnostic options that encourage exploration. We evaluate our method in several challenging sparse-reward domains and we show that our approach identifies less explored regions of the state-space and successfully generates options to visit these regions, substantially improving both the exploration and the total accumulated reward.""","""This paper considers options discovery in hierarchical reinforcement learning. It extends the idea of covering options, using the Laplacian of the state space discover a set of options that reduce the upper bound of the environment's cover time, to continuous and large state spaces. An online method is also included, and evaluated on several domains. The reviewers had major questions on a number of aspects of the paper, including around the novelty of the work which seemed limited, the quantitative results in the ATARI environments, and problems with comparisons to other exploration methods. These were all appropriately dealt with in the rebuttals, leaving this paper worthy of acceptance."""
576,"""Concise Multi-head Attention Models""","['Transformers', 'Attention', 'Multihead', 'expressive power', 'embedding size']","""Attention based Transformer architecture has enabled significant advances in the field of natural language processing. In addition to new pre-training techniques, recent improvements crucially rely on working with a relatively larger embedding dimension for tokens. This leads to models that are prohibitively large to be employed in the downstream tasks. In this paper we identify one of the important factors contributing to the large embedding size requirement. In particular, our analysis highlights that the scaling between the number of heads and the size of each head in the existing architectures gives rise to this limitation, which we further validate with our experiments. As a solution, we propose a new way to set the projection size in attention heads that allows us to train models with a relatively smaller embedding dimension, without sacrificing the performance.""","""This paper studies tradeoffs in the design of attention-based architectures. It argues and formally establishes that the expressivity of an attention head is determined by its dimension and that fixing the head dimension, one gains additional expressive power by using more heads. Reviewers were generally positive about the question under study here, but raised important concerns about the significance of the results and the take-home message in the current manuscript. The AC shares these concerns, and recommends rejection, while encouraging the authors to address the concerns raised during this discussion. """
577,"""A Hierarchy of Graph Neural Networks Based on Learnable Local Features""","['Graph Neural Networks', 'Hierarchy', 'Weisfeiler-Lehman', 'Discriminative Power']","""Graph neural networks (GNNs) are a powerful tool to learn representations on graphs by iteratively aggregating features from node neighbourhoods. Many variant models have been proposed, but there is limited understanding on both how to compare different architectures and how to construct GNNs systematically. Here, we propose a hierarchy of GNNs based on their aggregation regions. We derive theoretical results about the discriminative power and feature representation capabilities of each class. Then, we show how this framework can be utilized to systematically construct arbitrarily powerful GNNs. As an example, we construct a simple architecture that exceeds the expressiveness of the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theory on both synthetic and real-world benchmarks, and demonstrate our example's theoretical power translates to state-of-the-art results on node classification, graph classification, and graph regression tasks. ""","""This paper proposes a modification to GCNs that generalizes the aggregation step to multiple levels of neighbors, that in theory, the new class of models have better discriminative power. The main criticism raised is that there is lack of sufficient evidence to distinguish this works theoretical contribution from that of Xu et al. Two reviewers also pointed out the concerns around experiment results and suggested to includes more recent state of the art SOTA results. While authors disagree that the contributions of their work is incremental, reviewers concerns are good samples of the general readers of this paper general readers may also read this paper as incremental. We highly encourage authors to take another cycle of edits to better distinguish their work from others before future submissions. """
578,"""DYNAMIC SELF-TRAINING FRAMEWORK FOR GRAPH CONVOLUTIONAL NETWORKS""","['self-training', 'semi-supervised learning', 'graph convolutional networks']","""Graph neural networks (GNN) such as GCN, GAT, MoNet have achieved state-of-the-art results on semi-supervised learning on graphs. However, when the number of labeled nodes is very small, the performances of GNNs downgrade dramatically. Self-training has proved to be effective for resolving this issue, however, the performance of self-trained GCN is still inferior to that of G2G and DGI for many settings. Moreover, additional model complexity make it more difficult to tune the hyper-parameters and do model selection. We argue that the power of self-training is still not fully explored for the node classification task. In this paper, we propose a unified end-to-end self-training framework called \emph{Dynamic Self-traning}, which generalizes and simplifies prior work. A simple instantiation of the framework based on GCN is provided and empirical results show that our framework outperforms all previous methods including GNNs, embedding based method and self-trained GCNs by a noticeable margin. Moreover, compared with standard self-training, hyper-parameter tuning for our framework is easier.""","""The paper is develops a self-training framework for graph convolutional networks where we have partially labeled graphs with a limited amount of labeled nodes. The reviewers found the paper interesting. One reviewer notes the ability to better exploit available information and raised questions of computational costs. Another reviewer felt the difference from previous work was limited, but that the good results speak for themselves. The final reviewer raised concerns on novelty and limited improvement in results. The authors provided detailed responses to these queries, providing additional results. The paper has improved over the course of the review, but due to a large number of stronger papers, was not accepted at this time."""
579,"""NPTC-net: Narrow-Band Parallel Transport Convolutional Neural Network on Point Clouds""","['geometric convolution', 'point cloud', 'parallel transport']","""Convolution plays a crucial role in various applications in signal and image processing, analysis and recognition. It is also the main building block of convolution neural networks (CNNs). Designing appropriate convolution neural networks on manifold-structured point clouds can inherit and empower recent advances of CNNs to analyzing and processing point cloud data. However, one of the major challenges is to define a proper way to ""sweep"" filters through the point cloud as a natural generalization of the planar convolution and to reflect the point cloud's geometry at the same time. In this paper, we consider generalizing convolution by adapting parallel transport on the point cloud. Inspired by a triangulated surface based method \cite{DBLP:journals/corr/abs-1805-07857}, we propose the Narrow-Band Parallel Transport Convolution (NPTC) using a specifically defined connection on a voxelized narrow-band approximation of point cloud data. With that, we further propose a deep convolutional neural network based on NPTC (called NPTC-net) for point cloud classification and segmentation. Comprehensive experiments show that the proposed NPTC-net achieves similar or better results than current state-of-the-art methods on point clouds classification and segmentation.""","""All the reviewers recommend rejecting the paper. There is no basis for acceptance."""
580,"""A Causal View on Robustness of Neural Networks""","['Neural Network Robustness', 'Variational autoencoder (VAE)', 'Causality', 'Deep generative model']","""We present a causal view on the robustness of neural networks against input manipulations, which applies not only to traditional classification tasks but also to general measurement data. Based on this view, we design a deep causal manipulation augmented model (deep CAMA) which explicitly models the manipulations of data as a cause to the observed effect variables. We further develop data augmentation and test-time fine-tuning methods to improve deep CAMA's robustness. When compared with discriminative deep neural networks, our proposed model shows superior robustness against unseen manipulations. As a by-product, our model achieves disentangled representation which separates the representation of manipulations from those of other latent causes.""","""This paper attempts to present a causal view of robustness in classifiers, which is a very important area of research. However, the connection to causality with the presented model is very thin and, in fact, mathematically unnecessary. Interventions are only applied to root nodes (as pointed out by R4) so they just amount to standard conditioning on the variable ""M"". The experimental results could be obtained without any mention to causal interventions."""
581,"""A Probabilistic Formulation of Unsupervised Text Style Transfer""","['unsupervised text style transfer', 'deep latent sequence model']","""We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.""","""This paper proposes an unsupervised text style transfer model which combines a language model prior with an encoder-decoder transducer. They use a deep generative model which hypothesises a latent sequence which generates the observed sequences. It is trained on non-parallel data and they report good results on unsupervised sentiment transfer, formality transfer, word decipherment, author imitation, and machine translation. The authors responded in depth to reviewer comments, and the reviewers took this into consideration. This is a well written paper, with an elegant model and I would like to see it accepted at ICLR. """
582,"""Novelty Search in representational space for sample efficient exploration""","['Reinforcement Learning', 'Exploration']","""We present a new approach for efficient exploration which leverages a low-dimensional encoding of the environment learned with a combination of model-based and model-free objectives. Our approach uses intrinsic rewards that are based on a weighted distance of nearest neighbors in the low dimensional representational space to gauge novelty. We then leverage these intrinsic rewards for sample-efficient exploration with planning routines in representational space. One key element of our approach is that we perform more gradient steps in-between every environment step in order to ensure the model accuracy. We test our approach on a number of maze tasks, as well as a control problem and show that our exploration approach is more sample-efficient compared to strong baselines. ""","""The two most experienced reviewers recommended the paper be rejected. The submission lacks technical depth, which calls the significance of the contribution into question. This work would be greatly strengthened by a theoretical justification of the proposed approach. The reviewers also criticized the quality of the exposition, noting that key parts of the presentation was unclear. The experimental evaluation was not considered to be sufficiently convincing. The review comments should be able to help the authors strengthen this work."""
583,"""Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents""","['Visualization', 'Reinforcement Learning', 'Safety']","""As deep reinforcement learning driven by visual perception becomes more widely used there is a growing need to better understand and probe the learned agents. Understanding the decision making process and its relationship to visual inputs can be very valuable to identify problems in learned behavior. However, this topic has been relatively under-explored in the research community. In this work we present a method for synthesizing visual inputs of interest for a trained agent. Such inputs or states could be situations in which specific actions are necessary. Further, critical states in which a very high or a very low reward can be achieved are often interesting to understand the situational awareness of the system as they can correspond to risky states. To this end, we learn a generative model over the state space of the environment and use its latent space to optimize a target function for the state of interest. In our experiments we show that this method can generate insights for a variety of environments and reinforcement learning methods. We explore results in the standard Atari benchmark games as well as in an autonomous driving simulator. Based on the efficiency with which we have been able to identify behavioural weaknesses with this technique, we believe this general approach could serve as an important tool for AI safety applications.""","""This paper proposes a tool to visualizing the behaviour of deep RL agents, for example to observe the behaviour of an agent in critical scenarios. The idea is to learn a generative model of the environment and use it to artificially generate novel states in order to induce specific agent actions. States can then be generated such as to optimize a given target function, for example states where the agent takes a specific actions or states which are high/low reward. They evaluate the proposed visualization on Atari games and on a driving simulation environment, where the authors use their approach, to investigate the behaviour of different deep RL agents such as DQN. The paper is very controversial. On the one hand, as far as we know, this is the first approach that explicitly generates states that are meant to induce specific agent behaviour, although one could relate this to adversarial samples generation. Interpretability in deep RL is a known problem and this work could bring an interesting tool to the community. However, the proposed approach lacks theoretical foundations, thus feels quite ad-hoc, and results are limited to a qualitative, visual, evaluation. At the same time, one could say that the approach is not more ad hoc than other gradient saliency visualization approaches, and one could argue that the lack of theoretical soundness is due to the difficulty of defining good measures of interpretability and that apply well to image-based environments. Nonetheless, this paper is a step in the good direction in a field that could really benefit from it. """
584,"""Prestopping: How Does Early Stopping Help Generalization Against Label Noise?""","['noisy label', 'label noise', 'robustness', 'deep learning', 'early stopping']","""Noisy labels are very common in real-world training data, which lead to poor generalization on test data because of overfitting to the noisy labels. In this paper, we claim that such overfitting can be avoided by ""early stopping"" training a deep neural network before the noisy labels are severely memorized. Then, we resume training the early stopped network using a ""maximal safe set,"" which maintains a collection of almost certainly true-labeled samples at each epoch since the early stop point. Putting them all together, our novel two-phase training method, called Prestopping, realizes noise-free training under any type of label noise for practical use. Extensive experiments using four image benchmark data sets verify that our method significantly outperforms four state-of-the-art methods in test error by 0.48.2 percent points under existence of real-world noise.""","""This paper focuses on avoiding overfitting in the presence of noisy labels. The authors develop a two phase method called pre-stopping based on a combination of early stopping and a maximal safe set. The reviewers raised some concern about illustrating maximal safe set for all data sets and suggest comparisons with more baselines. The reviewers also indicated that the paper is missing key relevant publications. In the response the authors have done a rather through job of addressing the reviewers comments. I thank them for this. However, given the limited time some of the reviewers comments regarding adding new baselines could not be addressed. As a result I can not recommend acceptance because I think this is key to making a proper assessment. That said, I think this is an interesting with good potential if it can outperform other baselines and would recommend that the authors revise and resubmit in a future venue."""
585,"""Meta Learning via Learned Loss""","['Meta Learning', 'Reinforcement Learning', 'Loss Learning']","""We present a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures. We develop a pipeline for training such loss functions, targeted at maximizing the performance of model learn- ing with them. We observe that the loss landscape produced by our learned losses significantly improves upon the original task-specific losses in both supervised and reinforcement learning tasks. Furthermore, we show that our meta-learning framework is flexible enough to incorporate additional information at meta-train time. This information shapes the learned loss function such that the environment does not need to provide this information during meta-test time.""","""Despite the new ideas in this paper, reviewers feel that it needs to be revised for clarification, and that experimental results are not convincing. I have down-weighted the criticisms of Reviewer 2 because I agree with the authors' rebuttal. However, there is still not enough support among the remaining reviews to justify acceptance. """
586,"""On Concept-Based Explanations in Deep Neural Networks""","['concept-based explanations', 'interpretability']","""Deep neural networks (DNNs) build high-level intelligence on low-level raw features. Understanding of this high-level intelligence can be enabled by deciphering the concepts they base their decisions on, as human-level thinking. In this paper, we study concept-based explainability for DNNs in a systematic framework. First, we define the notion of completeness, which quantifies how sufficient a particular set of concepts is in explaining a model's prediction behavior. Based on performance and variability motivations, we propose two definitions to quantify completeness. We show that under degenerate conditions, our method is equivalent to Principal Component Analysis. Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts. We use game-theoretic notions to aggregate over sets to define an importance score for each discovered concept, which we call \emph{ConceptSHAP}. On specifically-designed synthetic datasets and real-world text and image datasets, we validate the effectiveness of our framework in finding concepts that are complete in explaining the decision, and interpretable.""","""This paper introduces an unsupervised concept learning and explanation algorithm, as well as a concept of ""completeness"" for evaluating representations in an unsupervised way. There are several valuable contributions here, and the paper improved substantially after the rebuttal. It would not be unreasonable to accept this paper. But after extensive post-review discussion, we decided that the completeness idea was the most valuable contribution, but that it was insufficiently investigated. To quote R3, who I agree with: "" I think the paper could be strengthened considerably with a rewrite that focuses first on a shortcoming of existing methods in finding complete solutions. I also think their explanations for why PCA is not complete are somewhat speculative and I expect that studying the completeness of activation spaces in invertible networks would lead to some relevant insights"" """
587,"""Four Things Everyone Should Know to Improve Batch Normalization""",['batch normalization'],"""A key component of most neural network architectures is the use of normalization layers, such as Batch Normalization. Despite its common use and large utility in optimizing deep architectures, it has been challenging both to generically improve upon Batch Normalization and to understand the circumstances that lend themselves to other enhancements. In this paper, we identify four improvements to the generic form of Batch Normalization and the circumstances under which they work, yielding performance gains across all batch sizes while requiring no additional computation during training. These contributions include proposing a method for reasoning about the current example in inference normalization statistics, fixing a training vs. inference discrepancy; recognizing and validating the powerful regularization effect of Ghost Batch Normalization for small and medium batch sizes; examining the effect of weight decay regularization on the scaling and shifting parameters and ; and identifying a new normalization algorithm for very small batch sizes by combining the strengths of Batch and Group Normalization. We validate our results empirically on six datasets: CIFAR-100, SVHN, Caltech-256, Oxford Flowers-102, CUB-2011, and ImageNet.""","""This paper proposes techniques to improve training with batch normalization. The paper establishes the benefits of these techniques experimentally using ablation studies. The reviewers found the results to be promising and of interest to the community. However, this paper is borderline due in part due to the writing (notation issues) and because it does not discuss related work enough. We encourage the authors to properly address these issues before the camera ready."""
588,"""Weakly Supervised Clustering by Exploiting Unique Class Count""","['weakly supervised clustering', 'weakly supervised learning', 'multiple instance learning']","""A weakly supervised learning based clustering framework is proposed in this paper. As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag. In this task, no annotations on individual instances inside the bag are needed during training of the models. We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training. We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known. Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model.""","""The paper proposes a weakly supervised learning algorithm, motivated by its application to histopathology. Similar to the multiple instance learning scenario, labels are provided for bags of instances. However instead of a single (binary) label per bag, the paper introduces the setting where the training algorithm is provided with the number of classes in the bag (but not which ones). Careful empirical experiments on semantic segmentation of histopathology data, as well as simulated labelling from MNIST and CIFAR demonstrate the usefulness of the method. The proposed approach is similar in spirit to works such as learning from label proportions and UU learning (both which solve classification tasks). pseudo-url pseudo-url The reviews are widely spread, with a low confidence reviewer rating (1). However it seems that the high confidence reviewers are also providing higher scores and better comments. The authors addressed many of the reviewer comments, and seeked clarification for certain points, but the reviewers did not engage further during the discussion period. This paper provides a novel weakly supervised learning setting, motivated by a real world semantic segmentation task, and provides an algorithm to learn from only the number of classes per bag, which is demonstrated to work on empirical experiments. It is a good addition to the ICLR program."""
589,"""SCALOR: Generative World Models with Scalable Object Representations""",[],"""Scalability in terms of object density in a scene is a primary challenge in unsupervised sequential object-oriented representation learning. Most of the previous models have been shown to work only on scenes with a few objects. In this paper, we propose SCALOR, a probabilistic generative world model for learning SCALable Object-oriented Representation of a video. With the proposed spatially parallel attention and proposal-rejection mechanisms, SCALOR can deal with orders of magnitude larger numbers of objects compared to the previous state-of-the-art models. Additionally, we introduce a background module that allows SCALOR to model complex dynamic backgrounds as well as many foreground objects in the scene. We demonstrate that SCALOR can deal with crowded scenes containing up to a hundred objects while jointly modeling complex dynamic backgrounds. Importantly, SCALOR is the rst unsupervised object representation model shown to work for natural scenes containing several tens of moving objects.""","""After the author response and paper revision, the reviewers all came to appreciate this paper and unanimously recommended it be accepted. The paper makes a nice contribution to generative modelling of object-oriented representations with large numbers of objects. The authors adequately addressed the main reviewer concerns with their detailed rebuttal and revision."""
590,"""Deep Symbolic Superoptimization Without Human Knowledge""",[],"""Deep symbolic superoptimization refers to the task of applying deep learning methods to simplify symbolic expressions. Existing approaches either perform supervised training on human-constructed datasets that defines equivalent expression pairs, or apply reinforcement learning with human-defined equivalent trans-formation actions. In short, almost all existing methods rely on human knowledge to define equivalence, which suffers from large labeling cost and learning bias, because it is almost impossible to define and comprehensive equivalent set. We thus propose HISS, a reinforcement learning framework for symbolic super-optimization that keeps human outside the loop. HISS introduces a tree-LSTM encoder-decoder network with attention to ensure tractable learning. Our experiments show that HISS can discover more simplification rules than existing human-dependent methods, and can learn meaningful embeddings for symbolic expressions, which are indicative of equivalence.""","""This work introduces a neural architecture and corresponding method for simplifying symbolic equations, which can be trained without requiring human input. This is an area somewhat outside most of our expertise, but the general consensus is that the paper is interesting and is an advance. The reviewer's concerns have been mostly resolved by the rebuttal, so I am recommending an accept. """
591,"""Are there any 'object detectors' in the hidden layers of CNNs trained to identify objects or scenes?""","['neural networks', 'localist coding', 'selectivity', 'object detectors', 'CCMAS', 'CNNs', 'activation maximisation', 'information representation', 'network dissection', 'interpretabillity', 'signal detection']","""Various methods of measuring unit selectivity have been developed with the aim of better understanding how neural networks work. But the different measures provide divergent estimates of selectivity, and this has led to different conclusions regarding the conditions in which selective object representations are learned and the functional relevance of these representations. In an attempt to better characterize object selectivity, we undertake a comparison of various selectivity measures on a large set of units in AlexNet, including localist selectivity, precision, class-conditional mean activity selectivity (CCMAS), network dissection, the human interpretation of activation maximization (AM) images, and standard signal-detection measures. We find that the different measures provide different estimates of object selectivity, with precision and CCMAS measures providing misleadingly high estimates. Indeed, the most selective units had a poor hit-rate or a high false-alarm rate (or both) in object classification, making them poor object detectors. We fail to find any units that are even remotely as selective as the 'grandmother cell' units reported in recurrent neural networks. In order to generalize these results, we compared selectivity measures on a few units in VGG-16 and GoogLeNet trained on the ImageNet or Places-365 datasets that have been described as 'object detectors'. Again, we find poor hit-rates and high false-alarm rates for object classification. ""","""This paper conducted a number of empirical studies to find whether units in object-classification CNN can be used as object detectors. The claimed conclusion is that there are no units that are sufficient powerful to be considered as object detectors. Three reviewers have split reviews. While reviewer #1 is positive about this work, the review is quite brief. In contrast, Reviewer #2 and #3 both rate weak reject, with similar major concerns. That is, the conclusion seems non-conclusive and not surprising as well. What would be the contribution of this type of conclusion to the ICLR community? In particular, Reviewer #2 provided detailed and well elaborated comments. The authors made efforts to response to all reviewers comments. However, the major concerns remain, and the rating were not changed. The ACs concur the major concerns and agree that the paper can not be accepted at its current state."""
592,"""Reanalysis of Variance Reduced Temporal Difference Learning""","['Reinforcement Learning', 'TD learning', 'Markovian sample', 'Variance Reduction']","""Temporal difference (TD) learning is a popular algorithm for policy evaluation in reinforcement learning, but the vanilla TD can substantially suffer from the inherent optimization variance. A variance reduced TD (VRTD) algorithm was proposed by \cite{korda2015td}, which applies the variance reduction technique directly to the online TD learning with Markovian samples. In this work, we first point out the technical errors in the analysis of VRTD in \cite{korda2015td}, and then provide a mathematically solid analysis of the non-asymptotic convergence of VRTD and its variance reduction performance. We show that VRTD is guaranteed to converge to a neighborhood of the fixed-point solution of TD at a linear convergence rate. Furthermore, the variance error (for both i.i.d.\ and Markovian sampling) and the bias error (for Markovian sampling) of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD. As a result, the overall computational complexity of VRTD to attain a given accurate solution outperforms that of TD under Markov sampling and outperforms that of TD under i.i.d.\ sampling for a sufficiently small conditional number.""","""The paper studies the variance reduced TD algorithm by Konda and Prashanth (2015). The original paper provided a convergence analysis that had some technical issues. This paper provides a new convergence analysis, and shows the advantage of VRTD to vanilla TD in terms of reducing the bias and variance. Several of the five reviewers are expert in this area and all of them are positive about it. Therefore, I recommend acceptance of this work."""
593,"""Selfish Emergent Communication""","['multi agent reinforcement learning', 'emergent communication', 'game theory']","""Current literature in machine learning holds that unaligned, self-interested agents do not learn to use an emergent communication channel. We introduce a new sender-receiver game to study emergent communication for this spectrum of partially-competitive scenarios and put special care into evaluation. We find that communication can indeed emerge in partially-competitive scenarios, and we discover three things that are tied to improving it. First, that selfish communication is proportional to cooperation, and it naturally occurs for situations that are more cooperative than competitive. Second, that stability and performance are improved by using LOLA (Foerster et al, 2018), especially in more competitive scenarios. And third, that discrete protocols lend themselves better to learning cooperative communication than continuous ones. ""","""There has been a long discussion on the paper, especially between the authors and the 2nd reviewer. While the authors' comments and paper modifications have improved the paper, the overall opinion on this paper is that it is below par in its current form. The main issue is that the significance of the results is insufficiently clear. While the sender-receiver game introduced is interesting, a more thorough investigation would improve the paper a lot (for example, by looking if theoretical statements can be made)."""
594,"""An Empirical and Comparative Analysis of Data Valuation with Scalable Algorithms""","['Data valuation', 'machine learning']","""This paper focuses on valuating training data for supervised learning tasks and studies the Shapley value, a data value notion originated in cooperative game theory. The Shapley value defines a unique value distribution scheme that satisfies a set of appealing properties desired by a data value notion. However, the Shapley value requires exponential complexity to calculate exactly. Existing approximation algorithms, although achieving great improvement over the exact algorithm, relies on retraining models for multiple times, thus remaining limited when applied to larger-scale learning tasks and real-world datasets. In this work, we develop a simple and efficient algorithm to estimate the Shapley value with complexity independent with the model size. The key idea is to approximate the model via a pseudo-formula -nearest neighbor ( pseudo-formula NN) classifier, which has a locality structure that can lead to efficient Shapley value calculation. We evaluate the utility of the values produced by the pseudo-formula NN proxies in various settings, including label noise correction, watermark detection, data summarization, active data acquisition, and domain adaption. Extensive experiments demonstrate that our algorithm achieves at least comparable utility to the values produced by existing algorithms while significant efficiency improvement. Moreover, we theoretically analyze the Shapley value and justify its advantage over the leave-one-out error as a data value measure.""","""There is insufficient support to recommend accepting this paper. The authors provided detailed responses to the reviewer comments, but the reviewers did not raise their evaluation of the significance and novelty of the contributions as a result. The feedback provided should help the authors improve their paper."""
595,"""One-way prototypical networks""","['few-shot learning', 'one-shot learning', 'prototypical networks', 'one-class classification', 'anomaly detection', 'outlier detection', 'matching networks']","""Few-shot models have become a popular topic of research in the past years. They offer the possibility to determine class belongings for unseen examples using just a handful of examples for each class. Such models are trained on a wide range of classes and their respective examples, learning a decision metric in the process. Types of few-shot models include matching networks and prototypical networks. We show a new way of training prototypical few-shot models for just a single class. These models have the ability to predict the likelihood of an unseen query belonging to a group of examples without any given counterexamples. The difficulty here lies in the fact that no relative distance to other classes can be calculated via softmax. We solve this problem by introducing a null class centered around zero, and enforcing centering with batch normalization. Trained on the commonly used Omniglot data set, we obtain a classification accuracy of .98 on the matched test set, and of .8 on unmatched MNIST data. On the more complex MiniImageNet data set, test accuracy is .8. In addition, we propose a novel Gaussian layer for distance calculation in a prototypical network, which takes the support examples distribution rather than just their centroid into account. This extension shows promising results when a higher number of support examples is available.""","""This paper extends prototypical networks to few shot 1-way classification. The idea is to introduce a null class to compare against with a null prototype. The reviewers found the idea sound and interesting. However, the response was mixed because the reviewers were not convinced of the significance of the improvements. Furthermore, there were questions raised about the motivation that were not sufficiently addressed in the rebuttal. Batch normalization layers will not necessarily lead to zero mean if the trainable offset is not disabled. The authors did not clarify whether they disable this offset. I encourage the authors to resubmit after addressing the issues raised by the reviewers."""
596,"""ADAPTING PRETRAINED LANGUAGE MODELS FOR LONG DOCUMENT CLASSIFICATION""","['NLP', 'Deep Learning', 'Language Models', 'Long Document']","""Pretrained language models (LMs) have shown excellent results in achieving human like performance on many language tasks. However, the most powerful LMs have one significant drawback: a fixed-sized input. With this constraint, these LMs are unable to utilize the full input of long documents. In this paper, we introduce a new framework to handle documents of arbitrary lengths. We investigate the addition of a recurrent mechanism to extend the input size and utilizing attention to identify the most discriminating segment of the input. We perform extensive validating experiments on patent and Arxiv datasets, both of which have long text. We demonstrate our method significantly outperforms state-of-the-art results reported in recent literature.""","""This paper investigates ways of using pretrained transformer models like BERT for classification tasks on documents that are longer than a standard transformer can feasibly encode. This seems like a reasonable research goal, and none of the reviewers raised any concerns that seriously questioned the claims of the paper. However, neither of the more confident reviewers were convinced by the experiments in the paper (even after some private discussion) that the methods presented here represent a useful contribution. This is not an area that I (the area chair) know well, but it seems as though there aren't any easy fixes to suggest: Additional discussion of the choice of evaluation data (or new data), further ablations, and general refinement of the writing could help."""
597,"""Meta-Learning with Network Pruning for Overfitting Reduction""","['Meta-Learning', 'Few-shot Learning', 'Network Pruning', 'Generalization Analysis']","""Meta-Learning has achieved great success in few-shot learning. However, the existing meta-learning models have been evidenced to overfit on meta-training tasks when using deeper and wider convolutional neural networks. This means that we cannot improve the meta-generalization performance by merely deepening or widening the networks. To remedy such a deficiency of meta-overfitting, we propose in this paper a sparsity constrained meta-learning approach to learn from meta-training tasks a subnetwork from which first-order optimization methods can quickly converge towards the optimal network in meta-testing tasks. Our theoretical analysis shows the benefit of sparsity for improving the generalization gap of the learned meta-initialization network. We have implemented our approach on top of the widely applied Reptile algorithm assembled with varying network pruning routines including Dense-Sparse-Dense (DSD) and Iterative Hard Thresholding (IHT). Extensive experimental results on benchmark datasets with different over-parameterized deep networks demonstrate that our method can not only effectively ease meta-overfitting but also in many cases improve the meta-generalization performance when applied to few-shot classification tasks.""","""This paper proposes a regularization scheme for reducing meta-overfitting. After the rebuttal period, the reviewers all still had concerns about the significance of the paper's contributions and the thoroughness of the empirical study. As such, this paper isn't ready for publication at ICLR. See the reviewer's comments for detailed feedback on how to improve the paper. """
598,"""Training Data Distribution Search with Ensemble Active Learning""",[],"""Deep Neural Networks (DNNs) often rely on very large datasets for training. Given the large size of such datasets, it is conceivable that they contain certain samples that either do not contribute or negatively impact the DNN's optimization. Modifying the training distribution in a way that excludes such samples could provide an effective solution to both improve performance and reduce training time. In this paper, we propose to scale up ensemble Active Learning methods to perform acquisition at a large scale (10k to 500k samples at a time). We do this with ensembles of hundreds of models, obtained at a minimal computational cost by reusing intermediate training checkpoints. This allows us to automatically and efficiently perform a training data distribution search for large labeled datasets. We observe that our approach obtains favorable subsets of training data, which can be used to train more accurate DNNs than training with the entire dataset. We perform an extensive experimental study of this phenomenon on three image classification benchmarks (CIFAR-10, CIFAR-100 and ImageNet), analyzing the impact of initialization schemes, acquisition functions and ensemble configurations. We demonstrate that data subsets identified with a lightweight ResNet-18 ensemble remain effective when used to train deep models like ResNet-101 and DenseNet-121. Our results provide strong empirical evidence that optimizing the training data distribution can provide significant benefits on large scale vision tasks.""","""This paper proposes an ensemble-based active learning approach to select a subset of training data that yields the same or better performance. The proposed method is rather heuristic and lacks novel technical contribution that we expect for top ML conferences. No theoretical justification is provided to argue why the proposed method works. Additional studied are needed to fully convincingly demonstrate the benefit of the proposed method in terms computational cost. """
599,"""Linguistic Embeddings as a Common-Sense Knowledge Repository: Challenges and Opportunities""","['knowledge representation', 'word embeddings', 'sentence embeddings', 'common-sense knowledge']","""Many applications of linguistic embedding models rely on their value as pre-trained inputs for end-to-end tasks such as dialog modeling, machine translation, or question answering. This position paper presents an alternate paradigm: Rather than using learned embeddings as input features, we instead treat them as a common-sense knowledge repository that can be queried via simple mathematical operations within the embedding space. We show how linear offsets can be used to (a) identify an object given its description, (b) discover relations of an object given its label, and (c) map free-form text to a set of action primitives. Our experiments provide a valuable proof of concept that language-informed common sense reasoning, or `reasoning in the linguistic domain', lies within the grasp of the research community. In order to attain this goal, however, we must reconsider the way neural embedding models are typically trained an evaluated. To that end, we also identify three empirically-motivated evaluation metrics for use in the training of future embedding models.""","""This paper presents an analysis of the kind of knowledge captured by pre-trained word embeddings. The authors show various kinds of properties like relation between entities and their description, mapping high-level commands to discrete commands etc. The problem with the paper is that almost all of the properties shown in this work has already been established in existing literature. In fact, the methods presented here are the baseline algorithms to the identification of different properties presented in the paper. The term common-sense which is used often in the paper is mischaracterized. In NLP literature, common-sense is something that is implicitly understood by humans but which is not really captured by language. For example, going to a movie means you need parking is something that is well-understood by humans but is not implied by the language of going to the movie. The phenomenon described by the authors is general language processing. Towards the end the evaluation criteria for embedding proposed is also a well-established concept, its just that these metrics are not part of the training mechanism as yet. So if the contribution was on showing how those metrics can be integrated in training the embeddings, that would be a great contribution. I agree with the reviewer's critics and recommend a rejection as of now."""
600,"""Set Functions for Time Series""","['Time Series', 'Set functions', 'Irregularly sampling', 'Medical Time series', 'Dynamical Systems', 'Time series classification']","""Despite the eminent successes of deep neural networks, many architectures are often hard to transfer to irregularly-sampled and asynchronous time series that occur in many real-world datasets, such as healthcare applications. This paper proposes a novel framework for classifying irregularly sampled time series with unaligned measurements, focusing on high scalability and data efficiency. Our method SeFT (Set Functions for Time Series) is based on recent advances in differentiable set function learning, extremely parallelizable, and scales well to very large datasets and online monitoring scenarios. We extensively compare our method to competitors on multiple healthcare time series datasets and show that it performs competitively whilst significantly reducing runtime.""","""The paper investigates a new approach to classification of irregularly sampled and unaligned multi-modal time series via set function mapping. Experiment results on health care datasets are reported to demonstrate the effectiveness of the proposed approach. The idea of extending set functions to address missing value in time series is interesting and novel. The paper does a good job at motivating the methods and describing the proposed solution. The authors did a good job at addressing the concerns of the reviewers. During the discussion, some reviewers are still concerned about the empirical results, which do not match well with published results (even though the authors provided an explanation for it). In addition, the proposed method is only tested on the health care datasets, but the improvement is limited. Therefore it would be worthwhile investigating other time series datasets, and most important answering the important question in terms of what datasets/applications the proposed method works well. The paper is one step away for being a strong publication. We hope the reviews can help improve the paper for a strong publication in the future. """
601,"""CURSOR-BASED ADAPTIVE QUANTIZATION FOR DEEP NEURAL NETWORK""",[],"""Deep neural network (DNN) has rapidly found many applications in different scenarios. However, its large computational cost and memory consumption are barriers to computing restrained applications. DNN model quantization is a widely used method to reduce the DNN storage and computation burden by decreasing the bit width. In this paper, we propose a novel cursor based adaptive quantization method using differentiable architecture search (DAS). The multiple bits quantization mechanism is formulated as a DAS process with a continuous cursor that represents the possible quantization bit. The cursor-based DAS adaptively searches for the desired quantization bit for each layer. The DAS process can be solved via an alternative approximate optimization process, which is designed for mixed quantization scheme of a DNN model. We further devise a new loss function in the search process to simultaneously optimize accuracy and parameter size of the model. In the quantization step, based on a new strategy, the closest two integers to the cursor are adopted as the bits to quantize the DNN together to reduce the quantization noise and avoid the local convergence problem. Comprehensive experiments on benchmark datasets show that our cursor based adaptive quantization approach achieves the new state-of-the-art for multiple bits quantization and can efficiently obtain lower size model with comparable or even better classification accuracy.""","""This paper presents a method to compress DNNs by quantization. The core idea is to use NAS techniques to adaptively set quantization bits at each layer. The proposed method is shown to achieved good results on the standard benchmarks. Through our final discussion, one reviewer agreed to raise the score from Reject to Weak Reject, but still on negative side. Another reviewer was not satisfied with the authors rebuttal, particularly regarding the appropriateness of training strategy and evaluation. Moreover, as reviewers pointed out, there were so many unclear writings and explanations in the original manuscript. Although we admit that authors made great effort to address the comments, the revision seems too major and need to go through another complete peer reviewing. As there was no strong opinion to push this paper, Id like to recommend rejection. """
602,"""Global Concavity and Optimization in a Class of Dynamic Discrete Choice Models""","['Reinforcement learning', 'Policy Gradient', 'Global Concavity', 'Dynamic Discrete Choice Model']","""Discrete choice models with unobserved heterogeneity are commonly used Econometric models for dynamic Economic behavior which have been adopted in practice to predict behavior of individuals and firms from schooling and job choices to strategic decisions in market competition. These models feature optimizing agents who choose among a finite set of options in a sequence of periods and receive choice-specific payoffs that depend on both variables that are observed by the agent and recorded in the data and variables that are only observed by the agent but not recorded in the data. Existing work in Econometrics assumes that optimizing agents are fully rational and requires finding a functional fixed point to find the optimal policy. We show that in an important class of discrete choice models the value function is globally concave in the policy. That means that simple algorithms that do not require fixed point computation, such as the policy gradient algorithm, globally converge to the optimal policy. This finding can both be used to relax behavioral assumption regarding the optimizing agents and to facilitate Econometric analysis of dynamic behavior. In particular, we demonstrate significant computational advantages in using a simple implementation policy gradient algorithm over existing ""nested fixed point"" algorithms used in Econometrics.""","""The authors develop theoretical results showing that policy gradient methods converge to the globally optimal policy for a class of MDPs arising in econometrics. The authors show empirically that their methods perform on a standard benchmark. The paper contains interesting theoretical results. However, the reviewers were concerned about some aspects: 1) The paper does not explain to a general ML audience the significance of the models considered in the paper - where do these arise in practical applications? Further, the experiments are also limited to a small MDP - while this may be a standard benchmark in econometrics, it would be good to study the algorithm's scaling properties to larger models as is standard practice in RL. 2) The implications of the assumptions made in the paper are not explained clearly, nor are the relative improvements of the authors' work relative to prior work. In particular, one reviewer was concerned that the assumptions could be trivially satisfied and the authors' rebuttal did not clarify this sufficiently. Thus, I recommend rejection but am unsure since none of the reviewers nor I am an expert in this area."""
603,"""Scaleable input gradient regularization for adversarial robustness""","['adversarial robustness', 'gradient regularization', 'robust certification', 'robustness bounds']","""In this work we revisit gradient regularization for adversarial robustness with some new ingredients. First, we derive new per-image theoretical robustness bounds based on local gradient information. These bounds strongly motivate input gradient regularization. Second, we implement a scaleable version of input gradient regularization which avoids double backpropagation: adversarially robust ImageNet models are trained in 33 hours on four consumer grade GPUs. Finally, we show experimentally and through theoretical certification that input gradient regularization is competitive with adversarial training. Moreover we demonstrate that gradient regularization does not lead to gradient obfuscation or gradient masking.""","""(1) the authors emphasize the theoretical contribution and claims the bound are tighter. However, they did not directly compare with any certified robust methods, or previous bounds to support the argument. HM, not sure, need to check this (2) The empirical results look suboptimal. The authors did not convince me why they sampled 1000 images for test for a small CIFAR-10 dataset. The proposed method is 10% less robust comparing to Madry's in table 1. Seems ok, understand authors response 1) The theoretical analysis are not terribly new, which is just a straightforward application of first-order Taylor expansion. This idea could be traced back to the very first paper on adversarial examples FGSM (Goodfellow et al 2014). True 2) The novelty of the paper is to replace exact gradient (w.r.t input) by their finite difference and use it as a regularization. However, there is a misalignment between the theory and the proposed algorithm. The theory only encourages input gradient regularization, regardless to how it is evaluated, and previous studies have shown that this is not a very effective way to improve robustness. According to the experiments, the main empirical improvement comes from the finite difference implementation but the benefit of finite difference is not justified/discussed by the theory. Therefore, the empirical improvement are not supported by the theory. Authors have briefly respond to this issue in the discussion but I believe a more rigorous analysis is needed. This seems okay based on author response 3) Moreover, the empirical performance does not achieve state-of-the-art result. Indeed, there is a non-negligible gap (12%) between the obtained performance and some well-known baseline. Thus the empirical contribution is also limited. Yea, for some cases"""
604,"""Efficient Bi-Directional Verification of ReLU Networks via Quadratic Programming""",[],"""Neural networks are known to be sensitive to adversarial perturbations. To investigate this undesired behavior we consider the problem of computing the distance to the decision boundary (DtDB) from a given sample for a deep NN classifier. In this work we present an iterative procedure where in each step we solve a convex quadratic programming (QP) task. Solving the single initial QP already results in a lower bound on the DtDB and can be used as a robustness certificate of the classifier around a given sample. In contrast to currently known approaches our method also provides upper bounds used as a measure of quality for the certificate. We show that our approach provides better or competitive results in comparison with a wide range of existing techniques.""","""This article is concerned with sensitivity to adversarial perturbations. It studies the computation of the distance to the decision boundary from a given sample in order to obtain robustness certificates, and presents an iterative procedure to this end. This is a very relevant line of investigation. The reviewers found that the approach is different from previous ones (even if related quadratic constraints had been formulated in previous works). However, they expressed concerns with the presentation, missing details or intuition for the upper bounds, and the small size of the networks that are tested. The reviewers also mentioned that the paper could be clearer about the strengths and weaknesses of the proposed algorithm. The responses clarified a number of points from the initial reviews. However, some reviewers found that important aspects were still not addressed satisfactorily, specifically in relation to the justification of the approach to obtain upper bounds (although they acknowledge that the strategy seems at least empirically validated), and reiterated concerns about the scalability of the approach. Overall, this article ranks good, but not good enough. """
605,"""State Alignment-based Imitation Learning""","['Imitation learning', 'Reinforcement Learning']","""Consider an imitation learning problem that the imitator and the expert have different dynamics models. Most of existing imitation learning methods fail because they focus on the imitation of actions. We propose a novel state alignment-based imitation learning method to train the imitator by following the state sequences in the expert demonstrations as much as possible. The alignment of states comes from both local and global perspectives. We combine them into a reinforcement learning framework by a regularized policy update objective. We show the superiority of our method on standard imitation learning settings as well as the challenging settings in which the expert and the imitator have different dynamics models.""","""This paper seeks to adapt behavioural cloning to the case where demonstrator and learner have different dynamics (e.g. human demonstrator), by designing a state-based objective. The reviewers agreed the paper makes an important and interesting contribution, but were somewhat divided about whether the experiments were sufficiently impactful. They furthermore had additional concerns regarding the clarity of the paper and presentation of the method. Through discussion, it seems that these were sufficiently addressed that the consensus has moved towards agreeing that the paper sufficiently proves the concept to warrant publication (with one reviewer dissenting). I recommend acceptance, with the view that the authors should put a substantial amount of work into improving the presentation of the paper based on the feedback that has emerged from the discussion before the camera ready is submitted (if accepted)."""
606,"""Few-Shot One-Class Classification via Meta-Learning""","['meta-learning', 'few-shot learning', 'one-class classification', 'class-imbalance learning']","""Although few-shot learning and one-class classification have been separately well studied, their intersection remains rather unexplored. Our work addresses the few-shot one-class classification problem and presents a meta-learning approach that requires only few data examples from only one class to adapt to unseen tasks. The proposed method builds upon the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) and explicitly trains for few-shot class-imbalance learning, aiming to learn a model initialization that is particularly suited for learning one-class classification tasks after observing only a few examples of one class. Experimental results on datasets from the image domain and the time-series domain show that our model substantially outperforms the baselines, including MAML, and demonstrate the ability to learn new tasks from only few majority class samples. Moreover, we successfully learn anomaly detectors for a real world application involving sensor readings recorded during industrial manufacturing of workpieces with a CNC milling machine using only few examples from the normal class.""","""The authors present a combination of few-shot learning with one-class classification model of problems. The authors use the existing MAML algorithm and build upon it to present a learning algorithm for the problem. As pointed out by the reviewers, the technical contributions of the paper are quite minimal and after the author response period the reviewers have not changed their minds. However, the authors have significantly changed the paper from its initial submission and as of now it needs to be reviewed again. I recommend authors to resubmit their paper to another conference. As of now, I recommend rejection."""
607,"""Learning to Guide Random Search""","['Random search', 'Derivative-free optimization', 'Learning continuous control']","""We are interested in derivative-free optimization of high-dimensional functions. The sample complexity of existing methods is high and depends on problem dimensionality, unlike the dimensionality-independent rates of first-order methods. The recent success of deep learning suggests that many datasets lie on low-dimensional manifolds that can be represented by deep nonlinear models. We therefore consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold. We develop an online learning approach that learns this manifold while performing the optimization. In other words, we jointly learn the manifold and optimize the function. Our analysis suggests that the presented method significantly reduces sample complexity. We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems. Our method achieves significantly lower sample complexity than Augmented Random Search, Bayesian optimization, covariance matrix adaptation (CMA-ES), and other derivative-free optimization algorithms.""","""This paper develops a methodology to perform global derivative-free optimization of high dimensional functions through random search on a lower dimensional manifold that is carefully learned with a neural network. In thorough experiments on reinforcement learning tasks and a real world airfoil optimization task, the authors demonstrate the effectiveness of their method compared to strong baselines. The reviewers unanimously agreed that the paper was above the bar for acceptance and thus the recommendation is to accept. An interesting direction for future work might be to combine this methodology with REMBO. REMBO seems competitive in the experiments (but maybe doesn't work as well early on since the model needs to learn the manifold). Learning both the low dimensional manifold to do the optimization over and then performing a guided search through Bayesian optimization instead of a random strategy might get the best of both worlds? """
608,"""Convolutional Tensor-Train LSTM for Long-Term Video Prediction""","['Tensor decomposition', 'Video prediction']","""Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames.Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations. A potential solution is to extend to higher-order spatio-temporal recurrent models. However, such a model requires a large number of parameters and operations, making it intractable to learn in practice and is prone to overfitting. In this work, we propose convolutional tensor-train LSTM (Conv-TT-LSTM), which learns higher-orderConvolutional LSTM (ConvLSTM) efficiently using convolutional tensor-train decomposition (CTTD). Our proposed model naturally incorporates higher-order spatio-temporal information at a small cost of memory and computation by using efficient low-rank tensor representations. We evaluate our model on Moving-MNIST and KTH datasets and show improvements over standard ConvLSTM and better/comparable results to other ConvLSTM-based approaches, but with much fewer parameters.""","""This paper proposes Conv-TT-LSTM for long-term video prediction. The proposed method saves memory and computation by low-rank tensor representations via tensor decomposition and is evaluated in Moving MNIST and KTH datasets. All reviews argue that the novelty of the paper does not meet the standard of ICLR. In the rebuttal, the authors polish the experiment design, which fails to change any reviewers decision. Overall, the paper is not good enough for ICLR."""
609,"""Agent as Scientist: Learning to Verify Hypotheses""",[],"""In this paper, we formulate hypothesis verification as a reinforcement learning problem. Specifically, we aim to build an agent that, given a hypothesis about the dynamics of the world can take actions to generate observations which can help predict whether the hypothesis is true or false. Our first observation is that agents trained end-to-end with the reward fail to learn to solve this problem. In order to train the agents, we exploit the underlying structure in the majority of hypotheses -- they can be formulated as triplets (pre-condition, action sequence, post-condition). Once the agents have been pretrained to verify hypotheses with this structure, they can be fine-tuned to verify more general hypotheses. Our work takes a step towards a ``scientist agent'' that develops an understanding of the world by generating and testing hypotheses about its environment.""","""The authors propose an agent that can act in an RL environment to verify hypotheses about it, using hypotheses formulated as triplets of pre-condition, action sequence, and post-condition variables. Training then proceeds in multiple stages, including a pretraining phase using a reward function that encourages the agent to learn the hypothesis triplets. Strengths: Reviewers generally agreed its an important problem and interesting approach Weaknesses: There were some points of convergence among reviewer comments: lack of connection to existing literature (ie to causal reasoning and POMDPs), and concerns about the robustness of the results (which were only reporting the max seeds). Two reviewers also found the use of natural language to unnecessarily complicate their setup. Overall, clarity seemed to be an issue. Other comments concerned lack of comparisons, analyses, and suggestions for alternate methods of rewarding the agent (to improve understandability). The authors deserve credit for their responsiveness to reviewer comments and for the considerable amount of additional work done in the rebuttal period. However, these efforts ultimately didnt satisfy the reviewers enough to change their scores. Although I find that the additional experiments and revisions have significantly strengthened the paper, I don't believe it's currently ready for publication at ICLR. I urge the authors to focus on clearly presenting and integrating these new results in a future submission, which I look forward to. """
610,"""A Group-Theoretic Framework for Knowledge Graph Embedding""","['group theory', 'knowledge graph embedding', 'representation learning']","""We have rigorously proved the existence of a group algebraic structure hidden in relational knowledge embedding problems, which suggests that a group-based embedding framework is essential for model design. Our theoretical analysis explores merely the intrinsic property of the embedding problem itself without introducing extra designs. Using the proposed framework, one could construct embedding models that naturally accommodate all possible local graph patterns, which are necessary for reproducing a complete graph from atomic knowledge triplets. We reconstruct many state-of-the-art models from the framework and re-interpret them as embeddings with different groups. Moreover, we also propose new instantiation models using simple continuous non-abelian groups.""","""This paper presents a rigorous mathematical framework for knowledge graph embedding. The paper received 3 reviews. R1 recommends Weak Reject based on concerns about the contributions of the paper; the authors, in their response, indicate that R1 may have been confused about what the contributions were meant to be. R2 initially recommended Reject, based on concerns that the paper was overselling its claims, and on the clarity and quality of writing. After the author response, R2 raised their score to Weak Reject but still felt that their main concerns had gone unanswered, and in particular that the authors seemed unwilling to tone down their claims. R3 recommends Weak Reject, indicating that they found the paper difficult to follow and gave some specific technical concerns. The authors, in their response, express confusion about R3's comments and suggest that R3 also did not understand the paper. However, in light of these unanimous Weak Reject reviews, we cannot recommend acceptance at this time. We understand that the authors may feel that some reviewers did not properly understand or appreciate the contribution, but all three reviewers are researchers working at highly-ranked institutions and thus are fairly representative of the attendees of ICLR; we hope that their points of confusion and concern, as reflected in their reviews, will help authors to clarify a revision of the paper for another venue. """
611,"""Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions""",[],"""Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier. Recently, various kinds of adversarial attack methods have been proposed, most of which focus on adding small perturbations to input images. Despite the success of existing approaches, the way to generate realistic adversarial images with small perturbations remains a challenging problem. In this paper, we aim to address this problem by proposing a novel adversarial method, which generates adversarial examples by imposing not only perturbations but also spatial distortions on input images, including scaling, rotation, shear, and translation. As humans are less susceptible to small spatial distortions, the proposed approach can produce visually more realistic attacks with smaller perturbations, able to deceive classifiers without affecting human predictions. We learn our method by amortized techniques with neural networks and generate adversarial examples efficiently by a forward pass of the networks. Extensive experiments on attacking different types of non-robustified classifiers and robust classifiers with defence show that our method has state-of-the-art performance in comparison with advanced attack parallels.""","""The method proposed and explored here is to introduce small spatial distortions, with the goal of making them undetectable by humans but affecting the classification of the images. As reviewers point out, very similar methods have been tested before. The methods are also only tested on a few low-resolution datasets. The reviewers are unanimous in their judgement that the method is not novel enough, and the authors' rebuttals have not convinced the reviewers or me about the opposite."""
612,"""Negative Sampling in Variational Autoencoders""","['Variational Autoencoder', 'generative modelling', 'out-of-distribution detection']","""We propose negative sampling as an approach to improve the notoriously bad out-of-distribution likelihood estimates of Variational Autoencoder models. Our model pushes latent images of negative samples away from the prior. When the source of negative samples is an auxiliary dataset, such a model can vastly improve on baselines when evaluated on OOD detection tasks. Perhaps more surprisingly, we present a fully unsupervised variant that can also significantly improve detection performance: using the output of the generator as a source of negative samples results in a fully unsupervised model that can be interpreted as adversarially trained. ""","""This paper proposes to improve VAEs' modeling of out-of-distribution examples, by pushing the latent representations of negative examples away from the prior. The general idea seems interesting, at least to some of the reviewers and to me. However, the paper seems premature, even after revision, as it leaves unclear some of the justification and analysis of the approach, especially in the fully unsupervised case. I think that with some more work it could be a very compelling contribution to a future conference."""
613,"""DBA: Distributed Backdoor Attacks against Federated Learning""","['distributed backdoor attack', 'federated learning']","""Backdoor attacks aim to manipulate a subset of training data by injecting adversarial triggers such that machine learning models trained on the tampered dataset will make arbitrarily (targeted) incorrect prediction on the testset with the same trigger embedded. While federated learning (FL) is capable of aggregating information provided by different parties for training a better model, its distributed learning methodology and inherently heterogeneous data distribution across parties may bring new vulnerabilities. In addition to recent centralized backdoor attacks on FL where each party embeds the same global trigger during training, we propose the distributed backdoor attack (DBA) --- a novel threat assessment framework developed by fully exploiting the distributed nature of FL. DBA decomposes a global trigger pattern into separate local patterns and embed them into the training set of different adversarial parties respectively. Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data. We conduct extensive experiments to show that the attack success rate of DBA is significantly higher than centralized backdoors under different settings. Moreover, we find that distributed attacks are indeed more insidious, as DBA can evade two state-of-the-art robust FL algorithms against centralized backdoors. We also provide explanations for the effectiveness of DBA via feature visual interpretation and feature importance ranking. To further explore the properties of DBA, we test the attack performance by varying different trigger factors, including local trigger variations (size, gap, and location), scaling factor in FL, data distribution, and poison ratio and interval. Our proposed DBA and thorough evaluation results shed lights on characterizing the robustness of FL.""","""Thanks for the discussion, all. This paper proposes an attack strategy against federated learning. Reviewers put this in the top tier, and the authors responded appropriately to their criticisms. """
614,"""Relative Pixel Prediction For Autoregressive Image Generation""","['Image Generation', 'Autoregressive']","""In natural images, transitions between adjacent pixels tend to be smooth and gradual, a fact that has long been exploited in image compression models based on predictive coding. In contrast, existing neural autoregressive image generation models predict the absolute pixel intensities at each position, which is a more challenging problem. In this paper, we propose to predict pixels relatively, by predicting new pixels relative to previously generated pixels (or pixels from the conditioning context, when available). We show that this form of prediction fare favorably to its absolute counterpart when used independently, but their coordination under an unified probabilistic model yields optimal performance, as the model learns to predict sharp transitions using the absolute predictor, while generating smooth transitions using the relative predictor. Experiments on multiple benchmarks for unconditional image generation, image colorization, and super-resolution indicate that our presented mechanism leads to improvements in terms of likelihood compared to the absolute prediction counterparts. ""","""All reviewers rated this submission as a weak reject and there was no author response. The AC recommends rejection."""
615,"""InfoCNF: Efficient Conditional Continuous Normalizing Flow Using Adaptive Solvers""","['continuous normalizing flows', 'conditioning', 'adaptive solvers', 'gating networks']","""Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation. However, conditioning CNFs on signals of interest for conditional image generation and downstream predictive tasks is inefficient due to the high-dimensional latent code generated by the model, which needs to be of the same size as the input data. In this paper, we propose InfoCNF, an efficient conditional CNF that partitions the latent space into a class-specific supervised code and an unsupervised code that shared among all classes for efficient use of labeled information. Since the partitioning strategy (slightly) increases the number of function evaluations (NFEs), InfoCNF also employs gating networks to learn the error tolerances of its ordinary differential equation (ODE) solvers for better speed and performance. We show empirically that InfoCNF improves the test accuracy over the baseline while yielding comparable likelihood scores and reducing the NFEs on CIFAR10. Furthermore, applying the same partitioning strategy in InfoCNF on time-series data helps improve extrapolation performance. ""","""This paper presents a conditional CNF based on the InfoGAN structure to improve ODE solvers. Reviewers appreciate that the approach shows improved performances over the baseline models. Reviewers all note, however, that this paper is weak in clearly defining the problem and explaining the approach and the results. While the authors have addressed some of the reviewers concerns through their rebuttal, reviewers still remain concerned about the clarity of the paper. I thank the authors for submitting to ICLR and hope to see a revised paper at a future venue."""
616,"""Random Bias Initialization Improving Binary Neural Network Training""","['Binarized Neural Network', 'Activation function', 'Initialization', 'Neural Network Acceleration']","""Edge intelligence especially binary neural network (BNN) has attracted considerable attention of the artificial intelligence community recently. BNNs significantly reduce the computational cost, model size, and memory footprint. However, there is still a performance gap between the successful full-precision neural network with ReLU activation and BNNs. We argue that the accuracy drop of BNNs is due to their geometry. We analyze the behaviour of the full-precision neural network with ReLU activation and compare it with its binarized counterpart. This comparison suggests random bias initialization as a remedy to activation saturation in full-precision networks and leads us towards an improved BNN training. Our numerical experiments confirm our geometric intuition.""","""The article studies the behaviour of binary and full precision ReLU networks towards explaining differences in performance and suggests a random bias initialisation strategy. The reviewers agree that, while closing the gap between binary networks and full precision networks is an interesting problem, the article cannot be accepted in its current form. They point out that more extensive theoretical analysis and experiments would be important, as well as improving the writing. The authors did not provide a rebuttal nor a revision. """
617,"""A Random Matrix Perspective on Mixtures of Nonlinearities in High Dimensions""",[],"""One of the distinguishing characteristics of modern deep learning systems is that they typically employ neural network architectures that utilize enormous numbers of parameters, often in the millions and sometimes even in the billions. While this paradigm has inspired significant research on the properties of large networks, relatively little work has been devoted to the fact that these networks are often used to model large complex datasets, which may themselves contain millions or even billions of constraints. In this work, we focus on this high-dimensional regime in which both the dataset size and the number of features tend to infinity. We analyze the performance of a simple regression model trained on the random features pseudo-formula for a random weight matrix pseudo-formula and random bias vector pseudo-formula , obtaining an exact formula for the asymptotic training error on a noisy autoencoding task. The role of the bias can be understood as parameterizing a distribution over activation functions, and our analysis actually extends to general such distributions, even those not expressible with a traditional additive bias. Intruigingly, we find that a mixture of nonlinearities can outperform the best single nonlinearity on the noisy autoecndoing task, suggesting that mixtures of nonlinearities might be useful for approximate kernel methods or neural network architecture design.""","""In this work, the authors focus on the high-dimensional regime in which both the dataset size and the number of features tend to infinity. They analyze the performance of a simple regression model trained on the random features and revealed several interesting and important observations. Unfortunately, the reviewers could not reach a consensus as to whether this paper had sufficient novelty to merit acceptance at this time. Incorporating their feedback would move the paper closer towards the acceptance threshold."""
618,"""Building Deep Equivariant Capsule Networks""","['Capsule networks', 'equivariance']","""Capsule networks are constrained by the parameter-expensive nature of their layers, and the general lack of provable equivariance guarantees. We present a variation of capsule networks that aims to remedy this. We identify that learning all pair-wise part-whole relationships between capsules of successive layers is inefficient. Further, we also realise that the choice of prediction networks and the routing mechanism are both key to equivariance. Based on these, we propose an alternative framework for capsule networks that learns to projectively encode the manifold of pose-variations, termed the space-of-variation (SOV), for every capsule-type of each layer. This is done using a trainable, equivariant function defined over a grid of group-transformations. Thus, the prediction-phase of routing involves projection into the SOV of a deeper capsule using the corresponding function. As a specific instantiation of this idea, and also in order to reap the benefits of increased parameter-sharing, we use type-homogeneous group-equivariant convolutions of shallower capsules in this phase. We also introduce an equivariant routing mechanism based on degree-centrality. We show that this particular instance of our general model is equivariant, and hence preserves the compositional representation of an input under transformations. We conduct several experiments on standard object-classification datasets that showcase the increased transformation-robustness, as well as general performance, of our model to several capsule baselines.""","""This paper combine recent ideas from capsule networks and group-equivariant neural networks to form equivariant capsules, which is a great idea. The exposition is clear and the experiments provide a very interesting analysis and results. I believe this work will be very well received by the ICLR community."""
619,"""Deep k-NN for Noisy Labels""",[],"""Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify. In this paper, we provide an empirical study showing that a simple pseudo-formula -nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than some recently proposed methods. We also provide new statistical guarantees into its efficacy.""","""The paper proposed and analyze a k-NN method for identifying corrupted labels for training deep neural networks. Although a reviewer pointed out that the noisy k-NN contribution is interesting, I think the paper can be much improved further due to the followings: (a) Lack of state-of-the-art baselines to compare. (b) Lack of important recent related work, i.e., ""Robust Inference via Generative Classifiers for Handling Noisy Labels"" from ICML 2019 (see pseudo-url). The paper also runs a clustering-like algorithm for handling noisy labels, and the authors should compare and discuss why the proposed method is superior. (c) Poor write-up, e.g., address what is missing in existing methods from many different perspectives as this is a quite well-studied popular problem. Hence, I recommend rejection."""
620,"""Denoising Improves Latent Space Geometry in Text Autoencoders""","['controllable text generation', 'autoencoders', 'denoising', 'latent space geometry']","""Neural language models have recently shown impressive gains in unconditional text generation, but controllable generation and manipulation of text remain challenging. In particular, controlling text via latent space operations in autoencoders has been difficult, in part due to chaotic latent space geometry. We propose to employ adversarial autoencoders together with denoising (referred as DAAE) to drive the latent space to organize itself. Theoretically, we prove that input sentence perturbations in the denoising approach encourage similar sentences to map to similar latent representations. Empirically, we illustrate the trade-off between text-generation and autoencoder-reconstruction capabilities, and our model significantly improves over other autoencoder variants. Even from completely unsupervised training, DAAE can successfully alter the tense/sentiment of sentences via simple latent vector arithmetic.""","""This work presents a simple technique for improving the latent space geometry of text autoencoders. The strengths of the paper lie in the simplicity of the method, and results show that the technique improves over the considered baselines. However, some reviewers expressed concerns over the presented theory for why input noise helps, and did not address concerns that the theory was useful. The paper should be improved if Section 4 were instead rewritten to focus on providing intuition, either with empirical analysis, results on a toy task, or clear but high level discussion of why the method helps. The current theorem statements seem either unnecessary or make strong assumptions that don't hold in practice. As a result, Section 4 in its current form is not in service to the reader's understanding why the simple method works. Finally, further improvements to the paper could be made with comparisons to additional baselines from prior work as suggested by reviewers."""
621,"""Neural Clustering Processes""","['amortized inference', 'probabilistic clustering', 'mixture models', 'exchangeability', 'spike sorting']","""Mixture models, a basic building block in countless statistical models, involve latent random variables over discrete spaces, and existing posterior inference methods can be inaccurate and/or very slow. In this work we introduce a novel deep learning architecture for efficient amortized Bayesian inference over mixture models. While previous approaches to amortized clustering assumed a fixed or maximum number of mixture components and only amortized over the continuous parameters of each mixture component, our method amortizes over the local discrete labels of all the data points, and performs inference over an unbounded number of mixture components. The latter property makes our method natural for the challenging case of nonparametric Bayesian models, where the number of mixture components grows with the dataset. Our approach exploits the exchangeability of the generative models and is based on mapping distributed, permutation-invariant representations of discrete arrangements into varying-size multinomial conditional probabilities. The resulting algorithm parallelizes easily, yields iid samples from the approximate posteriors along with a normalized probability estimate of each sample (a quantity generally unavailable using Markov Chain Monte Carlo) and can easily be applied to both conjugate and non-conjugate models, as training only requires samples from the generative model. We also present an extension of the method to models of random communities (such as infinite relational or stochastic block models). As a scientific application, we present a novel approach to neural spike sorting for high-density multielectrode arrays. ""","""This paper uses neural amortized inference for clustering processes to automatically tune the number of clusters based on the observed data. The main contribution of the paper is the design of the posterior parametrization based on the DeepSet method. The reviewers feel that the paper has limited novelty since it mainly follows from existing methodologies. Also, experiments are limited and not all comparisons are made. """
622,"""A Base Model Selection Methodology for Efficient Fine-Tuning""","['transfer learning', 'fine-tuning', 'parameter transfer']","""While the accuracy of image classification achieves significant improvement with deep Convolutional Neural Networks (CNN), training a deep CNN is a time-consuming task because it requires a large amount of labeled data and takes a long time to converge even with high performance computing resources. Fine-tuning, one of the transfer learning methods, is effective in decreasing time and the amount of data necessary for CNN training. It is known that fine-tuning can be performed efficiently if the source and the target tasks have high relativity. However, the technique to evaluate the relativity or transferability of trained models quantitatively from their parameters has not been established. In this paper, we propose and evaluate several metrics to estimate the transferability of pre-trained CNN models for a given target task by featuremaps of the last convolutional layer. We found that some of the proposed metrics are good predictors of fine-tuned accuracy, but their effectiveness depends on the structure of the network. Therefore, we also propose to combine two metrics to get a generally applicable indicator. The experimental results reveal that one of the combined metrics is well correlated with fine-tuned accuracy in a variety of network structure and our method has a good potential to reduce the burden of CNN training.""","""This paper proposes to speed up finetuning of pretrained deep image classification networks by predicting the success rate of a zoom of pre-trained networks without completely running them on the test set. The idea is that a sensible measure from the output layer might well correlate with the performance of the network. All reviewers consider this is an important problem and a good direction to make the effort. However, various concerns are raised and all reviewers unanimously rate weak reject. The major concerns include the unclear relationship between the metrics and the fine-tuning performance, non- comprehensive experiments, poor writing quality. The authors respond to Reviewers concerns but did not change the major concerns. The ACs concur the concerns and the paper can not be accepted at its current state."""
623,"""Variational Autoencoders for Opponent Modeling in Multi-Agent Systems""","['reinforcement learning', 'multi-agent systems', 'representation learning']","""Multi-agent systems exhibit complex behaviors that emanate from the interactions of multiple agents in a shared environment. In this work, we are interested in controlling one agent in a multi-agent system and successfully learn to interact with the other agents that have fixed policies. Modeling the behavior of other agents (opponents) is essential in understanding the interactions of the agents in the system. By taking advantage of recent advances in unsupervised learning, we propose modeling opponents using variational autoencoders. Additionally, many existing methods in the literature assume that the opponent models have access to opponent's observations and actions during both training and execution. To eliminate this assumption, we propose a modification that attempts to identify the underlying opponent model, using only local information of our agent, such as its observations, actions, and rewards. The experiments indicate that our opponent modeling methods achieve equal or greater episodic returns in reinforcement learning tasks against another modeling method.""","""The present work addresses the problem of opponent modeling in multi-agent learning settings, and propose an approach based on variational auto-encoders (VAEs). Reviewers consider the approach natural and novel empirical results area presented to show that the proposed approach can accurately model opponents in partially observable settings. Several concerns were addressed by the authors during the rebuttal phased. A key remaining concern is the size of the contribution. Reviewers suggest that a deeper conceptual development, e.g., based on empirical insights, is required."""
624,"""Graph Neural Networks For Multi-Image Matching""","['Graph Neural Networks', 'Multi-image Matching']","""In geometric computer vision applications, multi-image feature matching gives more accurate and robust solutions compared to simple two-image matching. In this work, we formulate multi-image matching as a graph embedding problem, then use a Graph Neural Network to learn an appropriate embedding function for aligning image features. We use cycle consistency to train our network in an unsupervised fashion, since ground truth correspondence can be difficult or expensive to acquire. Geometric consistency losses are added to aid training, though unlike optimization based methods no geometric information is necessary at inference time. To the best of our knowledge, no other works have used graph neural networks for multi-image feature matching. Our experiments show that our method is competitive with other optimization based approaches.""","""The paper proposes a method for learning multi-image matching using graph neural networks. The model is learned by making use of cycle consistency constraints and geometric consistency, and it achieves a performance that is comparable to the state of the art. While the reviewers view the proposed method interesting in general, they raised issues regarding the evaluation, which is limited in terms of both the chosen datasets and prior methods. After rounds of discussion, the reviewers reached a consensus that the submission is not mature enough to be accepted for this venue at this time. Therefore, I recommend rejecting this submission."""
625,"""Augmenting Transformers with KNN-Based Composite Memory""","['knn', 'memory-augmented networks', 'language generation', 'dialogue']","""Various machine learning tasks can benefit from access to external information of different modalities, such as text and images. Recent work has focused on learning architectures with large memories capable of storing this knowledge. We propose augmenting Transformer neural networks with KNN-based Information Fetching (KIF) modules. Each KIF module learns a read operation to access fixed external knowledge. We apply these modules to generative dialogue modeling, a challenging task where information must be flexibly retrieved and incorporated to maintain the topic and flow of conversation. We demonstrate the effectiveness of our approach by identifying relevant knowledge from Wikipedia, images, and human-written dialogue utterances, and show that leveraging this retrieved information improves model performance, measured by automatic and human evaluation.""","""This paper augments transformer encoder-decoder networks architecture with k nearest neighbors to fetch knowledge or information related to the previous conversation, and demonstrates improvements through manual and automated evaluation. Reviewers note the fact that the approach is simple and clean and results in significant improvements, however, the approach is incremental over the previous work (including pseudo-url). Furthermore, although the authors improved the article in the light of reviewer suggestions (i.e., rushed analysis, not so clear descriptions) and some reviewers increased their scores, none of them actually marked the paper as an accept or a strong accept."""
626,"""Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds""","['deep learning', 'active learning', 'batch active learning']","""We design a new algorithm for batch active learning with deep neural network models. Our algorithm, Batch Active learning by Diverse Gradient Embeddings (BADGE), samples groups of points that are disparate and high-magnitude when represented in a hallucinated gradient space, a strategy designed to incorporate both predictive uncertainty and sample diversity into every selected batch. Crucially, BADGE trades off between diversity and uncertainty without requiring any hand-tuned hyperparameters. While other approaches sometimes succeed for particular batch sizes or architectures, BADGE consistently performs as well or better, making it a useful option for real world active learning problems.""","""The paper provides a simple method of active learning for classification using deep nets. The method is motivated by choosing examples based on an embedding computed that represents the last layer gradients, which is shown to have a connection to a lower bound of model change if labeled. The algorithm is simple and easy to implement. The method is justified by convincing experiments. The reviewers agree that the rebuttal and revisions cleared up any misunderstandings. This is a solid empirical work on an active learning technique that seems to have a lot of promise. Accept. """
627,"""Controlling generative models with continuous factors of variations""","['Generative models', 'factor of variation', 'GAN', 'beta-VAE', 'interpretable representation', 'interpretability']","""Recent deep generative models can provide photo-realistic images as well as visual or textual content embeddings useful to address various tasks of computer vision and natural language processing. Their usefulness is nevertheless often limited by the lack of control over the generative process or the poor understanding of the learned representation. To overcome these major issues, very recent works have shown the interest of studying the semantics of the latent space of generative models. In this paper, we propose to advance on the interpretability of the latent space of generative models by introducing a new method to find meaningful directions in the latent space of any generative model along which we can move to control precisely specific properties of the generated image like position or scale of the object in the image. Our method is weakly supervised and particularly well suited for the search of directions encoding simple transformations of the generated image, such as translation, zoom or color variations. We demonstrate the effectiveness of our method qualitatively and quantitatively, both for GANs and variational auto-encoders.""","""Following the revision and the discussion, all three reviewers agree that the paper provides an interesting contribution to the area of generative image modeling. Accept."""
628,"""A Theory of Usable Information under Computational Constraints""",[],"""We propose a new framework for reasoning about information in complex systems. Our foundation is based on a variational extension of Shannons information theory that takes into account the modeling power and computational constraints of the observer. The resulting predictive V-information encompasses mutual information and other notions of informativeness such as the coefficient of determination. Unlike Shannons mutual information and in violation of the data processing inequality, V-information can be created through computation. This is consistent with deep neural networks extracting hierarchies of progressively more informative features in representation learning. Additionally, we show that by incorporating computational constraints, V-information can be reliably estimated from data even in high dimensions with PAC-style guarantees. Empirically, we demonstrate predictive V-information is more effective than mutual information for structure learning and fair representation learning.""","""All reviewers unanimously accept the paper."""
629,"""MaskConvNet: Training Efficient ConvNets from Scratch via Budget-constrained Filter Pruning""","['Structured Pruning', 'Sparsity Regularization', 'Budget-Aware']","""In this paper, we propose a framework, called MaskConvNet, for ConvNets filter pruning. MaskConvNet provides elegant support for training budget-aware pruned networks from scratch, by adding a simple mask module to a ConvNet architecture. MaskConvNet enjoys several advantages - (1) Flexible, the mask module can be integrated with any ConvNets in a plug-and-play manner. (2) Simple, the mask module is implemented by a hard Sigmoid function with a small number of trainable mask variables, adding negligible memory and computational overheads to the networks during training. (3) Effective, it is able to achieve competitive pruning rate while maintaining comparable accuracy with the baseline ConvNets without pruning, regardless of the datasets and ConvNet architectures used. (4) Fast, it is observed that the number of training epochs required by MaskConvNet is close to training a baseline without pruning. (5) Budget-aware, with a sparsity budget on target metric (e.g. model size and FLOP), MaskConvNet is able to train in a way that the optimizer can adaptively sparsify the network and automatically maintain sparsity level, till the pruned network produces good accuracy and fulfill the budget constraint simultaneously. Results on CIFAR-10 and ImageNet with several ConvNet architectures show that MaskConvNet works competitively well compared to previous pruning methods, with budget-constraint well respected. Code is available at pseudo-url. We hope MaskConvNet, as a simple and general pruning framework, can address the gaps in existing literate and advance future studies to push the boundaries of neural network pruning.""","""This paper presents a method to learn a pruned convolutional network during conventional training. Pruning the network has advantages (in deployment) of reducing the final model size and reducing the required FLOPS for compute. The method adds a pruning mask on each layer with an additional sparsity loss on the mask variables. The method avoids the cost of a train-prune-retrain optimization process that has been used in several earlier papers. The method is evaluated on CIFAR-10 and ImageNet with three standard convolutional network architectures. The results show comparable performance to the original networks with the learned sparse networks. The reviewers made many substantial comments on the paper and most of these were addressed in the author response and subsequent discussion. For example, Reviewer1 mentioned two other papers that promote sparsity implicitly during training (Q3), and the authors acknowledged the omission and described how those methods had less flexibility on a target metric (FLOPS) that is not parameter size. Many of the author responses described changes to an updated paper that would clarify the claims and results (R1: Q2-7, R2:Q3). However, the reviewers raised many concerns for the original paper and they did not see an updated paper that contains the proposed revisions. Given the numerous concerns with the original submission, the reviewers wanted to see the revised paper to assess whether their concerns had been addressed adequately. Additionally, the paper does not have a comparison experiment with state-of the art results, and the current results were not sufficiently convincing for the reviewers. 
Reviewer 1's comments and the author responses to questions 13--15 suggest that the experimental results with ResNet-34 are inadequate to show the benefits of the approach, but results for the proposed method with the larger ResNet-50 (which could show benefits) are not yet ready. The current paper is not ready for publication. """
630,"""Transfer Active Learning For Graph Neural Networks""","['Active Learning', 'Graph Neural Networks', 'Transfer Learning', 'Reinforcement Learning']","""Graph neural networks have been proved very effective for a variety of prediction tasks on graphs such as node classification. Generally, a large number of labeled data are required to train these networks. However, in reality it could be very expensive to obtain a large number of labeled data on large-scale graphs. In this paper, we studied active learning for graph neural networks, i.e., how to effectively label the nodes on a graph for training graph neural networks. We formulated the problem as a sequential decision process, which sequentially label informative nodes, and trained a policy network to maximize the performance of graph neural networks for a specific task. Moreover, we also studied how to learn a universal policy for labeling nodes on graphs with multiple training graphs and then transfer the learned policy to unseen graphs. Experimental results on both settings of a single graph and multiple training graphs (transfer learning setting) prove the effectiveness of our proposed approaches over many competitive baselines. ""","""Paper proposes a method for active learning on graphs. Reviewers found the presentation of the method confusing and somewhat lacking novelty in light of existing works (some of which were not compared to). After the rebuttal and revisions, reviewers minds were not changed from rejection. """
631,"""Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach""","['interpretable machine learning', 'information bottleneck principle', 'black-box']","""Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information theoretic principle, information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed about an input (briefness), and informative about a decision made by a black-box system on that input (comprehensive). We evaluate VIBI on three datasets and compare with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity evaluated by human and quantitative metrics.""","""The authors present a system-agnostic interpretable method based on the idea of that provides a brief (=compressed) but comprehensive (=informative) explanation. Their system is build upon the idea of VIB. The authors compare against 3 state-of-the-art interpretable machine learning methods and the evaluation is terms of interpretability (=human understandable) and fidelity (=accuracy of approximating black-box model). Overall, all reviewers agreed that the topic of model interpretability is an important one and the novel connection between IB and interpretable data-summaries is a very natural one. This manuscript has generated a lot of discussion among the reviewers during the rebuttal and there are a number of concerns that are currently preventing me from recommending this paper for acceptance. The first concern relates to the lack of comparison against attention methods (I agree with the authors that this is a model-specific solution whereas they propose a model-agnostic one), however attention is currently the elephant in room and the first thing someone thinks of when thinking of interpretability. As such, the authors should have presented such a comparison. The second concern relates to the human evaluation protocol which could be significantly improved (Why 100 samples from all models but 200 for VIBI? Given the small set of results, are these model differences significant? Similarly, assuming that we have multiple annotations per sample, what is the variance in the annotations?). This paper is currently borderline and given reviewers' concerns and the limited space in the conference program I cannot recommend acceptance of this paper. """
632,"""Adversarially Robust Representations with Smooth Encoders""","['Adversarial Learning', 'Robust Representations', 'Variational AutoEncoder', 'Wasserstein Distance', 'Variational Inference']","""This paper studies the undesired phenomena of over-sensitivity of representations learned by deep networks to semantically-irrelevant changes in data. We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO). We show that the ELBO fails to control the behaviour of the encoder out of the support of the empirical data distribution and this behaviour of the VAE can lead to extreme errors in the learned representation. This is a key hurdle in the effective use of representations for data-efficient learning and transfer. To address this problem, we propose to augment the data with specifications that enforce insensitivity of the representation with respect to families of transformations. To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point. For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations. We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure. ""","""This paper proposes an novel way of expanding our VAE toolkit by tying it to adversarial robustness. It should be thus of interest to the respective communities."""
633,"""Encoding Musical Style with Transformer Autoencoders""","['music generation', 'sequence-to-sequence model', 'controllable generation']","""We consider the problem of learning high-level controls over the global structure of sequence generation, particularly in the context of symbolic music generation with complex language models. In this work, we present the Transformer autoencoder, which aggregates encodings of the input data across time to obtain a global representation of style from a given performance. We show it is possible to combine this global embedding with other temporally distributed embeddings, enabling improved control over the separate aspects of performance style and and melody. Empirically, we demonstrate the effectiveness of our method on a variety of music generation tasks on the MAESTRO dataset and an internal, 10,000+ hour dataset of piano performances, where we achieve improvements in terms of log-likelihood and mean listening scores as compared to relevant baselines.""","""Main content: Blind review #3 summarizes it well: This paper presents a technique for encoding the high level style of pieces of symbolic music. The music is represented as a variant of the MIDI format. The main strategy is to condition a Music Transformer architecture on this global style embedding. Additionally, the Music Transformer model is also conditioned on a combination of both style and melody embeddings to try and generate music similar to the conditioning melody but in the style of the performance embedding. -- Discussion: The reviewers questioned the novelty. Blind review #2 wrote: ""Overall, I think the paper presents an interesting application and parts of it are well written, however I have concerns with the technical presentation in parts of the paper and some of the methodology. Firstly, I think the algorithmic novelty in the paper is fairly limited. The performance conditioning vector is generated by an additional encoding transformer, compared to the Music Transformer paper (Huang et. al. 2019b). However, the limited algorithmic novelty is not the main concern. The authors also mention an internal dataset of music audio and transcriptions, which can be a major contribution to the music information retrieval (MIR) community. However it is not clear if this dataset will be publicly released or is only for internal experiments."" However, after revision, the same reviewer has upgraded the review to a weak accept, as the authors wrote ""We emphasize that our goal is to provide users with more fine-grained control over the outputs generated by a seq2seq language model. Despite its simplicity, our method is able to learn a global representation of style for a Transformer, which to the best of our knowledge is a novel contribution for music generation. Additionally, we can synthesize an arbitrary melody into the style of another performance, and we demonstrate the effectiveness of our results both quantitatively (metrics) and qualitatively (interpolations, samples, and user listening studies)."" -- Recommendation and justification: This paper is borderline for the reasons above, and due to the large number of strong papers, is not accepted at this time. As one comment, this work might actually be more suitable for a more specialized conference like ISMIR, as its novel contribution is more to music applications than to fundamental machine learning approaches."""
634,"""Cascade Style Transfer""","['style transfer', 'cascade', 'quality', 'flexibility', 'domain-independent', 'serial', 'parallel']","""Recent studies have made tremendous progress in style transfer for specific domains, e.g., artistic, semantic and photo-realistic. However, existing approaches have limited flexibility in extending to other domains, as different style representations are often specific to particular domains. This also limits the stylistic quality. To address these limitations, we propose Cascade Style Transfer, a simple yet effective framework that can improve the quality and flexibility of style transfer by combining multiple existing approaches directly. Our cascade framework contains two architectures, i.e., Serial Style Transfer (SST) and Parallel Style Transfer (PST). The SST takes the stylized output of one method as the input content of the others. This could help improve the stylistic quality. The PST uses a shared backbone and a loss module to optimize the loss functions of different methods in parallel. This could help improve the quality and flexibility, and guide us to find domain-independent approaches. Our experiments are conducted on three major style transfer domains: artistic, semantic and photo-realistic. In all these domains, our methods have shown superiority over the state-of-the-art methods.""","""This work combines style transfer approaches either in a serial or parallel fashion, and shows that the combination of methods is more powerful than isolated methods. The novelty in this work is extremely limited and not offset by insightful analysis or very thorough experiments, given that most results are qualitative. Authors have not provided a public response. Therefore, we recommend rejection."""
635,"""Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem""","['Depth-Width trade-offs', 'ReLU networks', 'chaos theory', 'Sharkovsky Theorem', 'dynamical systems']","""Understanding the representational power of Deep Neural Networks (DNNs) and how their structural properties (e.g., depth, width, type of activation unit) affect the functions they can compute, has been an important yet challenging question in deep learning and approximation theory. In a seminal paper, Telgarsky high- lighted the benefits of depth by presenting a family of functions (based on sim- ple triangular waves) for which DNNs achieve zero classification error, whereas shallow networks with fewer than exponentially many nodes incur constant error. Even though Telgarskys work reveals the limitations of shallow neural networks, it doesnt inform us on why these functions are difficult to represent and in fact he states it as a tantalizing open question to characterize those functions that cannot be well-approximated by smaller depths. In this work, we point to a new connection between DNNs expressivity and Sharkovskys Theorem from dynamical systems, that enables us to characterize the depth-width trade-offs of ReLU networks for representing functions based on the presence of a generalized notion of fixed points, called periodic points (a fixed point is a point of period 1). Motivated by our observation that the triangle waves used in Telgarskys work contain points of period 3 a period that is special in that it implies chaotic behaviour based on the celebrated result by Li-Yorke we proceed to give general lower bounds for the width needed to represent periodic functions as a function of the depth. Technically, the crux of our approach is based on an eigenvalue analysis of the dynamical systems associated with such functions.""","""The article is concerned with depth width tradeoffs in the representation of functions with neural networks. The article presents connections between expressivity of neural networks and dynamical systems, and obtains lower bounds on the width to represent periodic functions as a function of the depth. These are relevant advances and new perspectives for the theoretical study of neural networks. The reviewers were very positive about this article. The authors' responses also addressed comments from the initial reviews. """
636,"""A Uniform Generalization Error Bound for Generative Adversarial Networks""","['GANs', 'Uniform Generalization Bound', 'Deep Learning', 'Weight normalization']",""" This paper focuses on the theoretical investigation of unsupervised generalization theory of generative adversarial networks (GANs). We first formulate a more reasonable definition of general error and generalization bounds for GANs. On top of that, we establish a bound for generalization error with a fixed generator in a general weight normalization context. Then, we obtain a width-independent bound by applying pseudo-formula and spectral norm weight normalization. To better understand the unsupervised model, GANs, we establish the generalization bound, which uniformly holds with respect to the choice of generators. Hence, we can explain how the complexity of discriminators and generators contribute to generalization error. For pseudo-formula and spectral weight normalization, we provide explicit guidance on how to design parameters to train robust generators. Our numerical simulations also verify that our generalization bound is reasonable.""","""The authors received reviews from true experts and these experts felt the paper was not up to the standards of ICLR. Reviewer 3 and Reviewer 1 disagree as to whether the new notion of generalization error is appropriate. I think both cases can be defended. I think the authors should aim to sharpen their argument in this regard.Several reviewers at one point remark that the results follow from standard techniques: shouldn't this be the case? I believe the actual criticism being made is that the value of these new results do not go above and beyond existing ones. There is also the matter of what value should be attributed to technical developments on their own. On this matter, the reviewers seem to agree that the derivations lean heavily on prior work. """
637,"""The Dynamics of Signal Propagation in Gated Recurrent Neural Networks""","['recurrent neural networks', 'theory of deep learning']","""Training recurrent neural networks (RNNs) on long sequence tasks is plagued with difficulties arising from the exponential explosion or vanishing of signals as they propagate forward or backward through the network. Many techniques have been proposed to ameliorate these issues, including various algorithmic and architectural modifications. Two of the most successful RNN architectures, the LSTM and the GRU, do exhibit modest improvements over vanilla RNN cells, but they still suffer from instabilities when trained on very long sequences. In this work, we develop a mean field theory of signal propagation in LSTMs and GRUs that enables us to calculate the time scales for signal propagation as well as the spectral properties of the state-to-state Jacobians. By optimizing these quantities in terms of the initialization hyperparameters, we derive a novel initialization scheme that eliminates or reduces training instabilities. We demonstrate the efficacy of our initialization scheme on multiple sequence tasks, on which it enables successful training while a standard initialization either fails completely or is orders of magnitude slower. We also observe a beneficial effect on generalization performance using this new initialization.""","""Using ideas from mean-field theory and statistical mechanics, this paper derives a principled way to analyze signal propagation through gated recurrent networks. This analysis then allows for the development of a novel initialization scheme capable of mitigating subsequent training instabilities. In the end, while reviewers appreciated some of the analytical insights provided, two still voted for rejection while one chose accept after the rebuttal and discussion period. And as AC for this paper, I did not find sufficient evidence to overturn the reviewer majority for two primary reasons. First, the paper claims to demonstrate the efficacy of the proposed initialization scheme on multiple sequence tasks, but the presented experiments do not really involve representative testing scenarios as pointed out by reviewers. Given that this is not a purely theoretical paper, but rather one suggesting practically-relevant initializations for RNNs, it seems important to actually demonstrate this on sequence data people in the community actually care about. In fact, even the reviewer who voted for acceptance conceded that the presented results were not too convincing (basically limited to toy situations involving Cifar10 and MNIST data). Secondly, all reviewers found parts of the paper difficult to digest, and while a future revision has been promised to provide clarity, no text was actually changed making updated evaluations problematic. Note that the rebuttal mentions that the paper is written in a style that is common in the physics literature, and this appears to be a large part of the problem. ICLR is an ML conference and in this respect, to the extent possible it is important to frame relevant papers in an accessible way such that a broader segment of this community can benefit from the key message. At the very least, this will ensure that the reviewer pool is more equipped to properly appreciate the contribution. My own view is that this work can be reframed in such a way that it could be successfully submitted to another ML conference in the future."""
638,"""GLAD: Learning Sparse Graph Recovery""","['Meta learning', 'automated algorithm design', 'learning structure recovery', 'Gaussian graphical models']","""Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications. A popular formulation of the problem is an pseudo-formula regularized maximum likelihood estimation. Many convex optimization algorithms have been designed to solve this formulation to recover the graph structure. Recently, there is a surge of interest to learn algorithms directly based on data, and in this case, learn to map empirical covariance to the sparse precision matrix. However, it is a challenging task in this case, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may contain many parameters. We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data.""",""" The paper proposes a neural network architecture to address the problem of estimating a sparse precision matrix from data, which can be used for inferring conditional independence if the random variables are gaussian. The authors propose an Alternating Minimisation procedure for solving the l1 regularized maximum likelihood which can be unrolled and parameterized. This method is shown to converge faster at inference time than other methods and it is also far more effective in terms of training time compared to an existing data driven method. Reviewers had good initial impressions of this paper, pointing out the significance of the idea and the soundness of the setup. After a productive rebuttal phase the authors significantly improved the readibility and successfully clarified the remaining concerns of the reviewers. This AC thus recommends acceptance. """
639,"""Jacobian Adversarially Regularized Networks for Robustness""","['adversarial examples', 'robust machine learning', 'deep learning']","""Adversarial examples are crafted with imperceptible perturbations with the intent to fool neural networks. Against such attacks, adversarial training and its variants stand as the strongest defense to date. Previous studies have pointed out that robust models that have undergone adversarial training tend to produce more salient and interpretable Jacobian matrices than their non-robust counterparts. A natural question is whether a model trained with an objective to produce salient Jacobian can result in better robustness. This paper answers this question with affirmative empirical results. We propose Jacobian Adversarially Regularized Networks (JARN) as a method to optimize the saliency of a classifier's Jacobian by adversarially regularizing the model's Jacobian to resemble natural training images. Image classifiers trained with JARN show improved robust accuracy compared to standard models on the MNIST, SVHN and CIFAR-10 datasets, uncovering a new angle to boost robustness without using adversarial training.""","""This paper extends previous observations (Tsipars, Etmann etc) in relations between Jacobian and robustness and directly train a model that improves robustness using Jacobians that look like images. The questions regarding computation time (suggested by two reviewers, including one of the most negative reviewers) are appropriately addressed by the authors (added experiments). Reviewers agree that the idea is novel, and some conjectured why the papers idea is a very sensible one. We think this paper would be an interest for ICLR readers. Please address any remaining comments from the reviewers before the final copy. """
640,"""LabelFool: A Trick in the Label Space""","['Adversarial attack', 'LabelFool', 'Imperceptibility', 'Label space']","""It is widely known that well-designed perturbations can cause state-of-the-art machine learning classifiers to mis-label an image, with sufficiently small perturbations that are imperceptible to the human eyes. However, by detecting the inconsistency between the image and wrong label, the human observer would be alerted of the attack. In this paper, we aim to design attacks that not only make classifiers generate wrong labels, but also make the wrong labels imperceptible to human observers. To achieve this, we propose an algorithm called LabelFool which identifies a target label similar to the ground truth label and finds a perturbation of the image for this target label. We first find the target label for an input image by a probability model, then move the input in the feature space towards the target label. Subjective studies on ImageNet show that in the label space, our attack is much less recognizable by human observers, while objective experimental results on ImageNet show that we maintain similar performance in the image space as well as attack rates to state-of-the-art attack algorithms.""","""Thanks for the discussion with reviewers, which improved our understanding of your paper significantly. However, we concluded that this paper is still premature to be accepted to ICLR2020. We hope that the detailed comments by the reviewers help improve your paper for potential future submission."""
641,"""Gradient Surgery for Multi-Task Learning""","['multi-task learning', 'deep learning']","""While deep learning and deep reinforcement learning systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge, particularly as these algorithms learn individual tasks from scratch. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single task learning are not fully understood. Motivated by the insight that gradient interference causes optimization challenges, we develop a simple and general approach for avoiding interference between gradients from different tasks, by altering the gradients through a technique we refer to as gradient surgery. We propose a form of gradient surgery that projects the gradient of a task onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task reinforcement learning problems, we find that this approach leads to substantial gains in efficiency and performance. Further, it can be effectively combined with previously-proposed multi-task architectures for enhanced performance in a model-agnostic way.""","""This paper presents a method for improving optimization in multi-task learning settings by minimizing the interference of gradients belonging to different tasks. While the idea is simple and well-motivated, the reviewers felt that the problem is still not studied adequately. The proofs are useful, but there is still a gap when it comes to practicality. The rebuttal clarified some of the concerns, but still there is a feeling that (a) the main assumptions for the method need to be demonstrated in a more convincing way, e.g. by boosting the experiments as suggested with other MTL methods (b) by placing the paper better in the current literature and minimizing the gap between proofs/underlying assumptions and practical usefulness. """
642,"""Non-linear System Identification from Partial Observations via Iterative Smoothing and Learning""","['System Identification', 'Dynamical Systems', 'Partial Observations', 'Non-linear Programming', 'Expectation Maximization', 'Neural Networks']","""System identification is the process of building a mathematical model of an unknown system from measurements of its inputs and outputs. It is a key step for model-based control, estimator design, and output prediction. This work presents an algorithm for non-linear offline system identification from partial observations, i.e. situations in which the system's full-state is not directly observable. The algorithm presented, called SISL, iteratively infers the system's full state through non-linear optimization and then updates the model parameters. We test our algorithm on a simulated system of coupled Lorenz attractors, showing our algorithm's ability to identify high-dimensional systems that prove intractable for particle-based approaches. We also use SISL to identify the dynamics of an aerobatic helicopter. By augmenting the state with unobserved fluid states, we learn a model that predicts the acceleration of the helicopter better than state-of-the-art approaches.""","""The paper is about nonlinear system identification in an EM-style learning framework. The idea is to use nonlinear programming for the E step (finding a MAP estimate) and then refine the model parameters. In flavor, this approach is similar to the work by Roweis and Ghahramani. However, this paper does not offer any new insights whatsoever and the (very short) methods section arrives at proposing to compute the maximum a posteriori estimate (eq. 5). While the motivation for this given in the paper is a bit hard to understand it is of course a very well-known and useful estimator. Besides the maximum likelihood estimator this is one of the most commonly used point estimators, see any textbook on statistical signal processing. There has been quite a bit of work in the signal processing community over the last 10 years, and a good overview can be found here: pseudo-url This should give evidence that this is indeed a standard way of solving the problem and it does work really well. Given that we have so fast and good optimizers these days it is common to solve Kalman filtering/smoothing problems via this optimization problem. The paper does not contain any analysis at all. The experiments do of course show that the method works (when there is low noise). Again, we know very well that the MAP estimate is a decent estimator for unimodal problems. The MAP estimator can also be made to work well for noisy situations. As for the comments that the sequential Monte Carlo methods do not work in higher dimensions that is indeed true. However, there are now algorithms that work in much higher dimensions than those considered by the authors of this paper, e.g. pseudo-url which also contains an up-to-date survey on the topic. Furthermore, when it comes to particle smoothing there are also much more efficient smoothers than 10 years ago. The area of particle smoothing has also evolved rapidly over the past years. Summary: The paper makes use of the well-known MAP estimator for learning nonlinear dynamical systems (states and parameters). This is by now a standard technique in signal processing. There are several throw-away comments on SMC that are not valid and that are not grounded in the intense research of that field over the past decade. """
643,"""MxPool: Multiplex Pooling for Hierarchical Graph Representation Learning""","['GNN', 'graph pooling', 'graph representation learning']","""Graphs are known to have complicated structures and have myriad applications. How to utilize deep learning methods for graph classification tasks has attracted considerable research attention in the past few years. Two properties of graph data have imposed significant challenges on existing graph learning techniques. (1) Diversity: each graph has a variable size of unordered nodes and diverse node/edge types. (2) Complexity: graphs have not only node/edge features but also complex topological features. These two properties motivate us to use multiplex structure to learn graph features in a diverse way. In this paper, we propose a simple but effective approach, MxPool, which concurrently uses multiple graph convolution networks and graph pooling networks to build hierarchical learning structure for graph representation learning tasks. Our experiments on numerous graph classification benchmarks show that our MxPool has marked superiority over other state-of-the-art graph representation learning methods. For example, MxPool achieves 92.1% accuracy on the D&D dataset while the second best method DiffPool only achieves 80.64% accuracy.""","""All three reviewers are consistently negative on this paper. Thus a reject is recommended."""
644,"""Pareto Optimality in No-Harm Fairness""","['Fairness', 'Fairness in Machine Learning', 'No-Harm Fairness']","""Common fairness definitions in machine learning focus on balancing various notions of disparity and utility. In this work we study fairness in the context of risk disparity among sub-populations. We introduce the framework of Pareto-optimal fairness, where the goal of reducing risk disparity gaps is secondary only to the principle of not doing unnecessary harm, a concept that is especially applicable to high-stakes domains such as healthcare. We provide analysis and methodology to obtain maximally-fair no-harm classifiers on finite datasets. We argue that even in domains where fairness at cost is required, no-harm fairness can prove to be the optimal first step. This same methodology can also be applied to any unbalanced classification task, where we want to dynamically equalize the misclassification risks across outcomes without degrading overall performance any more than strictly necessary. We test the proposed methodology on real case-studies of predicting income, ICU patient mortality, classifying skin lesions from images, and assessing credit risk, demonstrating how the proposed framework compares favorably to other traditional approaches.""","""This manuscript outlines procedures to address fairness as measured by disparity in risk across groups. The manuscript is primarily motivated by methods that can achieve ""no-harm"" fairness, i.e., achieving fairness without increasing the risk in subgroups. The reviewers and AC agree that the problem studied is timely and interesting. However, in reviews and discussion, the reviewers noted issues with clarity of the presentation, and sufficient justification of the results. The consensus was that the manuscript in its current state is borderline, and would have to be significantly improved in terms of clarity of the discussion, and possibly improved methods that result in more convincing results. """
645,"""wMAN: WEAKLY-SUPERVISED MOMENT ALIGNMENT NETWORK FOR TEXT-BASED VIDEO SEGMENT RETRIEVAL""","['vision', 'language', 'video moment retrieval']","""Given a video and a sentence, the goal of weakly-supervised video moment retrieval is to locate the video segment which is described by the sentence without having access to temporal annotations during training. Instead, a model must learn how to identify the correct segment (i.e. moment) when only being provided with video-sentence pairs. Thus, an inherent challenge is automatically inferring the latent correspondence between visual and language representations. To facilitate this alignment, we propose our Weakly-supervised Moment Alignment Network (wMAN) which exploits a multi-level co-attention mechanism to learn richer multimodal representations. The aforementioned mechanism is comprised of a Frame-By-Word interaction module as well as a novel Word-Conditioned Visual Graph (WCVG). Our approach also incorporates a novel application of positional encodings, commonly used in Transformers, to learn visual-semantic representations that contain contextual information of their relative positions in the temporal sequence through iterative message-passing. Comprehensive experiments on the DiDeMo and Charades-STA datasets demonstrate the effectiveness of our learned representations: our combined wMAN model not only outperforms the state-of-the-art weakly-supervised method by a significant margin but also does better than strongly-supervised state-of-the-art methods on some metrics.""","""This paper proposes a method for aligning an input text with the frames in a video that correspond to what the text describes in a weakly supervised way. The main technical contribution of the paper is the use of co-attention at different abstraction levels. Among the four reviewers, one reviewer advocates for the paper while the others find this paper to be a borderline reject paper. Reviewer3 who was initially positive about the paper, during the discussion period, expressed that he/she wants to downgrade his/her rating to weak reject after reading the other reviewers' comments and concerns. The main concern of the reviewers is that the contribution of the paper incremental, particularly since the idea of co-attention has been used in many different area in other context. The authors responded to this in the rebuttal that the proposed approach incorporate different components such as Positional Encodings and is different from prior work, and that they experimentally perform superior compared to other co-attention usages such as LCGN. Although the AC understands the authors response, the majority of the reviewers are still not fully convinced about the contribution and their opinion stay opposed to the paper."""
646,"""Neural Machine Translation with Universal Visual Representation""","['Neural Machine Translation', 'Visual Representation', 'Multimodal Machine Translation', 'Language Representation']","""Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations. In this paper, we present a universal visual representation learned over the monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs, thereby extending image applicability in NMT. In detail, a group of images with similar topics to the source sentence will be retrieved from a light topic-image lookup table learned over the existing sentence-image pairs, and then is encoded as image representations by a pre-trained ResNet. An attention layer with a gated weighting is to fuse the visual information and text information as input to the decoder for predicting target translations. In particular, the proposed method enables the visual information to be integrated into large-scale text-only NMT in addition to the multimodel NMT. Experiments on four widely used translation datasets, including the WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, show that the proposed approach achieves significant improvements over strong baselines.""","""This paper proposes using visual representations learned in a monolingual setting with image annotations into machine translation. Their approach obviates the need to have bilingual sentences aligned with image annotations, a very restricted resource. An attention layer allows the transformer to incorporate a topic-image lookup table. Their approach achieves significant improvements over strong baselines. The reviewers and the authors engaged in substantive discussions. This is a strong paper which should be included in ICLR. """
647,"""AHash: A Load-Balanced One Permutation Hash""","['Data Representation', 'Probabilistic Algorithms']","""Minwise Hashing (MinHash) is a fundamental method to compute set similarities and compact high-dimensional data for efficient learning and searching. The bottleneck of MinHash is computing k (usually hundreds) MinHash values. One Permutation Hashing (OPH) only requires one permutation (hash function) to get k MinHash values by dividing elements into k bins. One drawback of OPH is that the load of the bins (the number of elements in a bin) could be unbalanced, which leads to the existence of empty bins and false similarity computation. Several strategies for densification, that is, filling empty bins, have been proposed. However, the densification is just a remedial strategy and cannot eliminate the error incurred by the unbalanced load. Unlike the densification to fill the empty bins after they undesirably occur, our design goal is to balance the load so as to reduce the empty bins in advance. In this paper, we propose a load-balanced hashing, Amortization Hashing (AHash), which can generate as few empty bins as possible. Therefore, AHash is more load-balanced and accurate without hurting runtime efficiency compared with OPH and densification strategies. Our experiments on real datasets validate the claim. All source codes and datasets have been provided as Supplementary Materials and released on GitHub anonymously.""","""This paper proposes a load-balanced hashing called AHash that balances the load of hashing bins to avoid empty bins that appear in some minwise hashing methods. Reviewers found the work interesting and well-motivated. Authors addressed some clarity issues in their rebuttal. However the impact appeared quite limited, and the experimental validation limited to few realistic experiments that did not alleviate this concern. We thus recommend rejection."""
648,"""High Fidelity Speech Synthesis with Adversarial Networks""","['texttospeech', 'speechsynthesis', 'audiosynthesis', 'gans', 'generativeadversarialnetworks', 'implicitgenerativemodels']","""Generative adversarial networks have seen rapid development in recent years and have led to remarkable improvements in generative modelling of images. However, their application in the audio domain has received limited attention, and autoregressive models, such as WaveNet, remain the state of the art in generative modelling of audio signals such as human speech. To address this paucity, we introduce GAN-TTS, a Generative Adversarial Network for Text-to-Speech. Our architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes. The discriminators analyse the audio both in terms of general realism, as well as how well the audio corresponds to the utterance that should be pronounced. To measure the performance of GAN-TTS, we employ both subjective human evaluation (MOS - Mean Opinion Score), as well as novel quantitative metrics (Frchet DeepSpeech Distance and Kernel DeepSpeech Distance), which we find to be well correlated with MOS. We show that GAN-TTS is capable of generating high-fidelity speech with naturalness comparable to the state-of-the-art models, and unlike autoregressive models, it is highly parallelisable thanks to an efficient feed-forward generator. Listen to GAN-TTS reading this abstract at pseudo-url""","""The authors design a GAN-based text-to-speech synthesis model that performs competitively with state-of-the-art synthesizers. The reviewers and I agree that this appears to be the first really successful effort at GAN-based synthesis. Additional positives are that the model is designed to be highly parallelisable, and that the authors also propose several automatic measures of performance in addition to reporting human mean opinion scores. The automatic measures correlate well (though far from perfectly) with human judgments, and in any case are a nice contribution to the area of evaluation of generative models. It would be even more convincing if the authors presented human A/B forced-choice test results (in addition to the mean opinion scores), which are often included in speech synthesis evaluation, but this is a minor quibble."""
649,"""Off-policy Multi-step Q-learning""","['Multi-step Learning', 'Off-policy Learning', 'Q-learning']","""In the past few years, off-policy reinforcement learning methods have shown promising results in their application for robot control. Deep Q-learning, however, still suffers from poor data-efficiency which is limiting with regard to real-world applications. We follow the idea of multi-step TD-learning to enhance data-efficiency while remaining off-policy by proposing two novel Temporal-Difference formulations: (1) Truncated Q-functions which represent the return for the first n steps of a policy rollout and (2) Shifted Q-functions, acting as the farsighted return after this truncated rollout. We prove that the combination of these short- and long-term predictions is a representation of the full return, leading to the Composite Q-learning algorithm. We show the efficacy of Composite Q-learning in the tabular case and compare our approach in the function-approximation setting with TD3, Model-based Value Expansion and TD3(Delta), which we introduce as an off-policy variant of TD(Delta). We show on three simulated robot tasks that Composite TD3 outperforms TD3 as well as state-of-the-art off-policy multi-step approaches in terms of data-efficiency.""","""The authors propose TD updates for Truncated Q-functions and Shifted Q-functions, reflecting short- and long-term predictions, respectively. They show that they can be combined to form an estimate of the full-return, leading to a Composite Q-learning algorithm. They claim to demonstrated improved data-efficiency in the tabular setting and on three simulated robot tasks. All of the reviewers found the ideas in the paper interesting, however, based on the issues raised by Reviewer 3, everyone agreed that substantial revisions to the paper are necessary to properly incorporate the new results. As a result, I am recommending rejection for this submission at this time. I encourage the authors to incorporate the feedback from the reviewers, and believe that after that is done, the paper will be a strong submission. """
650,"""Localised Generative Flows""","['Deep generative models', 'normalizing flows', 'variational inference']","""We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem. LGFs are composed of stacked continuous mixtures of bijections, which enables each bijection to learn a local region of the target rather than its entirety. Our method is a generalisation of existing flow-based methods, which can be used without modification as the basis for an LGF model. Unlike normalising flows, LGFs do not permit exact computation of log likelihoods, but we propose a simple variational scheme that performs well in practice. We show empirically that LGFs yield improved performance across a variety of common density estimation tasks.""","""This paper proposes to overcome some fundamental limitations of normalizing flows by introducing auxiliary continuous latent variables. While the problem this paper is trying to address is mathematically legitimate, there is no strong evidence that this is a relevant problem in practice. Moreover, the proposed solution is not entirely novel, converting the flow in a latent-variable model. Overall, I believe this paper will be of minor relevance to the ICLR community."""
651,"""Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning""",[],"""The posteriors over neural network weights are high dimensional and multimodal. Each mode typically characterizes a meaningfully different representation of the data. We develop Cyclical Stochastic Gradient MCMC (SG-MCMC) to automatically explore such distributions. In particular, we propose a cyclical stepsize schedule, where larger steps discover new modes, and smaller steps characterize each mode. We prove non-asymptotic convergence theory of our proposed algorithm. Moreover, we provide extensive experimental results, including ImageNet, to demonstrate the effectiveness of cyclical SG-MCMC in learning complex multimodal distributions, especially for fully Bayesian inference with modern deep neural networks.""","""This paper proposes a novel stochastic gradient Markov chain Monte Carlo method incorporating a cyclical step size schedule (cyclical SG-MCMC). The authors argue that this step size schedule allows the sampler to cross modes (when the step size is large) and locally explore modes (when the step size is smaller). SG-MCMC is a very promising method for Bayesian deep learning as it is both scalable and easily to incorporate into existing models. However, the stochastic setting often leads to the sampler getting stuck in a local mode due to a requirement of a small step size (which itself is often due to leaving out the Metropolis-Hastings accept / reject step). The cyclic learning rate intuitively helps the sampler escape local modes. This property is demonstrated on synthetic problems in comparison to existing SG-MCMC baselines. The authors demonstrate improved negative log likelihood on larger scale deep learning benchmarks, which is appreciated as the related literature often restricts experiments to small scale problems. The reviewers all found the paper compelling and argued for acceptance and thus the recommendation is to accept. Some questions remain for future work. E.g. all experiments were performed using a very low temperature, which implies that the methods are not sampling from the true Bayesian posterior. Why is such a low temperature needed for reasonable performance? In any case a very nice paper."""
652,"""Deep symbolic regression""","['symbolic regression', 'reinforcement learning', 'automated machine learning']","""Discovering the underlying mathematical expressions describing a dataset is a core challenge for artificial intelligence. This is the problem of symbolic regression. Despite recent advances in training neural networks to solve complex tasks, deep learning approaches to symbolic regression are lacking. We propose a framework that combines deep learning with symbolic regression via a simple idea: use a large model to search the space of small models. More specifically, we use a recurrent neural network to emit a distribution over tractable mathematical expressions, and employ reinforcement learning to train the network to generate better-fitting expressions. Our algorithm significantly outperforms standard genetic programming-based symbolic regression in its ability to exactly recover symbolic expressions on a series of benchmark problems, both with and without added noise. More broadly, our contributions include a framework that can be applied to optimize hierarchical, variable-length objects under a black-box performance metric, with the ability to incorporate a priori constraints in situ.""","""This paper suggests using RNN and policy gradient methods for improving symbolic regression. The reviewers could not reach a consensus, and due to concerns about the clarity of the paper and the extensiveness of the experimental results, the paper does not appear to currently meet the level of publication. Also, while not mentioned in the reviews, there appears to be some work on symbolic regression aided by deep learning, (see for example, pseudo-url, which was found by searching ""symbolic regression deep learning"")---I would thus also recommend the authors do a more thorough literature search for future revisions. """
653,"""Discrete InfoMax Codes for Meta-Learning""","['meta-learning', 'generalization', 'discrete representations']","""This paper analyzes how generalization works in meta-learning. Our core contribution is an information-theoretic generalization bound for meta-learning, which identifies the expressivity of the task-specific learner as the key factor that makes generalization to new datasets difficult. Taking inspiration from our bound, we present Discrete InfoMax Codes (DIMCO), a novel meta-learning model that trains a stochastic encoder to output discrete codes. Experiments show that DIMCO requires less memory and less time for similar performance to previous metric learning methods and that our method generalizes particularly well in a challenging small-data setting.""","""The reviewers were unanimous that this submission is not ready for publication at ICLR in its current form. Concerns raised included that the method was not sufficiently general, including in choice of experiments reported, and the lack of discussion of some lines of significantly related work."""
654,"""Low-Resource Knowledge-Grounded Dialogue Generation""",[],"""Responding with knowledge has been recognized as an important capability for an intelligent conversational agent. Yet knowledge-grounded dialogues, as training data for learning such a response generation model, are difficult to obtain. Motivated by the challenge in practice, we consider knowledge-grounded dialogue generation under a natural assumption that only limited training examples are available. In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model. By this means, the major part of the model can be learned from a large number of ungrounded dialogues and unstructured documents, while the remaining small parameters can be well fitted using the limited training examples. Evaluation results on two benchmarks indicate that with only pseudo-formula training data, our model can achieve the state-of-the-art performance and generalize well on out-of-domain knowledge. ""","""The paper considers the problem of knowledge-grounded dialogue generation with low resources. The authors propose to disentangle the model into three components that can be trained on separate data, and achieve SOTA on three datasets. The reviewers agree that this is a well-written paper with a good idea, and strong empirical results, and I happily recommend acceptance."""
655,"""Detecting Extrapolation with Local Ensembles""","['extrapolation', 'reliability', 'influence functions', 'laplace approximation', 'ensembles', 'Rashomon set']","""We present local ensembles, a method for detecting extrapolation at test time in a pre-trained model. We focus on underdetermination as a key component of extrapolation: we aim to detect when many possible predictions are consistent with the training data and model class. Our method uses local second-order information to approximate the variance of predictions across an ensemble of models from the same class. We compute this approximation by estimating the norm of the component of a test point's gradient that aligns with the low-curvature directions of the Hessian, and provide a tractable method for estimating this quantity. Experimentally, we show that our method is capable of detecting when a pre-trained model is extrapolating on test data, with applications to out-of-distribution detection, detecting spurious correlates, and active learning.""","""This paper presents an ensembling approach to detect underdetermination for extrapolating to test points. The problem domain is interesting and the approach is simple and useful. While reviewers were positive about the work, they raised several points for improvement. The authors are strongly encouraged to include the discussion here in the final version."""
656,"""On the Global Convergence of Training Deep Linear ResNets""",[],"""We study the convergence of gradient descent (GD) and stochastic gradient descent (SGD) for training pseudo-formula -hidden-layer linear residual networks (ResNets). We prove that for training deep residual networks with certain linear transformations at input and output layers, which are fixed throughout training, both GD and SGD with zero initialization on all hidden weights can converge to the global minimum of the training loss. Moreover, when specializing to appropriate Gaussian random linear transformations, GD and SGD provably optimize wide enough deep linear ResNets. Compared with the global convergence result of GD for training standard deep linear networks \citep{du2019width}, our condition on the neural network width is sharper by a factor of L) where pseudo-formula denotes the condition number of the covariance matrix of the training data. We further propose a modified identity input and output transformations, and show that a pseudo-formula -wide neural network is sufficient to guarantee the global convergence of GD/SGD, where pseudo-formula are the input and output dimensions respectively.""","""This paper provides further analysis of convergence in deep linear networks. I recommend acceptance. """
657,"""Large Batch Optimization for Deep Learning: Training BERT in 76 minutes""","['large-batch optimization', 'distributed training', 'fast optimizer']","""Training large deep neural networks on massive datasets is computationally very challenging. There has been recent surge in interest in using large batch stochastic optimization methods to tackle this issue. The most prominent algorithm in this line of research is LARS, which by employing layerwise adaptive learning rates trains ResNet on ImageNet in a few minutes. However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks. In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches. Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning. In particular, for BERT training, our optimizer enables use of very large batch sizes of 32868 without any degradation of performance. By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes.""","""This paper presents a range of methods for over-coming the challenges of large-batch training with transformer models. While one reviewer still questions the utility of training with such large numbers of devices, there is certainly a segment of the community that focuses on large-batch training, and the ideas in this paper will hopefully find a range of uses. """
658,"""Keyframing the Future: Discovering Temporal Hierarchy with Keyframe-Inpainter Prediction""","['representation learning', 'variational inference', 'video generation', 'temporal hierarchy']","""To flexibly and efficiently reason about temporal sequences, abstract representations that compactly represent the important information in the sequence are needed. One way of constructing such representations is by focusing on the important events in a sequence. In this paper, we propose a model that learns both to discover such key events (or keyframes) as well as to represent the sequence in terms of them. We do so using a hierarchical Keyframe-Inpainter (KeyIn) model that first generates keyframes and their temporal placement and then inpaints the sequences between keyframes. We propose a fully differentiable formulation for efficiently learning the keyframe placement. We show that KeyIn finds informative keyframes in several datasets with diverse dynamics. When evaluated on a planning task, KeyIn outperforms other recent proposals for learning hierarchical representations.""","""The paper is interesting in video prediction, introducing a hierarchical approach: keyframes are first predicted, then intermediate frames are generated. While it is acknowledge the authors do a step in the right direction, several issues remain: (i) the presentation of the paper could be improved (ii) experiments are not convincing enough (baselines, images not realistic enough, marginal improvements) to validate the viability of the proposed approach over existing ones. """
659,"""An Empirical Study on Post-processing Methods for Word Embeddings""","['word vectors', 'post-processing method', 'centralised kernel alignment', 'shrinkage']","""Word embeddings learnt from large corpora have been adopted in various applications in natural language processing and served as the general input representations to learning systems. Recently, a series of post-processing methods have been proposed to boost the performance of word embeddings on similarity comparison and analogy retrieval tasks, and some have been adapted to compose sentence representations. The general hypothesis behind these methods is that by enforcing the embedding space to be more isotropic, the similarity between words can be better expressed. We view these methods as an approach to shrink the covariance/gram matrix, which is estimated by learning word vectors, towards a scaled identity matrix. By optimising an objective in the semi-Riemannian manifold with Centralised Kernel Alignment (CKA), we are able to search for the optimal shrinkage parameter, and provide a post-processing method to smooth the spectrum of learnt word vectors which yields improved performance on downstream tasks.""","""This paper explores a post-processing method for word vectors to ""smooth the spectrum,"" and show improvements on some downstream tasks. Reviewers had some questions about the strength of the results, and the results on words of differing frequency. The reviewers also have comments on the clarity of the paper, as well as the exposition of some of the methods. Also, for future submissions to ICLR and other such conferences, it is more typical to address the authors comments in a direct response rather than to make changes to the document without summarizing and pointing reviewers to these changes. Without direction about what was changed or where to look, there is a lot of burden being placed on the reviewers to find your responses to their comments."""
660,"""INTERNAL-CONSISTENCY CONSTRAINTS FOR EMERGENT COMMUNICATION""","['Emergent Communication', 'Speaker-Listener Models']","""When communicating, humans rely on internally-consistent language representations. That is, as speakers, we expect listeners to behave the same way we do when we listen. This work proposes several methods for encouraging such internal consistency in dialog agents in an emergent communication setting. We consider two hypotheses about the effect of internal-consistency constraints: 1) that they improve agents ability to refer to unseen referents, and 2) that they improve agents ability to generalize across communicative roles (e.g. performing as a speaker de- spite only being trained as a listener). While we do not find evidence in favor of the former, our results show significant support for the latter.""","""This work examines how internal consistency objectives can help emergent communication, namely through possibly improving ability to refer to unseen referents and to generalize across communicative roles. Experimental results support the second hypothesis but not the first. Reviewers agree that this is an exciting object of study, but had reservations about the rationale for the first hypothesis (which was ultimately disproven), and for how the second hypothesis was investigated (lack of ablations to tease apart which part was most responsible for improvement, unsatisfactory framing). These concerns were not fully addressed by the response. While the paper is very promising and the direction quite interesting, this cannot in its current form be recommended for acceptance. We encourage authors to carefully examine reviewers' suggestions to improve their work for submission to another venue."""
661,"""Collaborative Training of Balanced Random Forests for Open Set Domain Adaptation""",[],"""In this paper, we introduce a collaborative training algorithm of balanced random forests for domain adaptation tasks which can avoid the overfitting problem. In real scenarios, most domain adaptation algorithms face the challenges from noisy, insufficient training data. Moreover in open set categorization, unknown or misaligned source and target categories adds difficulty. In such cases, conventional methods suffer from overfitting and fail to successfully transfer the knowledge of the source to the target domain. To address these issues, the following two techniques are proposed. First, we introduce the optimized decision tree construction method, in which the data at each node are split into equal sizes while maximizing the information gain. Compared to the conventional random forests, it generates larger and more balanced decision trees due to the even-split constraint, which contributes to enhanced discrimination power and reduced overfitting. Second, to tackle the domain misalignment problem, we propose the domain alignment loss which penalizes uneven splits of the source and target domain data. By collaboratively optimizing the information gain of the labeled source data as well as the entropy of unlabeled target data distributions, the proposed CoBRF algorithm achieves significantly better performance than the state-of-the-art methods. The proposed algorithm is extensively evaluated in various experimental setups in challenging domain adaptation tasks with noisy and small training data as well as open set domain adaptation problems, for two backbone networks of AlexNet and ResNet-50.""","""This paper proposes new target objectives for training random forests for better cross-domain generalizability. As reviewers mentioned, I think the idea of using random forests for domain adaptation is novel and interesting, while the proposed method has potential especially in the noisy settings. However, I think the paper can be much improved and is not ready to publish due to the following reviewers' comments: - This paper is not well-written and has too many unclear parts in the experiments and method section. The results are not guaranteed to be reproducible given the content of the paper. Also, the organization of the paper could be improved. - The open-set domain adaptation setting requires more elaboration. More carefully designed experiments should be presented. - It remains unclear how the feature extractors can be trained or fine-tuned in the DNN + tree architecture. Applying trees to high-dimensional features sacrifices the interpretability of the tree models, hampering the practical value of the approach. Hence, I recommend rejection."""
662,"""Continuous Control with Contexts, Provably""","['continuous control', 'learning', 'context']","""A fundamental challenge in artificially intelligence is to build an agent that generalizes and adapts to unseen environments. A common strategy is to build a decoder that takes a context of the unseen new environment and generates a policy. The current paper studies how to build a decoder for the fundamental continuous control environment, linear quadratic regulator (LQR), which can model a wide range of real world physical environments. We present a simple algorithm for this problem, which uses upper confidence bound (UCB) to refine the estimate of the decoder and balance the exploration-exploitation trade-off. Theoretically, our algorithm enjoys a pseudo-formula regret bound in the online setting where pseudo-formula is the number of environments the agent played. This also implies after playing pseudo-formula environments, the agent is able to transfer the learned knowledge to obtain an pseudo-formula -suboptimal policy for an unseen environment. To our knowledge, this is first provably efficient algorithm to build a decoder in the continuous control setting. While our main focus is theoretical, we also present experiments that demonstrate the effectiveness of our algorithm.""","""This work considers the popular LQR objective but with [A,B] unknown and dynamically changing. At each time a context [C,D] is observed and it is assumed there exist a linear map Theta from [C,D] to [A,B]. The particular problem statement is novel, but is heavily influenced by other MDP settings and the also follows very closely to previous works. The algorithm seems computationally intractable (a problem shared by previous work this work builds on) and so in experiments a gross approximation is used. Reviewers found the work very stylized and did not adequately review related work. For example, little attention is paid to switching linear systems and the recent LQR advances are relegated to a list of references with no discussion. The reviewers also questioned how the theory relates to the traditional setting of LQR regret, say, if [C,D] were identity at all times so that Theta = [A,B]. This paper received 3 reviews (a third was added late to the process) and my own opinion influenced the decision. While the problem statement is interesting, the work fails to put the paper in context with the existing work, and there are some questions of algorithm methods. """
663,"""Deep Learning of Determinantal Point Processes via Proper Spectral Sub-gradient""","['determinantal point processes', 'deep learning', 'optimization']","""Determinantal point processes (DPPs) is an effective tool to deliver diversity on multiple machine learning and computer vision tasks. Under deep learning framework, DPP is typically optimized via approximation, which is not straightforward and has some conflict with diversity requirement. We note, however, there has been no deep learning paradigms to optimize DPP directly since it involves matrix inversion which may result in highly computational instability. This fact greatly hinders the wide use of DPP on some specific objectives where DPP serves as a term to measure the feature diversity. In this paper, we devise a simple but effective algorithm to address this issue to optimize DPP term directly expressed with L-ensemble in spectral domain over gram matrix, which is more flexible than learning on parametric kernels. By further taking into account some geometric constraints, our algorithm seeks to generate valid sub-gradients of DPP term in case when the DPP gram matrix is not invertible (no gradients exist in this case). In this sense, our algorithm can be easily incorporated with multiple deep learning tasks. Experiments show the effectiveness of our algorithm, indicating promising performance for practical learning problems. ""","""Most reviewers seems in favour of accepting this paper, with the borderline rejection being satisfied with acceptance if the authors take special heed of their comments to improve the clarity of the paper when preparing the final version. From examination of the reviews, the paper achieves enough to warrant publication. Accept."""
664,"""Augmenting Self-attention with Persistent Memory""","['transformer', 'language modeling', 'self-attention']","""Transformer networks have lead to important progress in language modeling and machine translation. These models include two consecutive modules, a feed-forward layer and a self-attention layer. The latter allows the network to capture long term dependencies and are often regarded as the key ingredient in the success of Transformers. Building upon this intuition, we propose a new model that solely consists of attention layers. More precisely, we augment the self-attention layers with persistent memory vectors that play a similar role as the feed-forward layer. Thanks to these vectors, we can remove the feed-forward layer without degrading the performance of a transformer. Our evaluation shows the benefits brought by our model on standard character and word level language modeling benchmarks.""","""This paper proposes a modification to the Transformer architecture in which the self-attention and feed-forward layer are merged into a self-attention layer with ""persistent"" memory vectors. This involves concatenating the contextual representations with global, learned memory vectors, which are attended over. Experiments show slight gains in character and word-level language modeling benchmarks. While the proposed architectural changes are interesting, they are also rather minor and had a small impact in performance and in number of model parameters. The motivation of the persistent memory vector as replacing the FF-layer is a bit tenuous since Eqs 5 and 9 are substantially different. Overall the contribution seems a bit thin for a ICLR paper. I suggest more analysis and possibly experimentation in other tasks in a future iteration of this paper."""
665,"""Using Explainabilty to Detect Adversarial Attacks""","['adversarial', 'detection', 'explainability']","""Deep learning models are often sensitive to adversarial attacks, where carefully-designed input samples can cause the system to produce incorrect decisions. Here we focus on the problem of detecting attacks, rather than robust classification, since detecting that an attack occurs may be even more important than avoiding misclassification. We build on advances in explainability, where activity-map-like explanations are used to justify and validate decisions, by highlighting features that are involved with a classification decision. The key observation is that it is hard to create explanations for incorrect decisions. We propose EXAID, a novel attack-detection approach, which uses model explainability to identify images whose explanations are inconsistent with the predicted class. Specifically, we use SHAP, which uses Shapley values in the space of the input image, to identify which input features contribute to a class decision. Interestingly, this approach does not require to modify the attacked model, and it can be applied without modelling a specific attack. It can therefore be applied successfully to detect unfamiliar attacks, that were unknown at the time the detection model was designed. We evaluate EXAID on two benchmark datasets CIFAR-10 and SVHN, and against three leading attack techniques, FGSM, PGD and C&W. We find that EXAID improves over the SoTA detection methods by a large margin across a wide range of noise levels, improving detection from 70% to over 90% for small perturbations.""","""This paper proposes EXAID, a method to detect adversarial attacks by building on the advances in explainability (particularly SHAP), where activity-map-like explanations are used to justify and validate decisions. Though it may have some valuable ideas, the execution is not satisfying, with various issues raised in comments. No rebuttal was provided."""
666,"""Learning with Protection: Rejection of Suspicious Samples under Adversarial Environment""","['Learning with Rejection', 'Adversarial Examples']","""We propose a novel framework for avoiding the misclassification of data by using a framework of learning with rejection and adversarial examples. Recent developments in machine learning have opened new opportunities for industrial innovations such as self-driving cars. However, many machine learning models are vulnerable to adversarial attacks and industrial practitioners are concerned about accidents arising from misclassification. To avoid critical misclassifications, we define a sample that is likely to be mislabeled as a suspicious sample. Our main idea is to apply a framework of learning with rejection and adversarial examples to assist in the decision making for such suspicious samples. We propose two frameworks, learning with rejection under adversarial attacks and learning with protection. Learning with rejection under adversarial attacks is a naive extension of the learning with rejection framework for handling adversarial examples. Learning with protection is a practical application of learning with rejection under adversarial attacks. This algorithm transforms the original multi-class classification problem into a binary classification for a specific class, and we reject suspicious samples to protect a specific label. We demonstrate the effectiveness of the proposed method in experiments.""","""The paper addresses the setting of learning with rejection while incorporating the ideas from learning with adversarial examples to tackle adversarial attacks. While the reviewers acknowledged the importance to study learning with rejection in this setting, they raised several concerns: (1) lack of technical contribution -- see R1s and R2s related references, see R3s suggestion on designing c(x); (2) insufficient empirical evidence -- see R3s comment about the sensitivity experiment on the strength of the attack, see R1s suggestion to compare with a baseline that learns the rejection function such as SelectiveNet; (3) clarity of presentation -- see R2s suggestions how to improve clarity. Among these, (3) did not have a substantial impact on the decision, but would be helpful to address in a subsequent revision. However, (1) and (2) make it very difficult to assess the benefits of the proposed approach, and were viewed by AC as critical issues. AC can confirm that all three reviewers have read the author responses and have revised the final ratings. AC suggests, in its current state the manuscript is not ready for a publication. We hope the reviews are useful for improving and revising the paper. """
667,"""Emergence of Compositional Language with Deep Generational Transmission""","['Cultural Evolution', 'Deep Learning', 'Language Emergence']","""Recent work has studied the emergence of language among deep reinforcement learning agents that must collaborate to solve a task. Of particular interest are the factors that cause language to be compositional---i.e., express meaning by combining words which themselves have meaning. Evolutionary linguists have found that in addition to structural priors like those already studied in deep learning, the dynamics of transmitting language from generation to generation contribute significantly to the emergence of compositionality. In this paper, we introduce these cultural evolutionary dynamics into language emergence by periodically replacing agents in a population to create a knowledge gap, implicitly inducing cultural transmission of language. We show that this implicit cultural transmission encourages the resulting languages to exhibit better compositional generalization.""","""This paper explores the emergence of language in environments that demand agents communicate, focusing on the compositionality of language, and the cultural transmission of language. Reviewer 1 has several suggestions about new experiments that are possible. The AC does think there is value in many of the suggested experiments, if not to run, then just to acknowledge their possibility and leave for future work. The reviewers also point to some previous work that is very similar. E.g. ""Ease-of-Teaching and Language Structure from Emergent Communication"", Funshan Li et al"""
668,"""Mogrifier LSTM""","['lstm', 'language modelling']","""Many advances in Natural Language Processing have been based upon more expressive models for how inputs interact with the context in which they occur. Recurrent networks, which have enjoyed a modicum of success, still lack the generalization and systematicity ultimately required for modelling language. In this work, we propose an extension to the venerable Long Short-Term Memory in the form of mutual gating of the current input and the previous output. This mechanism affords the modelling of a richer space of interactions between inputs and their context. Equivalently, our model can be viewed as making the transition function given by the LSTM context-dependent. Experiments demonstrate markedly improved generalization on language modelling in the range of 34 perplexity points on Penn Treebank and Wikitext-2, and 0.010.05 bpc on four character-based datasets. We establish a new state of the art on all datasets with the exception of Enwik8, where we close a large gap between the LSTM and Transformer models. ""","""This paper presents a new twist on the typical LSTM that applies several rounds of gating on the history and input, with the end result that the LSTM's transition function is effectively context-dependent. The performance of the model is illustrated on several datasets. In general, the reviews were positive, with one score being upgraded during the rebuttal period. One of the reviewers complained that the baselines were not adequate, but in the end conceded that the results were still worthy of publication. One reviewer argued very hard for the acceptance of this paper ""Papers that are as clear and informative as this one are few and far between. ... As such, I vehemently argue in favor of this paper being accepted to ICLR."""""
669,"""AlgoNet: pseudo-formula Smooth Algorithmic Neural Networks""","['Algorithms', 'Smoothness', 'Differentiable', 'Inverse Problems', 'Adversarial Training', 'Neural Networks', 'Deep Learning', 'Differentiable Renderer', '3D Mesh', 'Turing-completeness', 'Library']","""Artificial neural networks have revolutionized many areas of computer science in recent years, providing solutions to a number of previously unsolved problems. On the other hand, for many problems, classic algorithms exist, which typically exceed the accuracy and stability of neural networks. To combine these two concepts, we present a new kind of neural networksalgorithmic neural networks (AlgoNets). These networks integrate smooth versions of classic algorithms into the topology of neural networks. A forward AlgoNet includes algorithmic layers into existing architectures to enhance performance and explainability while a backward AlgoNet enables solving inverse problems without or with only weak supervision. In addition, we present the algonet package, a PyTorch based library that includes, inter alia, a smoothly evaluated programming language, a smooth 3D mesh renderer, and smooth sorting algorithms.""","""The paper does not provide theory or experiment to justify the various proposed relaxations. In its current form, it has very limited scope."""
670,"""CEB Improves Model Robustness""","['Information Theory', 'Adversarial Robustness']","""We demonstrate that the Conditional Entropy Bottleneck (CEB) can improve model robustness. CEB is an easy strategy to implement and works in tandem with data augmentation procedures. We report results of a large scale adversarial robustness study on CIFAR-10, as well as the IMAGENET-C Common Corruptions Benchmark.""","""This paper proposes CEB, Conditional Entropy Bottleneck, as a way to improves the robustness of a model against adversarial attacks and noisy data. The model is tested empirically using several experiments and various datasets. We appreciate the authors for submitting the paper to ICLR and providing detailed responses to the reviewers' comments and concerns. After the initial reviews and rebuttal, we had extensive discussions to judge whether the contributions are clear and sufficient for publication. In particular, we discussed the overlap with a previous (arXiv) paper and decided that the overlap should not be considered because it is not published at a conference or journal. Plus the paper makes additional contributions. However, reviewers in the end did not think the paper showed sufficient explanation and proof of why and how this model works, and whether this approach improves upon other state-of-the-art adversarial defense approaches. Again, thank you for submitting to ICLR, and I hope to see an improved version in a future publication."""
671,"""Kernel and Rich Regimes in Overparametrized Models""","['Overparametrized', 'Implicit', 'Bias', 'Regularization', 'Kernel', 'Rich', 'Adaptive', 'Regime']","""A recent line of work studies overparametrized neural networks in the ""kernel regime,"" i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat and Bach, we show how the scale of the initialization controls the transition between the ""kernel"" (aka lazy) and ""rich"" (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a simple two-layer model that already exhibits an interesting and meaningful transition between the kernel and rich regimes, and we demonstrate the transition for more complex matrix factorization models and multilayer non-linear networks. ""","""The paper studies how the size of the initialization of neural network weights affects whether the resulting training puts the network in a ""kernel regime"" or a ""rich regime"". Using a two-layer model they show, theoretically and practically, the transition between kernel and rich regimes. Further experiments are provided for more complex settings. The scores of the reviewers were widely spread, with a high score (8) from a low confidence reviewer with a very short review. While the authors responded to the reviewer comments, two of the reviewers (importantly including the one recommending reject) did not further engage. Overall, the paper studies an important problem, and provides insight into how weight initialization size can affect the final network. Unfortunately, there are many strong submissions to ICLR this year, and the submission in its current state is not yet suitable for publication."""
672,"""Contrastive Learning of Structured World Models""","['state representation learning', 'graph neural networks', 'model-based reinforcement learning', 'relational learning', 'object discovery']","""A structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition. Learning such a structured world model from raw sensory data remains a challenge. As a step towards this goal, we introduce Contrastively-trained Structured World Models (C-SWMs). C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure. We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network. This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process. We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation. Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.""","""This paper presents an approach to learn state representations of the scene as well as their action-conditioned transition model, applying contrastive learning on top of a graph neural network. The reviewers unanimously agree that this paper contains a solid research contribution and the authors' response to the reviews further clarified their concerns."""
673,"""CNAS: Channel-Level Neural Architecture Search""",['Neural architecture search'],"""There is growing interest in automating designing good neural network architectures. The NAS methods proposed recently have significantly reduced architecture search cost by sharing parameters, but there is still a challenging problem of designing search space. We consider search space is typically defined with its shape and a set of operations and propose a channel-level architecture search\,(CNAS) method using only a fixed type of operation. The resulting architecture is sparse in terms of channel and has different topology at different cell. The experimental results for CIFAR-10 and ImageNet show that a fine-granular and sparse model searched by CNAS achieves very competitive performance with dense models searched by the existing methods.""","""This paper proposes a channel pruning approach based one-shot neural architecture search (NAS). As agreed by all reviewers, it has limited novelty, and the method can be viewed as a straightforward combination of NAS and pruning. Experimental results are not convincing. The proposed method is not better than STOA on the accuracy or number of parameters. The setup is not fair, as the proposed method uses autoaugment while the other baselines do not. The authors should also compare with related methods such as Bayesnas, and other pruning techniques. Finally, the paper is poorly written, and many related works are missing."""
674,"""How the Softmax Activation Hinders the Detection of Adversarial and Out-of-Distribution Examples in Neural Networks""","['Adversarial examples', 'out-of-distribution', 'detection', 'softmax', 'logits']","""Despite having excellent performances for a wide variety of tasks, modern neural networks are unable to provide a prediction with a reliable confidence estimate which would allow to detect misclassifications. This limitation is at the heart of what is known as an adversarial example, where the network provides a wrong prediction associated with a strong confidence to a slightly modified image. Moreover, this overconfidence issue has also been observed for out-of-distribution data. We show through several experiments that the softmax activation, usually placed as the last layer of modern neural networks, is partly responsible for this behaviour. We give qualitative insights about its impact on the MNIST dataset, showing that relevant information present in the logits is lost once the softmax function is applied. The same observation is made through quantitative analysis, as we show that two out-of-distribution and adversarial example detectors obtain competitive results when using logit values as inputs, but provide considerably lower performances if they use softmax probabilities instead: from 98.0% average AUROC to 56.8% in some settings. These results provide evidence that the softmax activation hinders the detection of adversarial and out-of-distribution examples, as it masks a significant part of the relevant information present in the logits.""",""" The paper investigates how the softmax activation hinders the detection of out-of-distribution examples. All the reviewers felt that the paper requires more work before it can be accepted. In particular, the reviewers raised several concerns about theoretical justification, comparison to other existing methods, discussion of connection to existing methods and scalability to larger number of classes. I encourage the authors to revise the draft based on the reviewers feedback and resubmit to a different venue. """
675,"""Benefits of Overparameterization in Single-Layer Latent Variable Generative Models""","['overparameterization', 'unsupervised', 'parameter recovery', 'rigorous experiments']","""One of the most surprising and exciting discoveries in supervising learning was the benefit of overparameterization (i.e. training a very large model) to improving the optimization landscape of a problem, with minimal effect on statistical performance (i.e. generalization). In contrast, unsupervised settings have been under-explored, despite the fact that it has been observed that overparameterization can be helpful as early as Dasgupta & Schulman (2007). In this paper, we perform an exhaustive study of different aspects of overparameterization in unsupervised learning via synthetic and semi-synthetic experiments. We discuss benefits to different metrics of success (recovering the parameters of the ground-truth model, held-out log-likelihood), sensitivity to variations of the training algorithm, and behavior as the amount of overparameterization increases. We find that, when learning using methods such as variational inference, larger models can significantly increase the number of ground truth latent variables recovered.""","""This paper studies over-parameterization for unsupervised learning. The paper does a series of empirical studies on this topic. Among other things the authors observe that larger models can increase the number latent variables recovered when fitting larger variational inference models. The reviewers raised some concern about the simplicity of the models studied and also lack of some theoretical justification. One reviewer also suggests that more experiments and ablation studies on more general models will further help clarify the role over-parameterized model for latent generative models. I agree with the reviewers that this paper is ""compelling reason for theoretical research on the interplay between overparameterization and parameter recovery in latent variable neural networks trained with gradient descent methods"". I disagree with the reviewers that theoretical study is required as I think a good empirical paper with clear conjectures is as important. I do agree with the reviewers however that for empirical paper I think the empirical studies would have to be a bit more thorough with more clear conjectures. In summary, I think the paper is nice and raises a lot of interesting questions but can be improved with more through studies and conjectures. I would have liked to have the paper accepted but based on the reviewer scores and other papers in my batch I can not recommend acceptance at this time. I strongly recommend the authors to revise and resubmit. I really think this is a nice paper and has a lot of potential and can have impact with appropriate revision."""
676,"""Towards Certified Defense for Unrestricted Adversarial Attacks""","['Adversarial Defense', 'Certified Defense', 'Adversarial Examples']","""Certified defenses against adversarial examples are very important in safety-critical applications of machine learning. However, existing certified defense strategies only safeguard against perturbation-based adversarial attacks, where the attacker is only allowed to modify normal data points by adding small perturbations. In this paper, we provide certified defenses under the more general threat model of unrestricted adversarial attacks. We allow the attacker to generate arbitrary inputs to fool the classifier, and assume the attacker knows everything except the classifiers' parameters and the training dataset used to learn it. Lack of knowledge about the classifiers parameters prevents an attacker from generating adversarial examples successfully. Our defense draws inspiration from differential privacy, and is based on intentionally adding noise to the classifier's outputs to limit the attacker's knowledge about the parameters. We prove concrete bounds on the minimum number of queries required for any attacker to generate a successful adversarial attack. For a simple linear classifiers we prove that the bound is asymptotically optimal up to a constant by exhibiting an attack algorithm that achieves this lower bound. We empirically show the success of our defense strategy against strong black box attack algorithms.""","""This paper proposes a certified defense under the more general threat model beyond additive perturbation. The proposed defense method is based on adding noise to the classifier's outputs to limit the attacker's knowledge about the parameters, which is similar to differential privacy mechanism. The authors proved the query complexity for any attacker to generate a successful adversarial attack. The main objection of this work is (1) the assumption of the attacker and the definition of the query complexity (to recover the optimal classifier rather than generating an adversarial example successfully) is uncommon, (2) the claim is misleading, and (3) the experimental evaluation is not sufficient (only two attacks are evaluated). The authors only provided a brief response to address the reviewers comments/questions without submitting a revision. Unfortunately none of the reviewer is in support of this paper even after author response. """
677,"""Graph Convolutional Reinforcement Learning""",[],"""Learning to cooperate is crucially important in multi-agent environments. The key is to understand the mutual interplay between agents. However, multi-agent environments are highly dynamic, where agents keep moving and their neighbors change quickly. This makes it hard to learn abstract representations of mutual interplay between agents. To tackle these difficulties, we propose graph convolutional reinforcement learning, where graph convolution adapts to the dynamics of the underlying graph of the multi-agent environment, and relation kernels capture the interplay between agents by their relation representations. Latent features produced by convolutional layers from gradually increased receptive fields are exploited to learn cooperation, and cooperation is further improved by temporal relation regularization for consistency. Empirically, we show that our method substantially outperforms existing methods in a variety of cooperative scenarios.""","""The work proposes a graph convolutional network based approach to multi-agent reinforcement learning. This approach is designed to be able to adaptively capture changing interactions between agents. Initial reviews highlighted several limitations but these were largely addressed by the authors. The resulting paper makes a valuable contribution by proposing a well-motivated approach, and by conducting extensive empirical validation and analysis that result in novel insights. I encourage the authors to take on board any remaining reviewer suggestions as they prepare the camera ready version of the paper."""
678,"""Disentangling Factors of Variations Using Few Labels""",[],"""Learning disentangled representations is considered a cornerstone problem in representation learning. Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow to consistently learn disentangled representations. However, in many practical settings, one might have access to a limited amount of supervision, for example through manual labeling of (some) factors of variation in a few training examples. In this paper, we investigate the impact of such supervision on state-of-the-art disentanglement methods and perform a large scale study, training over 52000 models under well-defined and reproducible experimental conditions. We observe that a small number of labeled examples (0.01--0.5% of the data set), with potentially imprecise and incomplete labels, is sufficient to perform model selection on state-of-the-art unsupervised models. Further, we investigate the benefit of incorporating supervision into the training process. Overall, we empirically validate that with little and imprecise supervision it is possible to reliably learn disentangled representations.""","""This paper addresses the problem of learning disentangled representations and shows that the introduction of a few labels corresponding to the desired factors of variation can be used to increase the separation of the learned representation. There were mixed scores for this work. Two reviewers recommended weak acceptance while one reviewer recommended rejection. All reviewers and authors agreed that the main conclusion that the labeled factors of variation can be used to improved disentanglement is perhaps expected. However, reviewers 2 and 3 argue that this work presents extensive experimental evidence to support this claim which will be of value to the community. The main concerns of R1 center around a lack of clear analysis and synthesis of the large number of experiments. Though there is a page limit we encourage the authors to revise their manuscript with a specific focus on clarity and take-away messages from their results. After careful consideration of all reviewer comments and author rebuttals the AC recommends acceptance of this work. The potential contribution of the extensive experimental evidence warrants presentation at ICLR. However, again, we encourage the authors to consider ways to mitigate the concerns of R1 in their final manuscript. """
679,"""Meta-Graph: Few shot Link Prediction via Meta Learning""","['Meta Learning', 'Link Prediction', 'Graph Representation Learning', 'Graph Neural Networks']","""We consider the task of few shot link prediction, where the goal is to predict missing edges across multiple graphs using only a small sample of known edges. We show that current link prediction methods are generally ill-equipped to handle this task---as they cannot effectively transfer knowledge between graphs in a multi-graph setting and are unable to effectively learn from very sparse data. To address this challenge, we introduce a new gradient-based meta learning framework, Meta-Graph, that leverages higher-order gradients along with a learned graph signature function that conditionally generates a graph neural network initialization. Using a novel set of few shot link prediction benchmarks, we show that Meta-Graph enables not only fast adaptation but also better final convergence and can effectively learn using only a small sample of true edges.""","""This paper presents a new link prediction framework in the case of small amount labels using meta learning methods. The reviewers think the problem is important, and the proposed approach is a modification of meta learning to this case. However, the method is not compared to other knowledge graph completion methods such as TransE, RotaE, Neural Tensor Factorization in benchmark dataset such as Fb15k and freebase. Adding these comparisons can make the paper more convincing. """
680,"""Neural Subgraph Isomorphism Counting""","['subgraph isomorphism', 'graph neural networks']","""In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms. Although the learning based approach is inexact, we are able to generalize to count large patterns and data graphs in polynomial time compared to the exponential time of the original NP-complete problem. Different from other traditional graph learning problems such as node classification and link prediction, subgraph isomorphism counting requires more global inference to oversee the whole graph. To tackle this problem, we propose a dynamic intermedium attention memory network (DIAMNet) which augments different representation learning architectures and iteratively attends pattern and target data graphs to memorize different subgraph isomorphisms for the global counting. We develop both small graphs (<= 1,024 subgraph isomorphisms in each) and large graphs (<= 4,096 subgraph isomorphisms in each) sets to evaluate different models. Experimental results show that learning based subgraph isomorphism counting can help reduce the time complexity with acceptable accuracy. Our DIAMNet can further improve existing representation learning models for this more global problem.""","""This paper proposes a method called Dynamic Intermedium Attention Memory Network (DIAMNet) to learn the subgraph isomorphism counting for a given pattern graph P and target graph G. However, the reviewers think the experimental comparisons are insufficient. Furthermore, the evaluation is only for synthetic dataset for which generating process is designed by the authors. If possible, evaluation on benchmark graph datasets would be convincing though creating the ground truth might be difficult for larger graphs. """
681,"""Learning relevant features for statistical inference""","['unsupervised learning', 'non-parametric probabilistic model', 'singular value decomposition', 'fisher information metric', 'chi-squared distance']","""We introduce an new technique to learn correlations between two types of data. The learned representation can be used to directly compute the expectations of functions over one type of data conditioned on the other, such as Bayesian estimators and their standard deviations. Specifically, our loss function teaches two neural nets to extract features representing the probability vectors of highest singular value for the stochastic map (set of conditional probabilities) implied by the joint dataset, relative to the inner product defined by the Fisher information metrics evaluated at the marginals. We test the approach using a synthetic dataset, analytical calculations, and inference on occluded MNIST images. Surprisingly, when applied to supervised learning (one dataset consists of labels), this approach automatically provides regularization and faster convergence compared to the cross-entropy objective. We also explore using this approach to discover salient independent features of a single dataset. ""","""This manuscript proposes an approach for estimating cross-correlations between model outputs, related to deep CCA. Authors note that the procedure improves results when applied to supervised learning problems. The reviewers have pointed out the close connection to previous work on deep CCA, and the author(s) have agreed. The reviewers agree that the paper has promise if properly expanded both theoretically and empirically."""
682,"""Policy Tree Network""",['Reinforcement Learning'],"""Decision-time planning policies with implicit dynamics models have been shown to work in discrete action spaces with Q learning. However, decision-time planning with implicit dynamics models in continuous action space has proven to be a difficult problem. Recent work in Reinforcement Learning has allowed for implicit model based approaches to be extended to Policy Gradient methods. In this work we propose Policy Tree Network (PTN). Policy Tree Network lies at the intersection of Model-Based Reinforcement Learning and Model-Free Reinforcement Learning. Policy Tree Network is a novel approach which, for the first time, demonstrates how to leverage an implicit model to perform decision-time planning with Policy Gradient methods in continuous action spaces. This work is empirically justified on 8 standard MuJoCo environments so that it can easily be compared with similar work done in this area. Additionally, we offer a lower bound on the worst case change in the mean of the policy when tree planning is used and theoretically justify our design choices.""","""The consensus amongst the reviewers is that the paper discusses an interesting idea and shows significant promise, but that the presentation of the initial submission was not of a publishable standard. While some of the issues were clarified during discussion, the reviewers agree that the paper lacks polish and is therefore not ready. While I think Reviewer #3 is overly strict in sticking to a 1, as it is the nature of ICLR to allow papers to be improved through the discussion, in the absence of any of the reviewers being ready to champion the paper, I cannot recommend acceptance. I however have no doubt that with further work on the presentation of what sounds like a potentially fascinating contribution to the field, the paper will stand a chance at acceptance at a future conference."""
683,"""Extreme Classification via Adversarial Softmax Approximation""","['Extreme classification', 'negative sampling']","""Training a classifier over a large number of classes, known as 'extreme classification', has become a topic of major interest with applications in technology, science, and e-commerce. Traditional softmax regression induces a gradient cost proportional to the number of classes C, which often is prohibitively expensive. A popular scalable softmax approximation relies on uniform negative sampling, which suffers from slow convergence due a poor signal-to-noise ratio. In this paper, we propose a simple training method for drastically enhancing the gradient signal by drawing negative samples from an adversarial model that mimics the data distribution. Our contributions are three-fold: (i) an adversarial sampling mechanism that produces negative samples at a cost only logarithmic in C, thus still resulting in cheap gradient updates; (ii) a mathematical proof that this adversarial sampling minimizes the gradient variance while any bias due to non-uniform sampling can be removed; (iii) experimental results on large scale data sets that show a reduction of the training time by an order of magnitude relative to several competitive baselines. ""","""The paper proposes a fast training method for extreme classification problems where number of classes is very large. The method improves the negative sampling (method which uses uniform distribution to sample the negatives) by using an adversarial auxiliary model to sample negatives in a non-uniform manner. This has logarithmic computational cost and minimizes the variance in the gradients. There were some concerns about missing empirical comparisons with methods that use sampled-softmax approach for extreme classification. While these comparisons will certainly add further value to the paper, the improvement over widely used method of negative sampling and a formal analysis of improvement from hard negatives is a valuable contribution in itself that will be of interest to the community. Authors should include the experiments on small datasets to quantify the approximation gap due to negative sampling compared to full softmax, as promised."""
684,"""Off-policy Bandits with Deficient Support""","['Recommender System', 'Search Engine', 'Counterfactual Learning']","""Off-policy training of contextual-bandit policies is attractive in online systems (e.g. search, recommendation, ad placement), since it enables the reuse of large amounts of log data from the production system. State-of-the-art methods for off-policy learning, however, are based on inverse propensity score (IPS) weighting, which requires that the logging policy chooses all actions with non-zero probability for any context (i.e., full support). In real-world systems, this condition is often violated, and we show that existing off-policy learning methods based on IPS weighting can fail catastrophically. We therefore develop new off-policy contextual-bandit methods that can controllably and robustly learn even when the logging policy has deficient support. To this effect, we explore three approaches that provide various guarantees for safe learning despite the inherent limitations of support deficient data: restricting the action space, reward extrapolation, and restricting the policy space. We analyze the statistical and computational properties of these three approaches, and empirically evaluate their effectiveness in a series of experiments. We find that controlling the policy space is both computationally efficient and that it robustly leads to accurate policies.""","""This paper tackles the problem of learning off-policy in the contextual bandit problem, more specifically when the available data is deficient (in the sense that it does not allow to build reasonable counterfactual estimators). To address this, the authors introduce three strategies: 1) restricting the action space; 2) imputing missing rewards when lacking data; 3) restricting the policy space to policies with ""enough"" data. All three approaches are analyzed (statistical and computational properties) and evaluated empirically. Restricting the policy space appears to be particularly effective in practice. Although the problem being solved is very relevant, it is not clear how this work is positioned with respect to approaches solving similar problems in RL. For example, Batch constrained Q-learning ([1]) restricts action space, while Bootstrapping Error Accumulation ([2]) and SPIBB ([3]) restrict the policy class in batch RL. A comparison with these techniques in the contextual bandit settings, in addition to recent state-of-the-art off-policy bandit approaches (Liu et al. (2019), Xie et al. (2019)) is lacking. Moreover, given the newly added results (DR method by Tang et al. (2019)), it is not clear how the proposed approach improves over existing techniques. This should be clarified. I therefore recommend to reject this paper. """
685,"""How Does Learning Rate Decay Help Modern Neural Networks?""","['Learning rate decay', 'Optimization', 'Explainability', 'Deep learning', 'Transfer learning']","""Learning rate decay (lrDecay) is a \emph{de facto} technique for training modern neural networks. It starts with a large learning rate and then decays it multiple times. It is empirically observed to help both optimization and generalization. Common beliefs in how lrDecay works come from the optimization analysis of (Stochastic) Gradient Descent: 1) an initially large learning rate accelerates training or helps the network escape spurious local minima; 2) decaying the learning rate helps the network converge to a local minimum and avoid oscillation. Despite the popularity of these common beliefs, experiments suggest that they are insufficient in explaining the general effectiveness of lrDecay in training modern neural networks that are deep, wide, and nonconvex. We provide another novel explanation: an initially large learning rate suppresses the network from memorizing noisy data while decaying the learning rate improves the learning of complex patterns. The proposed explanation is validated on a carefully-constructed dataset with tractable pattern complexity. And its implication, that additional patterns learned in later stages of lrDecay are more complex and thus less transferable, is justified in real-world datasets. We believe that this alternative explanation will shed light into the design of better training strategies for modern neural networks.""","""This paper seeks to understand the effect of learning rate decay in neural net training. This is an important question in the field and this paper also proposes to show why previous explanations were not correct. However, the reviewers found that the paper did not explain the experimental setup enough to be reproducible. Furthermore, there are significant problems with the novelty of the work due to its overlap with works such as (Nakiran et al., 2019), (Li et al. 2019) or (Jastrzbski et al. 2017)."""
686,"""Improving Differentially Private Models with Active Learning""","['Differential Privacy', 'Active Learning']","""Broad adoption of machine learning techniques has increased privacy concerns for models trained on sensitive data such as medical records. Existing techniques for training differentially private (DP) models give rigorous privacy guarantees, but applying these techniques to neural networks can severely degrade model performance. This performance reduction is an obstacle to deploying private models in the real world. In this work, we improve the performance of DP models by fine-tuning them through active learning on public data. We introduce two new techniques - DiversePublic and NearPrivate - for doing this fine-tuning in a privacy-aware way. For the MNIST and SVHN datasets, these techniques improve state-of-the-art accuracy for DP models while retaining privacy guarantees.""","""This paper provides an active-learning approach to improve the performance of an existing differentially private classifier with public labeled data. Where the paper provides a new approach, there is a consensus among the reviewers that the paper does not provide a strong enough contribution for acceptance. The authors can potentially improve the submission by including a more comprehensive comparison with the PATE framework and improving its overall presentation."""
687,"""Deep exploration by novelty-pursuit with maximum state entropy""","['Exploration', 'Reinforcement Learning']","""Efficient exploration is essential to reinforcement learning in huge state space. Recent approaches to address this issue include the intrinsically motivated goal exploration process (IMGEP) and the maximum state entropy exploration (MSEE). In this paper, we disclose that goal-conditioned exploration behaviors in IMGEP can also maximize the state entropy, which bridges the IMGEP and the MSEE. From this connection, we propose a maximum entropy criterion for goal selection in goal-conditioned exploration, which results in the new exploration method novelty-pursuit. Novelty-pursuit performs the exploration in two stages: first, it selects a goal for the goal-conditioned exploration policy to reach the boundary of the explored region; then, it takes random actions to explore the non-explored region. We demonstrate the effectiveness of the proposed method in environments from simple maze environments, Mujoco tasks, to the long-horizon video game of SuperMarioBros. Experiment results show that the proposed method outperforms the state-of-the-art approaches that use curiosity-driven exploration.""","""There is insufficient support to recommend accepting this paper. The reviewers unanimously recommended rejection, and did not change their recommendation after the author response period. The technical depth of the paper was criticized, as was the experimental evaluation. The review comments should help the authors strenghen this work."""
688,"""Massively Multilingual Sparse Word Representations""","['sparse word representations', 'multilinguality', 'sparse coding']","""In this paper, we introduce Mamus for constructing multilingual sparse word representations. Our algorithm operates by determining a shared set of semantic units which get reutilized across languages, providing it a competitive edge both in terms of speed and evaluation performance. We demonstrate that our proposed algorithm behaves competitively to strong baselines through a series of rigorous experiments performed towards downstream applications spanning over dependency parsing, document classification and natural language inference. Additionally, our experiments relying on the QVEC-CCA evaluation score suggests that the proposed sparse word representations convey an increased interpretability as opposed to alternative approaches. Finally, we are releasing our multilingual sparse word representations for the 27 typologically diverse set of languages that we conducted our various experiments on.""","""This paper describes a new method for creating word embeddings that can operate on corpora from more than one language. The algorithm is simple, but rivals more complex approaches. The reviewers were happy with this paper. They were also impressed that the authors ran the requested multi-lingual BERT experiments, even though they did not show positive results. One reviewer did think that non-contextual word embeddings were of less interest to the NLP community, but thought your arguments for the computational efficiency were convincing."""
689,"""LEX-GAN: Layered Explainable Rumor Detector Based on Generative Adversarial Networks""","['explainable rumor detection', 'layered generative adversarial networks']","""Social media have emerged to be increasingly popular and have been used as tools for gathering and propagating information. However, the vigorous growth of social media contributes to the fast-spreading and far-reaching rumors. Rumor detection has become a necessary defense. Traditional rumor detection methods based on hand-crafted feature selection are replaced by automatic approaches that are based on Artificial Intelligence (AI). AI decision making systems need to have the necessary means, such as explainability to assure users their trustworthiness. Inspired by the thriving development of Generative Adversarial Networks (GANs) on text applications, we propose LEX-GAN, a GAN-based layered explainable rumor detector to improve the detection quality and provide explainability. Unlike fake news detection that needs a previously collected verified news database, LEX-GAN realizes explainable rumor detection based on only tweet-level text. LEX-GAN is trained with generated non-rumor-looking rumors. The generators produce rumors by intelligently inserting controversial information in non-rumors, and force the discriminators to detect detailed glitches and deduce exactly which parts in the sentence are problematic. The layered structures in both generative and discriminative model contributes to the high performance. We show LEX-GAN's mutation detection ability in textural sequences by performing a gene classification and mutation detection task.""","""The paper is well-written and presents an extensive set of experiments. The architecture is a simple yet interesting attempt at learning explainable rumour detection models. Some reviewers worry about the novelty of the approach, and whether the explainability of the model is in fact properly evaluated. The authors responded to the reviews and provided detailed feedback. A major limitation of this work is that explanations are at the level of input words. This is common in interpretability (LIME, etc), but it is not clear that explanations/interpretations are best provided at this level and not, say, at the level of training instances or at a more abstract level. It is also not clear that this approach would scale to languages that are morphologically rich and/or harder to segment into words. Since modern approaches to this problem would likely include pretrained language models, it is an interesting problem to make such architectures interpretable. """
690,"""Stochastic Neural Physics Predictor""","['physics prediction', 'forward dynamics', 'stochastic environments', 'dropout']","""Recently, neural-network based forward dynamics models have been proposed that attempt to learn the dynamics of physical systems in a deterministic way. While near-term motion can be predicted accurately, long-term predictions suffer from accumulating input and prediction errors which can lead to plausible but different trajectories that diverge from the ground truth. A system that predicts distributions of the future physical states for long time horizons based on its uncertainty is thus a promising solution. In this work, we introduce a novel robust Monte Carlo sampling based graph-convolutional dropout method that allows us to sample multiple plausible trajectories for an initial state given a neural-network based forward dynamics predictor. By introducing a new shape preservation loss and training our dynamics model recurrently, we stabilize long-term predictions. We show that our models long-term forward dynamics prediction errors on complicated physical interactions of rigid and deformable objects of various shapes are significantly lower than existing strong baselines. Lastly, we demonstrate how generating multiple trajectories with our Monte Carlo dropout method can be used to train model-free reinforcement learning agents faster and to better solutions on simple manipulation tasks.""","""The paper presents a timely method for intuitive physics simulations that expand on the HTRN model, and tested in several physicals systems with rigid and deformable objects as well as other results later in the review. Reviewer 3 was positive about the paper, and suggested improving the exposition to make it more self-contained. Reviewer 1 raised questions about the complexity of tasks and a concerns of limited advancement provided by the paper. Reviewer 2, had a similar concerns about limited clarity as to how the changes contribute to the results, and missing baselines. The authors provided detailed responses in all cases, providing some additional results with various other videos. After discussion and reviewing the additional results, the role of the stochastic elements of the model and its contributions to performance remained and the reviewers chose not to adjust their ratings. The paper is interesting, timely and addresses important questions, but questions remain. We hope the review has provided useful information for their ongoing research. """
691,"""Provenance detection through learning transformation-resilient watermarking""","['watermarking', 'provenance detection']","""Advancements in deep generative models have made it possible to synthesize images, videos and audio signals that are hard to distinguish from natural signals, creating opportunities for potential abuse of these capabilities. This motivates the problem of tracking the provenance of signals, i.e., being able to determine the original source of a signal. Watermarking the signal at the time of signal creation is a potential solution, but current techniques are brittle and watermark detection mechanisms can easily be bypassed by doing some post-processing (cropping images, shifting pitch in the audio etc.). In this paper, we introduce ReSWAT (Resilient Signal Watermarking via Adversarial Training), a framework for learning transformation-resilient watermark detectors that are able to detect a watermark even after a signal has been through several post-processing transformations. Our detection method can be applied to domains with continuous data representations such as images, videos or sound signals. Experiments on watermarking image and audio signals show that our method can reliably detect the provenance of a synthetic signal, even if the signal has been through several post-processing transformations, and improve upon related work in this setting. Furthermore, we show that for specific kinds of transformations (perturbations bounded in the pseudo-formula norm), we can even get formal guarantees on the ability of our model to detect the watermark. We provide qualitative examples of watermarked image and audio samples in the anonymous code submission link.""","""This paper offers an interesting and potentially useful approach to robust watermarking. The reviewers are divided on the significance of the method. The most senior and experienced reviewer was the most negative. On balance, my assessment of this paper is borderline; given the number of more highly ranked papers in my pile, that means I have to assign ""reject""."""
692,"""I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively""",['model comparison'],"""The learning of hierarchical representations for image classification has experienced an impressive series of successes due in part to the availability of large-scale labeled data for training. On the other hand, the trained classifiers have traditionally been evaluated on small and fixed sets of test images, which are deemed to be extremely sparsely distributed in the space of all natural images. It is thus questionable whether recent performance improvements on the excessively re-used test sets generalize to real-world natural images with much richer content variations. Inspired by efficient stimulus selection for testing perceptual models in psychophysical and physiological studies, we present an alternative framework for comparing image classifiers, which we name the MAximum Discrepancy (MAD) competition. Rather than comparing image classifiers using fixed test images, we adaptively sample a small test set from an arbitrarily large corpus of unlabeled images so as to maximize the discrepancies between the classifiers, measured by the distance over WordNet hierarchy. Human labeling on the resulting model-dependent image sets reveals the relative performance of the competing classifiers, and provides useful insights on potential ways to improve them. We report the MAD competition results of eleven ImageNet classifiers while noting that the framework is readily extensible and cost-effective to add future classifiers into the competition. Codes can be found at pseudo-url.""","""This paper proposes a new way of comparing classifiers, which does not use fixed test set and adaptively sample it from an arbitrarily large corpus of unlabeled images, i.e. replacing the conventional test-set-based evaluation methods with a more flexible mechanism. The main proposal is to build a test set adaptively in a manner that captures how classifiers disagree, as measured by the wordnet tree. As noted by R2, this work has the potential to be of interest to a broad audience and can motivate many subsequent works. While the reviewers acknowledged the importance of this work, they raised several concerns: (1) the proposed approach is immature to be considered for benchmarking yet (R1,R4), (2) selecting k and studying its influence on the performance ( R1, R3, R4), (3) the proposed approach requires data annotation which might not be straightforward -- (R3, R4). The authors provided a detailed rebuttal addressing the reviewer concerns. There is reviewer disagreement on this paper. The comments from R3 were valuable for the discussion, but at the same time too brief to be adequately addressed by the authors. The comments from emergency reviewer were helpful in making the decision. AC decided to recommend acceptance of the paper seeing its valuable contributions towards re-thinking the evaluation of current SOTA models. """
693,"""Batch-shaping for learning conditional channel gated networks""","['Conditional computation', 'channel gated networks', 'gating', 'Batch-shaping', 'distribution matching', 'image classification', 'semantic segmentation']","""We present a method that trains large capacity neural networks with significantly improved accuracy and lower dynamic computational cost. This is achieved by gating the deep-learning architecture on a fine-grained-level. Individual convolutional maps are turned on/off conditionally on features in the network. To achieve this, we introduce a new residual block architecture that gates convolutional channels in a fine-grained manner. We also introduce a generally applicable tool batch-shaping that matches the marginal aggregate posteriors of features in a neural network to a pre-specified prior distribution. We use this novel technique to force gates to be more conditional on the data. We present results on CIFAR-10 and ImageNet datasets for image classification, and Cityscapes for semantic segmentation. Our results show that our method can slim down large architectures conditionally, such that the average computational cost on the data is on par with a smaller architecture, but with higher accuracy. In particular, on ImageNet, our ResNet50 and ResNet34 gated networks obtain 74.60% and 72.55% top-1 accuracy compared to the 69.76% accuracy of the baseline ResNet18 model, for similar complexity. We also show that the resulting networks automatically learn to use more features for difficult examples and fewer features for simple examples.""","""The paper describes a method to train a convolutional network with large capacity, where channel-gating (input conditioned) is implemented - thus, only parts of the network are used at inference time. The paper builds over previous work, with the main contribution being a ""batch-shaping"" technique that regularizes the channel gating to follow a beta distribution, combined with L0 regularization. The paper shows that ResNet trained with this technique can achieve higher accuracy with lower theoretical MACs. Weakness of the paper is that more engineering would be required to convert the theoretical MACs into actual running time - which would further validate the practicality of the approach. """
694,"""Compressive Recovery Defense: A Defense Framework for \ell_2 and pseudo-formula norm attacks.""","['adversarial input', 'adversarial machine learning', 'neural networks', 'compressive sensing.']","""We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in \cite{bafna2018thwarting} to defend neural networks against pseudo-formula , pseudo-formula , and pseudo-formula -norm attacks. In the case of pseudo-formula -norm noise, we provide recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For pseudo-formula -norm bounded noise, we provide recovery guarantees for BP, and for the case of pseudo-formula -norm bounded noise, we provide recovery guarantees for Dantzig Selector (DS). These guarantees theoretically bolster the defense framework introduced in \cite{bafna2018thwarting} for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate the effectiveness of this defense framework against an array of pseudo-formula , pseudo-formula and pseudo-formula -norm attacks. ""","""After reading the author's response, all the reviwers still think that this paper is a simple extension of gradient masking, and can not provide the robustness in neural networks."""
695,"""On the implicit minimization of alternative loss functions when training deep networks""","['implicit minimization', 'optimization bias', 'margin based loss functions', 'flat minima']","""Understanding the implicit bias of optimization algorithms is important in order to improve generalization of neural networks. One approach to try to exploit such understanding would be to then make the bias explicit in the loss function. Conversely, an interesting approach to gain more insights into the implicit bias could be to study how different loss functions are being implicitly minimized when training the network. In this work, we concentrate our study on the inductive bias occurring when minimizing the cross-entropy loss with different batch sizes and learning rates. We investigate how three loss functions are being implicitly minimized during training. These three loss functions are the Hinge loss with different margins, the cross-entropy loss with different temperatures and a newly introduced Gcdf loss with different standard deviations. This Gcdf loss establishes a connection between a sharpness measure for the 01 loss and margin based loss functions. We find that a common behavior is emerging for all the loss functions considered.""","""The paper proposes an interesting setting in which the effect of different optimization parameters on the loss function is analyzed. The analysis is based on considering cross-entropy loss with different softmax parameters, or hinge loss with different margin parameters. The observations are interesting but ultimately the reviewers felt that the experimental results were not sufficient to warrant publication at ICLR. The reviews unanimously recommended rejection, and no rebuttal was provided."""
696,"""Adaptive Generation of Unrestricted Adversarial Inputs""","['Adversarial Examples', 'Adversarial Robustness', 'Generative Adversarial Networks', 'Image Classification']","""Neural networks are vulnerable to adversarially-constructed perturbations of their inputs. Most research so far has considered perturbations of a fixed magnitude under some pseudo-formula norm. Although studying these attacks is valuable, there has been increasing interest in the construction ofand robustness tounrestricted attacks, which are not constrained to a small and rather artificial subset of all possible adversarial inputs. We introduce a novel algorithm for generating such unrestricted adversarial inputs which, unlike prior work, is adaptive: it is able to tune its attacks to the classifier being targeted. It also offers a 4002,000 speedup over the existing state of the art. We demonstrate our approach by generating unrestricted adversarial inputs that fool classifiers robust to perturbation-based attacks. We also show that, by virtue of being adaptive and unrestricted, our attack is able to bypass adversarial training against it.""","""This paper presents an interesting method for creating adversarial examples using a GAN. Reviewers are concerned that ImageNet Results, while successfully evading a classifier, do not appear to be natural images. Furthermore, the attacks are demonstrated on fairly weak baseline classifiers that are known to be easily broken. They attack Resnet50 (without adv training), for which Lp-bounded attacks empirically seem to produce more convincing images. For MNIST, they attack Wong and Kolters ""certifiable"" defense, which is empirically much weaker than an adversarially trained network, and also weaker than more recent certifiable baselines. """
697,"""Adaptive Data Augmentation with Deep Parallel Generative Models""",[],"""Data augmentation(DA) is a useful technique to enlarge the size of the training set and prevent overfitting for different machine learning tasks when training data is scarce. However, current data augmentation techniques rely heavily on human design and domain knowledge, and existing automated approaches are yet to fully exploit the latent features in the training dataset. In this paper we propose an adaptive DA strategy based on generative models, where the training set adaptively enriches itself with sample images automatically constructed from deep generative models trained in parallel. We demonstrate by experiments that our data augmentation strategy, with little model-specific considerations, can be easily adapted to cross-domain deep learning/machine learning tasks such as image classification and image inpainting, while significantly improving model performance in both tasks. ""","""This paper proposes a data augmentation method based on Generative Adversarial Networks by training several GANs on subsets of the data which are then used to synthesise new training examples in proportion to their estimated quality as measured by the Inception Score. The reviewers have raised several critical issues with the work, including motivation (it can be harder to train a generative model than a discriminative one), novelty, complexity of the proposed method, and lack of comparison to existing methods. Perhaps the most important one is the inadequate empirical evaluation. The authors didnt address any of the raised concerns in the rebuttal. I will hence recommend the rejection of this paper."""
698,"""Reconstructing continuous distributions of 3D protein structure from cryo-EM images""","['generative models', 'proteins', '3D reconstruction', 'cryo-EM']","""Cryo-electron microscopy (cryo-EM) is a powerful technique for determining the structure of proteins and other macromolecular complexes at near-atomic resolution. In single particle cryo-EM, the central problem is to reconstruct the 3D structure of a macromolecule from pseudo-formula noisy and randomly oriented 2D projection images. However, the imaged protein complexes may exhibit structural variability, which complicates reconstruction and is typically addressed using discrete clustering approaches that fail to capture the full range of protein dynamics. Here, we introduce a novel method for cryo-EM reconstruction that extends naturally to modeling continuous generative factors of structural heterogeneity. This method encodes structures in Fourier space using coordinate-based deep neural networks, and trains these networks from unlabeled 2D cryo-EM images by combining exact inference over image orientation with variational inference for structural heterogeneity. We demonstrate that the proposed method, termed cryoDRGN, can perform ab-initio reconstruction of 3D protein complexes from simulated and real 2D cryo-EM image data. To our knowledge, cryoDRGN is the first neural network-based approach for cryo-EM reconstruction and the first end-to-end method for directly reconstructing continuous ensembles of protein structures from cryo-EM images.""","""The paper introduces a generative approach to reconstruct 3D images for cryo-electron microscopy (cryo-EM). All reviewers really liked the paper, appreciate the challenging problem tackled and the proposed solution. Acceptance is therefore recommended. """
699,"""Weakly Supervised Disentanglement with Guarantees""","['disentanglement', 'theory of disentanglement', 'representation learning', 'generative models']","""Learning disentangled representations that correspond to factors of variation in real-world data is critical to interpretable and human-controllable machine learning. Recently, concerns about the viability of learning disentangled representations in a purely unsupervised manner has spurred a shift toward the incorporation of weak supervision. However, there is currently no formalism that identifies when and how weak supervision will guarantee disentanglement. To address this issue, we provide a theoretical framework to assist in analyzing the disentanglement guarantees (or lack thereof) conferred by weak supervision when coupled with learning algorithms based on distribution matching. We empirically verify the guarantees and limitations of several weak supervision methods (restricted labeling, match-pairing, and rank-pairing), demonstrating the predictive power and usefulness of our theoretical framework.""","""This paper first discusses some concepts related to disentanglement. The authors propose to decompose disentanglement into two distinct concepts: consistency and restrictiveness. Then, a calculus of disentanglement is introduced to reveal the relationship between restrictiveness and consistency. The proposed concepts are applied to analyze weak supervision methods. The reviewers ultimately decided this paper is well-written and has content which is of general interest to the ICLR community."""
700,"""Wasserstein Adversarial Regularization (WAR) on label noise""","['Label Noise', 'Adversarial regularization', 'Wasserstein']","""Noisy labels often occur in vision datasets, especially when they are obtained from crowdsourcing or Web scraping. We propose a new regularization method, which enables learning robust classifiers in presence of noisy data. To achieve this goal, we propose a new adversarial regularization scheme based on the Wasserstein distance. Using this distance allows taking into account specific relations between classes by leveraging the geometric properties of the labels space. Our Wasserstein Adversarial Regularization (WAR) encodes a selective regularization, which promotes smoothness of the classifier between some classes, while preserving sufficient complexity of the decision boundary between others. We first discuss how and why adversarial regularization can be used in the context of label noise and then show the effectiveness of our method on five datasets corrupted with noisy labels: in both benchmarks and real datasets, WAR outperforms the state-of-the-art competitors.""","""This article proposes a regularisation scheme to learn classifiers that take into account similarity of labels, and presents a series of experiments. The reviewers found the approach plausible, the paper well written, and the experiments sufficient. At the same time, they expressed concerns, mentioning that the technical contribution is limited (in particular, the Wasserstein distance has been used before in estimation of conditional distributions and in multi-label learning), and that it would be important to put more efforts into learning the metric. The author responses clarified a few points and agreed that learning the metric is an interesting problem. There were also concerns about the competitiveness of the approach, which were addressed in part in the authors' responses, albeit not fully convincing all of the reviewers. This article proposes an interesting technique for a relevant type of problems, and demonstrates that it can be competitive with extensive experiments. ``Although this is a reasonably good article, it is not good enough, given the very high acceptance bar for this year's ICLR. """
701,"""Local Label Propagation for Large-Scale Semi-Supervised Learning""",[],"""A significant issue in training deep neural networks to solve supervised learning tasks is the need for large numbers of labeled datapoints. The goal of semisupervised learning is to leverage ubiquitous unlabeled data, together with small quantities of labeled data, to achieve high task performance. Though substantial recent progress has been made in developing semi-supervised algorithms that are effective for comparatively small datasets, many of these techniques do not scale readily to the large (unlabeled) datasets characteristic of real-world applications. In this paper we introduce a novel approach to scalable semi-supervised learning, called Local Label Propagation (LLP). Extending ideas from recent work on unsupervised embedding learning, LLP first embeds datapoints, labeled and otherwise, in a common latent space using a deep neural network. It then propagates pseudolabels from known to unknown datapoints in a manner that depends on the local geometry of the embedding, taking into account both inter-point distance and local data density as a weighting on propagation likelihood. The parameters of the deep embedding are then trained to simultaneously maximize pseudolabel categorization performance as well as a metric of the clustering of datapoints within each psuedo-label group, iteratively alternating stages of network training and label propagation. We illustrate the utility of the LLP method on the ImageNet dataset, achieving results that outperform previous state-of-the-art scalable semi-supervised learning algorithms by large margins, consistently across a wide variety of training regimes. We also show that the feature representation learned with LLP transfers well to scene recognition in the Places 205 dataset.""","""The paper introduces an approach for semi-supervised learning based on local label propagation. While reviewers appreciate learning a consistent embedding space for prediction and label propagation, a few pointed out that this paper does not make it clear how different it is from preview work (Wu et al, Iscen et al., Zhuang et al.), in addition to complexity calculation, or pseudo-label accuracy. These are important points that werent included to the degree that reviewers/readers can understand, and reviewers seem to not change their minds after authors wrote back. This suggests the paper can use additional cycles of polishing/editing to make these points clear. We highly recommend authors to carefully reflect on reviewers both pros and cons of the paper to improve the paper for your future submission. """
702,"""Graph convolutional networks for learning with few clean and many noisy labels""",[],"""In this work we consider the problem of learning a classifier from noisy labels when a few clean labeled examples are given. The structure of clean and noisy data is modeled by a graph per class and Graph Convolutional Networks (GCN) are used to predict class relevance of noisy examples. For each class, the GCN is treated as a binary classifier learning to discriminate clean from noisy examples using a weighted binary cross-entropy loss function, and then the GCN-inferred ""clean"" probability is exploited as a relevance measure. Each noisy example is weighted by its relevance when learning a classifier for the end task. We evaluate our method on an extended version of a few-shot learning problem, where the few clean examples of novel classes are supplemented with additional noisy data. Experimental results show that our GCN-based cleaning process significantly improves the classification accuracy over not cleaning the noisy data and standard few-shot classification where only few clean examples are used. The proposed GCN-based method outperforms the transductive approach (Douze et al., 2018) that is using the same additional data without labels.""","""The paper combines graph convolutional networks with noisy label learning. The reviewers feel that novelty in the work is limited and there is a need for further experiments and extensions. """
703,"""Unsupervised Generative 3D Shape Learning from Natural Images""","['unsupervised', '3D', 'differentiable', 'rendering', 'disentangling', 'interpretable']","""In this paper we present, to the best of our knowledge, the first method to learn a generative model of 3D shapes from natural images in a fully unsupervised way. For example, we do not use any ground truth 3D or 2D annotations, stereo video, and ego-motion during the training. Our approach follows the general strategy of Generative Adversarial Networks, where an image generator network learns to create image samples that are realistic enough to fool a discriminator network into believing that they are natural images. In contrast, in our approach the image gen- eration is split into 2 stages. In the first stage a generator network outputs 3D ob- jects. In the second, a differentiable renderer produces an image of the 3D object from a random viewpoint. The key observation is that a realistic 3D object should yield a realistic rendering from any plausible viewpoint. Thus, by randomizing the choice of the viewpoint our proposed training forces the generator network to learn an interpretable 3D representation disentangled from the viewpoint. In this work, a 3D representation consists of a triangle mesh and a texture map that is used to color the triangle surface by using the UV-mapping technique. We provide analysis of our learning approach, expose its ambiguities and show how to over- come them. Experimentally, we demonstrate that our method can learn realistic 3D shapes of faces by using only the natural images of the FFHQ dataset.""","""The paper proposes a GAN approach for unsupervised learning of 3d object shapes from natural images. The key idea is a two-stage generative process where the 3d shape is first generated and then rendered to pixel-level images. While the experimental results are promising, the experimental results are mostly focused on faces (that are well aligned and share roughly similar 3d structures across the dataset). Results on other categories are preliminary and limited, so it's unclear how well the proposed method will work for more general domains. In addition, comparison to the existing baselines (e.g., HoloGAN; Pix2Scene; Rezende et al., 2016) is missing. Overall, further improvements are needed to be acceptable for ICLR. Extra note: Missing citation to a relevant work Wang and Gupta, Generative Image Modeling using Style and Structure Adversarial Networks pseudo-url"""
704,"""Unsupervised Data Augmentation for Consistency Training""","['Semi-supervised learning', 'computer vision', 'natural language processing']","""Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 2.7% with only 4,000 examples, nearly matching the performance of models trained on 50,000 labeled examples. Our method also combines well with transfer learning, e.g., when finetuning from BERT, and yields improvements in high-data regime, such as ImageNet, whether when there is only 10% labeled data or when a full labeled set with 1.3M extra unlabeled examples is used.""","""The paper shows that data augmentation methods work well for consistency training on unlabeled data in semi-supervised learning. Reviewers and AC think that the reported experimental scores are interesting/strong, but scientific reasoning for convincing why the proposed method is valuable is limited. In particular, the authors are encouraged to justify novelty and hyper-parameters used in the paper. This is because I also think that it is not too surprising that more data augmentations in supervised learning are also effective in semi-supervised learning. It can be valuable if more scientific reasoning/justification is provided. Hence, I recommend rejection."""
705,"""Causally Correct Partial Models for Reinforcement Learning""","['causality', 'model-based reinforcement learning']","""In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions. However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional (e.g. images). For this reason, previous works have considered partial models, which model only part of the observation. In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning. To address this, we introduce a general family of partial models that are provably causally correct, but avoid the need to fully model future observations.""","""The authors show that in a reinforcement learning setting, partial models can be causally incorrect, leading to improper evaluation of policies that are different from those used to collect the data for the model. They then propose a backdoor correction to this problem that allows the model to generalize properly by separating the effects of the stochasticity of the environment and the policy. The reviewers had substantial concerns about both issues of clarity and the clear, but largely undiscussed, connection to off-policy policy evaluation (OPPE). In response, the authors made a significant number of changes for the sake of clarity, as well as further explained the differences between their approach and the OPPE setting. First, OPPE is not typically model-based. Second, while an importance sampling solution would be technically possible, by re-training the model based on importance-weighted experiences, this would need to be done for every evaluation policy considered, whereas the authors' solution uses a fundamentally different approach of causal reasoning so that a causally correct model can be learned once and work for all policies. After much discussion, the reviewers could not come to a consensus about the validity of these arguments. Futhermore, there were lingering questions about writing clarity. Thus, in the future, it appears the paper could be significantly improved if the authors cite more of the off policy evaluation literature, in addition to their added textual clairifications of the relation of their work to that body of work. Overall, my recommendation at this time is to reject this paper."""
706,"""MIST: Multiple Instance Spatial Transformer Networks""",[],"""We propose a deep network that can be trained to tackle image reconstruction and classification problems that involve detection of multiple object instances, without any supervision regarding their whereabouts. The network learns to extract the most significant top-K patches, and feeds these patches to a task-specific network -- e.g., auto-encoder or classifier -- to solve a domain specific problem. The challenge in training such a network is the non-differentiable top-K selection process. To address this issue, we lift the training optimization problem by treating the result of top-K selection as a slack variable, resulting in a simple, yet effective, multi-stage training. Our method is able to learn to detect recurrent structures in the training dataset by learning to reconstruct images. It can also learn to localize structures when only knowledge on the occurrence of the object is provided, and in doing so it outperforms the state-of-the-art.""","""Two reviewers are negative on this paper while the other one is slightly positive. Overall, this paper does not make the bar of ICLR. A reject is recommended."""
707,"""From Variational to Deterministic Autoencoders""","['Unsupervised learning', 'Generative Models', 'Variational Autoencoders', 'Regularization']",""" Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models. However, learning a VAE from data poses still unanswered theoretical questions and considerable practical challenges. In this work, we propose an alternative framework for generative modeling that is simpler, easier to train, and deterministic, yet has many of the advantages of the VAE. We observe that sampling a stochastic encoder in a Gaussian VAE can be interpreted as simply injecting noise into the input of a deterministic decoder. We investigate how substituting this kind of stochasticity, with other explicit and implicit regularization schemes, can lead to an equally smooth and meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism to sample new data points, we introduce an ex-post density estimation step that can be readily applied to the proposed framework as well as existing VAEs, improving their sample quality. We show, in a rigorous empirical study, that the proposed regularized deterministic autoencoders are able to generate samples that are comparable to, or better than, those of VAEs and more powerful alternatives when applied to images as well as to structured data such as molecules. ""","""This paper proposes an extension to deterministic autoencoders, namely instead of noise injection in the encoders of VAEs to use deterministic autoencoders with an explicit regularization term on the latent representations. While the reviewers agree that the paper studies an important question for the generative modeling community, the paper has been limited in terms of theoretical analysis and experimental validation. The authors, however, provided further experimental results to support the claims empirically during the discussion period and the reviewers agree that the paper is now acceptable for publication in ICLR-2020. """
708,"""Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel""","['Uncertainty Estimation', 'Neural Networks', 'Gaussian Process']","""Neural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough: the uncertainty (i.e. risk or confidence) of that prediction must also be estimated. Standard NNs, which are most often used in such tasks, do not provide uncertainty information. Existing approaches address this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not predict as accurately as standard NNs. In this paper, a new framework (RIO) is developed that makes it possible to estimate uncertainty in any pretrained standard NN. The behavior of the NN is captured by modeling its prediction residuals with a Gaussian Process, whose kernel includes both the NN's input and its output. The framework is justified theoretically and evaluated in twelve real-world datasets, where it is found to (1) provide reliable estimates of uncertainty, (2) reduce the error of the point predictions, and (3) scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient for building real-world NN applications.""","""This paper presents a method to model uncertainty in deep learning regressors by applying a post-hoc procedure. Specifically, the authors model the residuals of neural networks using Gaussian processes, which provide a principled Bayesian estimate of uncertainty. The reviewers were initially mixed and a fourth reviewer was brought in for an additional perspective. The reviewers found that the paper was well written, well motivated and found the methodology sensible and experiments compelling. AnonReviewer4 raised issues with the theoretical exposition of the paper (going so far as to suggest that moving the theory into the supplementary and using the reclaimed space for additional clarifications would make the paper stronger). The reviewers found the author response compelling and as a result the reviewers have come to a consensus to accept. Thus the recommendation is to accept the paper. Please do take the reviewer feedback into account in preparing the camera ready version. In particular, please do address the remaining concerns from AnonReviewer4 regarding the theoretical portion of the paper. It seems that the methodological and empirical portions of the paper are strong enough to stand on their own (and therefore the recommendation for an accept). Adding theory just for the sake of having theory seems to detract from the message (particularly if it is irrelevant or incorrect as initially pointed out by the reviewer)."""
709,"""Data-Independent Neural Pruning via Coresets""","['coresets', 'neural pruning', 'network compression']","""Previous work showed empirically that large neural networks can be significantly reduced in size while preserving their accuracy. Model compression became a central research topic, as it is crucial for deployment of neural networks on devices with limited computational and memory resources. The majority of the compression methods are based on heuristics and offer no worst-case guarantees on the trade-off between the compression rate and the approximation error for an arbitrarily new sample. We propose the first efficient, data-independent neural pruning algorithm with a provable trade-off between its compression rate and the approximation error for any future test sample. Our method is based on the coreset framework, which finds a small weighted subset of points that provably approximates the original inputs. Specifically, we approximate the output of a layer of neurons by a coreset of neurons in the previous layer and discard the rest. We apply this framework in a layer-by-layer fashion from the top to the bottom. Unlike previous works, our coreset is data independent, meaning that it provably guarantees the accuracy of the function for any input \mathbb{R}^d$, including an adversarial one. We demonstrate the effectiveness of our method on popular network architectures. In particular, our coresets yield 90% compression of the LeNet-300-100 architecture on MNIST while improving the accuracy.""","""The rebuttal period influenced R1 to raise their rating of the paper. The most negative reviewer did not respond to the author response. This work proposes an interesting approach that will be of interest to the community. The AC recommends acceptance."""
710,"""Exploring Cellular Protein Localization Through Semantic Image Synthesis""","['Computational biology', 'image synthesis', 'GANs', 'exploring multiplex images', 'attention', 'interpretability']","""Cell-cell interactions have an integral role in tumorigenesis as they are critical in governing immune responses. As such, investigating specific cell-cell interactions has the potential to not only expand upon the understanding of tumorigenesis, but also guide clinical management of patient responses to cancer immunotherapies. A recent imaging technique for exploring cell-cell interactions, multiplexed ion beam imaging by time-of-flight (MIBI-TOF), allows for cells to be quantified in 36 different protein markers at sub-cellular resolutions in situ as high resolution multiplexed images. To explore the MIBI images, we propose a GAN for multiplexed data with protein specific attention. By conditioning image generation on cell types, sizes, and neighborhoods through semantic segmentation maps, we are able to observe how these factors affect cell-cell interactions simultaneously in different protein channels. Furthermore, we design a set of metrics and offer the first insights towards cell spatial orientations, cell protein expressions, and cell neighborhoods. Our model, cell-cell interaction GAN (CCIGAN), outperforms or matches existing image synthesis methods on all conventional measures and significantly outperforms on biologically motivated metrics. To our knowledge, we are the first to systematically model multiple cellular protein behaviors and interactions under simulated conditions through image synthesis.""","""This paper proposes a dedicated deep models for analysis of multiplexed ion beam imaging by time-of-flight (MIBI-TOF). The reviewers appreciated the contributions of the paper but not quite enough to make the cut. Rejection is recommended. """
711,"""Collaborative Filtering With A Synthetic Feedback Loop""",[],"""We propose a novel learning framework for recommendation systems, assisting collaborative filtering with a synthetic feedback loop. The proposed framework consists of a ``recommender'' and a ``virtual user.'' The recommender is formulizd as a collaborative-filtering method, recommending items according to observed user behavior. The virtual user estimates rewards from the recommended items and generates the influence of the rewards on observed user behavior. The recommender connected with the virtual user constructs a closed loop, that recommends users with items and imitates the unobserved feedback of the users to the recommended items. The synthetic feedback is used to augment observed user behavior and improve recommendation results. Such a model can be interpreted as the inverse reinforcement learning, which can be learned effectively via rollout (simulation). Experimental results show that the proposed framework is able to boost the performance of existing collaborative filtering methods on multiple datasets. ""","""The paper proposes to learn a ""virtual user"" while learning a ""recommender"" model, to improve the performance of the recommender system. A reinforcement learning algorithm is used for address the problem the authors defined. Multiple reviewers raised several concerns regarding its technical details including the feedback signal F, but the authors have not responded to any of the concerns raised by the reviewers. The lack of authors involvement in the discussion suggest that this paper is not at the stage to be published."""
712,"""On the Reflection of Sensitivity in the Generalization Error""","['Generalization Error', 'Sensitivity Analysis', 'Deep Neural Networks', 'Bias-variance Decomposition']","""Even though recent works have brought some insight into the performance improvement of techniques used in state-of-the-art deep-learning models, more work is needed to understand the generalization properties of over-parameterized deep neural networks. We shed light on this matter by linking the loss function to the outputs sensitivity to its input. We find a rather strong empirical relation between the output sensitivity and the variance in the bias-variance decomposition of the loss function, which hints on using sensitivity as a metric for comparing generalization performance of networks, without requiring labeled data. We find that sensitivity is decreased by applying popular methods which improve the generalization performance of the model, such as (1) using a deep network rather than a wide one, (2) adding convolutional layers to baseline classifiers instead of adding fully connected layers, (3) using batch normalization, dropout and max-pooling, and (4) applying parameter initialization techniques.""","""The paper proposes a definition of the sensitivity of the output to random perturbations of the input and its link to generalization. While both reviewers appreciated the timeliness of this research, they were taken aback by the striking similarity with the work of Novak et al. I encourage the authors to resubmit to a later conference with a lengthier analysis of the differences between the two frameworks, as they started to do in their rebuttal."""
713,"""On the Unintended Social Bias of Training Language Generation Models with News Articles""","['Fair AI', 'latent representations', 'sequence to sequence']","""There are concerns that neural language models may preserve some of the stereotypes of the underlying societies that generate the large corpora needed to train these models. For example, gender bias is a significant problem when generating text, and its unintended memorization could impact the user experience of many applications (e.g., the smart-compose feature in Gmail). In this paper, we introduce a novel architecture that decouples the representation learning of a neural model from its memory management role. This architecture allows us to update a memory module with an equal ratio across gender types addressing biased correlations directly in the latent space. We experimentally show that our approach can mitigate the gender bias amplification in the automatic generation of articles news while providing similar perplexity values when extending the Sequence2Sequence architecture.""","""The reviewers had a hard time fully identifying the intended contribution behind this paper, and raised concerns that suggest that the experimental results are not sufficient to justify any substantial contribution with the level of certainty that would warrant publication at a top venue. The authors have not responded, and the concerns are serious, so I have no choice but to reject this paper despite its potentially valuable topic."""
714,"""Quantum Expectation-Maximization for Gaussian Mixture Models""","['Quantum', 'ExpectationMaximization', 'Unsupervised', 'QRAM']","""The Expectation-Maximization (EM) algorithm is a fundamental tool in unsupervised machine learning. It is often used as an efficient way to solve Maximum Likelihood (ML) and Maximum A Posteriori estimation problems, especially for models with latent variables. It is also the algorithm of choice to fit mixture models: generative models that represent unlabelled points originating from pseudo-formula different processes, as samples from pseudo-formula multivariate distributions. In this work we define and use a quantum version of EM to fit a Gaussian Mixture Model. Given quantum access to a dataset of pseudo-formula vectors of dimension pseudo-formula , our algorithm has convergence and precision guarantees similar to the classical algorithm, but the runtime is only polylogarithmic in the number of elements in the training set, and is polynomial in other parameters - as the dimension of the feature space, and the number of components in the mixture. We generalize further the algorithm by fitting any mixture model of base distributions in the exponential family. We discuss the performance of the algorithm on datasets that are expected to be classified successfully by those algorithms, arguing that on those cases we can give strong guarantees on the runtime.""","""The reviewers were unanimous that this submission is not ready for publication at ICLR in its current form. Concerns raised include a significant lack of clarity, and the paper not being self-contained."""
715,"""Semantically-Guided Representation Learning for Self-Supervised Monocular Depth""","['computer vision', 'machine learning', 'deep learning', 'monocular depth estimation', 'self-supervised learning']","""Self-supervised learning is showing great promise for monocular depth estimation, using geometry as the only source of supervision. Depth networks are indeed capable of learning representations that relate visual appearance to 3D properties by implicitly leveraging category-level patterns. In this work we investigate how to leverage more directly this semantic structure to guide geometric representation learning, while remaining in the self-supervised regime. Instead of using semantic labels and proxy losses in a multi-task approach, we propose a new architecture leveraging fixed pretrained semantic segmentation networks to guide self-supervised representation learning via pixel-adaptive convolutions. Furthermore, we propose a two-stage training process to overcome a common semantic bias on dynamic objects via resampling. Our method improves upon the state of the art for self-supervised monocular depth prediction over all pixels, fine-grained details, and per semantic categories. ""","""The paper proposes a using pixel-adaptive convolutions to leverage semantic labels in self-supervised monocular depth estimation. Although there were initial concerns of the reviewers regarding the technical details and limited experiments, the authors responded reasonably to the issues raised by the reviewers. Reviewer2, who gave a weak reject rating, did not provide any answer to the authors comments. We do not see any major flaws to reject this paper."""
716,"""Stablizing Adversarial Invariance Induction by Discriminator Matching""","['invariance induction', 'adversarial training', 'domain generalization']","""Incorporating the desired invariance into representation learning is a key challenge in many situations, e.g., for domain generalization and privacy/fairness constraints. An adversarial invariance induction (AII) shows its power on this purpose, which maximizes the proxy of the conditional entropy between representations and attributes by adversarial training between an attribute discriminator and feature extractor. However, the practical behavior of AII is still unclear as the previous analysis assumes the optimality of the attribute classifier, which is rarely held in practice. This paper first analyzes the practical behavior of AII both theoretically and empirically, indicating that AII has theoretical difficulty as it maximizes variational {\em upper} bound of the actual conditional entropy, and AII catastrophically fails to induce invariance even in simple cases as suggested by the above theoretical findings. We then argue that a simple modification to AII can significantly stabilize the adversarial induction framework and achieve better invariant representations. Our modification is based on the property of conditional entropy; it is maximized if and only if the divergence between all pairs of marginal distributions over pseudo-formula between different attributes is minimized. The proposed method, {\em invariance induction by discriminator matching}, modify AII objective to explicitly consider the divergence minimization requirements by defining a proxy of the divergence by using the attribute discriminator. Empirical validations on both the toy dataset and four real-world datasets (related to applications of user anonymization and domain generalization) reveal that the proposed method provides superior performance when inducing invariance for nuisance factors. ""","""The paper proposes a modification to improve adversarial invariance induction for learning representations under invariance constraints. The authors provide both a formal analysis and experimental evaluation of the method. The reviewers generally agree that the experimental evaluation is rigorous and above average, but the paper lacks clarity making it difficult to judge the significance of it. Therefore, I recommend rejection, but encourage the authors to improve the presentation and resubmit."""
717,"""Stabilizing DARTS with Amended Gradient Estimation on Architectural Parameters""","['Neural Architecture Search', 'DARTS', 'Stability']","""Differentiable neural architecture search has been a popular methodology of exploring architectures for deep learning. Despite the great advantage of search efficiency, it often suffers weak stability, which obstacles it from being applied to a large search space or being flexibly adjusted to different scenarios. This paper investigates DARTS, the currently most popular differentiable search algorithm, and points out an important factor of instability, which lies in its approximation on the gradients of architectural parameters. In the current status, the optimization algorithm can converge to another point which results in dramatic inaccuracy in the re-training process. Based on this analysis, we propose an amending term for computing architectural gradients by making use of a direct property of the optimality of network parameter optimization. Our approach mathematically guarantees that gradient estimation follows a roughly correct direction, which leads the search stage to converge on reasonable architectures. In practice, our algorithm is easily implemented and added to DARTS-based approaches efficiently. Experiments on CIFAR and ImageNet demonstrate that our approach enjoys accuracy gain and, more importantly, enables DARTS-based approaches to explore much larger search spaces that have not been studied before.""","""This paper studies Differentiable Neural Architecture Search, focusing on a problem identified with the approximated gradient with respect to architectural parameters, and proposing an improved gradient estimation procedure. The authors claim that this alleviates the tendency of DARTS to collapse on degenerate architectures consisting of e.g. all skip connections, presently dealt with via early stopping. Reviewers generally liked the theoretical contribution, but found the evidence insufficient to support the claims. Requests for experiments by R1 with matched hyperparameters were granted (and several reviewers felt this strengthened the submission), though relegated to an appendix, but after a lengthy discussion reviewers still felt the evidence was insufficient. R1 also contended that the authors were overly dogmatic regarding ""AutoML"" -- that the early stopping heuristic was undesirable because of the additional human knowledge involved. I appreciate the sentiment but find this argument unconvincing -- while it is true that a great deal of human knowledge is still necessary to make architecture search work, the aim is certainly to develop fool-proof automatic methods. As reviewers were still unsatisfied with the empirical investigation after revisions and found that the weight of the contribution was insufficient for a 10 page paper, I recommend rejection at this time, while encouraging the authors to take seriously the reviewers' requests for a systematic study of the source of the empirical gains in order to strengthen their paper for future submission."""
718,"""Improving SAT Solver Heuristics with Graph Networks and Reinforcement Learning""","['SAT', 'reinforcement learning', 'graph neural networks', 'heuristics', 'DQN', 'boolean satisfiability']","""We present GQSAT, a branching heuristic in a Boolean SAT solver trained with value-based reinforcement learning (RL) using Graph Neural Networks for function approximation. Solvers using GQSAT are complete SAT solvers that either provide a satisfying assignment or a proof of unsatisfiability, which is required for many SAT applications. The branching heuristic commonly used in SAT solvers today suffers from bad decisions during their warm-up period, whereas GQSAT has been trained to examine the structure of the particular problem instance to make better decisions at the beginning of the search. Training GQSAT is data efficient and does not require elaborate dataset preparation or feature engineering to train. We train GQSAT on small SAT problems using RL interfacing with an existing SAT solver. We show that GQSAT is able to reduce the number of iterations required to solve SAT problems by 2-3X, and it generalizes to unsatisfiable SAT instances, as well as to problems with 5X more variables than it was trained on. We also show that, to a lesser extent, it generalizes to SAT problems from different domains by evaluating it on graph coloring. Our experiments show that augmenting SAT solvers with agents trained with RL and graph neural networks can improve performance on the SAT search problem.""","""SAT is NP-complete (Karp, 1972) due its intractable exhaustive search. As such, heuristics are commonly used to reduce the search space. While usually these heuristics rely on some in-domain expert knowledge, the authors propose a generic method that uses RL to learn a branching heuristic. The policy is parametrized by a GNN, and at each step selects a variable to expand and the process repeats until either a satisfying assignment has been found or the problem has been proved unsatisfiable. The main result of this is that the proposed heuristic results in fewer steps than VSIDS, a commonly used heuristic. All reviewers agreed that this is an interesting and well-presented submission. However, both R1 and R2 (rightly according to my judgment) point that at the moment the paper seems to be conducting an evaluation that is not entirely fair. Specifically, VSIDS has been implemented within a framework optimized for running time rather than number of iterations, whereas the proposed heuristic is doing the opposite. Moreover, the proposed heuristic is not stressed-test against larger datasets. So, the authors take a heuristic/framework that has been optimized to operate specifically well on large datasets (where running time is what ultimately makes the difference) scale it down to a smaller dataset and evaluate it on a metric that the proposed algorithm is optimized for. At the same time, they do not consider evaluation in larger datasets and defer all concerns about scalability to the one of industrial use vs answering ML questions related to whether or not it is possible to stretch existing RL techniques to learn a branching heuristic. This is a valid point and not all techniques need to be super scalable from iteration day 0, but this being ML, we need to make sure that our evaluation criteria are fair and that we are comparing apples to apples in testing hypotheses. 
As such, I do not feel comfortable suggesting acceptance of this submission, but I do sincerely hope the authors will take the reviewers' feedback and improve the evaluation protocols of their manuscript, resulting in a stronger future submission."""
719,"""Non-Sequential Melody Generation""","['melody generation', 'DCGAN', 'dilated convolutions']","""In this paper we present a method for algorithmic melody generation using a generative adversarial network without recurrent components. Music generation has been successfully done using recurrent neural networks, where the model learns sequence information that can help create authentic sounding melodies. Here, we use DCGAN architecture with dilated convolutions and towers to capture sequential information as spatial image information, and learn long-range dependencies in fixed-length melody forms such as Irish traditional reel. ""","""All the reviewers pointed out issues with the experiments, which the rebuttal did not address. The paper seems interesting, and the authors are encouraged to improve it."""
720,"""Compositional Transfer in Hierarchical Reinforcement Learning""","['Multitask', 'Transfer Learning', 'Reinforcement Learning', 'Hierarchical Reinforcement Learning', 'Compositional', 'Off-Policy']","""The successful application of flexible, general learning algorithms to real-world robotics applications is often limited by their poor data-efficiency. To address the challenge, domains with more than one dominant task of interest encourage the sharing of information across tasks to limit required experiment time. To this end, we investigate compositional inductive biases in the form of hierarchical policies as a mechanism for knowledge transfer across tasks in reinforcement learning (RL). We demonstrate that this type of hierarchy enables positive transfer while mitigating negative interference. Furthermore, we demonstrate the benefits of additional incentives to efficiently decompose task solutions. Our experiments show that these incentives are naturally given in multitask learning and can be easily introduced for single objectives. We design an RL algorithm that enables stable and fast learning of structured policies and the effective reuse of both behavior components and transition data across tasks in an off-policy setting. Finally, we evaluate our algorithm in simulated environments as well as physical robot experiments and demonstrate substantial improvements in data data-efficiency over competitive baselines.""","""This paper is concerned with improving data-efficiency in multitask reinforcement learning problems. This is achieved by taking a hierarchical approach, and learning commonalities across tasks for reuse. The authors present an off-policy actor-critic algorithm to learn and reuse these hierarchical policies. This is an interesting and promising paper, particularly with the ability to work with robots. The reviewers did however note issues with the novelty and making the contributions clear. Additionally, it was felt that the results proved the benefits of hierarchy rather than this approach, and that further comparisons to other approaches are required. As such, this paper is a weak reject at this point."""
721,"""Surrogate-Based Constrained Langevin Sampling With Applications to Optimal Material Configuration Design""","['Black-box Constrained Langevin sampling', 'surrogate methods', 'projected and proximal methods', 'approximation theory of gradients', 'nano-porous material configuration design']","""We consider the problem of generating configurations that satisfy physical constraints for optimal material nano-pattern design, where multiple (and often conflicting) properties need to be simultaneously satisfied. Consider, for example, the trade-off between thermal resistance, electrical conductivity, and mechanical stability needed to design a nano-porous template with optimal thermoelectric efficiency. To that end, we leverage the posterior regularization framework andshow that this constraint satisfaction problem can be formulated as sampling froma Gibbs distribution. The main challenges come from the black-box nature ofthose physical constraints, since they are obtained via solving highly non-linearPDEs. To overcome those difficulties, we introduce Surrogate-based Constrained Langevin dynamics for black-box sampling. We explore two surrogate approaches. The first approach exploits zero-order approximation of gradients in the Langevin Sampling and we refer to it as Zero-Order Langevin. In practice, this approach can be prohibitive since we still need to often query the expensive PDE solvers. The second approach approximates the gradients in the Langevin dynamics with deep neural networks, allowing us an efficient sampling strategy using the surrogate model. We prove the convergence of those two approaches when the target distribution is log-concave and smooth. We show the effectiveness of both approaches in designing optimal nano-porous material configurations, where the goal is to produce nano-pattern templates with low thermal conductivity and reasonable mechanical stability.""","""The paper is not overly well written and motivated. A guiding thread through the paper is often missing. Comparisons with constrained BO methods would have improved the paper as well as a more explicit link to multi-objective BO. It could have been interesting to evaluate the sensitivity w.r.t. the number of samples in the Monte Carlo estimate. What happens if the observations of the function are noisy? Is there a natural way to deal with this? Given that the paper is 10+ pages long, we expect a higher quality than an 8-pages paper (reviewing and submission guidelines). """
722,"""Meta-RCNN: Meta Learning for Few-Shot Object Detection""","['Few-shot detection', 'Meta-Learning', 'Object Detection']","""Despite significant advances in object detection in recent years, training effective detectors in a small data regime remains an open challenge. Labelling training data for object detection is extremely expensive, and there is a need to develop techniques that can generalize well from small amounts of labelled data. We investigate this problem of few-shot object detection, where a detector has access to only limited amounts of annotated data. Based on the recently evolving meta-learning principle, we propose a novel meta-learning framework for object detection named ``Meta-RCNN"", which learns the ability to perform few-shot detection via meta-learning. Specifically, Meta-RCNN learns an object detector in an episodic learning paradigm on the (meta) training data. This learning scheme helps acquire a prior which enables Meta-RCNN to do few-shot detection on novel tasks. Built on top of the Faster RCNN model, in Meta-RCNN, both the Region Proposal Network (RPN) and the object classification branch are meta-learned. The meta-trained RPN learns to provide class-specific proposals, while the object classifier learns to do few-shot classification. The novel loss objectives and learning strategy of Meta-RCNN can be trained in an end-to-end manner. We demonstrate the effectiveness of Meta-RCNN in addressing few-shot detection on Pascal VOC dataset and achieve promising results. ""","""This paper develops a meta-learning approach for few-shot object detection. This paper is borderline and the reviewers are split. The problem is important, albeit somewhat specific to computer vision applications. The main concerns were that it was lacking a head-to-head comparison to RepMet and that it was missing important details (e.g. the image resolution was not clarified, nor was the paper updated to include the details). The authors suggested that the RepMet code was not available, but I was able to find the official code for RepMet via a simple Google search: pseudo-url Reviewers also brought up concerns about an ICCV 2019 paper, though this should be considered as concurrent work, as it was not publicly available at the time of submission. Overall, I think the paper is borderline. Given that many meta-learning papers compare on rather synthetic benchmarks, the study of a more realistic problem setting is refreshing. That said, it's unclear if the insights from this paper would transfer to other machine learning problem settings of interest to the ICLR community. With all of this in mind, the paper is slightly below the bar for acceptance at ICLR."""
723,"""Gauge Equivariant Spherical CNNs""","['deep learning', 'convolutional networks', 'equivariance', 'gauge equivariance', 'symmetry', 'geometric deep learning', 'manifold convolution']","""Spherical CNNs are convolutional neural networks that can process signals on the sphere, such as global climate and weather patterns or omnidirectional images. Over the last few years, a number of spherical convolution methods have been proposed, based on generalized spherical FFTs, graph convolutions, and other ideas. However, none of these methods is simultaneously equivariant to 3D rotations, able to detect anisotropic patterns, computationally efficient, agnostic to the type of sample grid used, and able to deal with signals defined on only a part of the sphere. To address these limitations, we introduce the Gauge Equivariant Spherical CNN. Our method is based on the recently proposed theory of Gauge Equivariant CNNs, which is in principle applicable to signals on any manifold, and which can be computed on any set of local charts covering all of the manifold or only part of it. In this paper we show how this method can be implemented efficiently for the sphere, and show that the resulting method is fast, numerically accurate, and achieves good results on the widely used benchmark problems of climate pattern segmentation and omnidirectional semantic segmentation.""","""The paper extends Gauge invariant CNNs to Gauge invariant spherical CNNs. The authors significantly improved both theory and experiments during the rebuttal and the paper is well presented. However, the topic is somewhat niche, and the bar for ICLR this year was very high, so unfortunately this paper did not make it. We encourage the authors to resubmit the work including the new results obtained during the rebuttal period."""
724,"""An Information Theoretic Approach to Distributed Representation Learning""","['Information Bottleneck', 'Distributed Learning']","""The problem of distributed representation learning is one in which multiple sources of information X1,...,XK are processed separately so as to extract useful information about some statistically correlated ground truth Y. We investigate this problem from information-theoretic grounds. For both discrete memoryless (DM) and memoryless vector Gaussian models, we establish fundamental limits of learning in terms of optimal tradeoffs between accuracy and complexity. We also develop a variational bound on the optimal tradeoff that generalizes the evidence lower bound (ELBO) to the distributed setting. Furthermore, we provide a variational inference type algorithm that allows to compute this bound and in which the mappings are parametrized by neural networks and the bound approximated by Markov sampling and optimized with stochastic gradient descent. Experimental results on synthetic and real datasets are provided to support the efficiency of the approaches and algorithms which we develop in this paper.""","""The authors study generalization in distributed representation learning by describing limits in accuracy and complexity which stem from information theory. The paper has been controversial, but ultimately the reviewers who provided higher scores presented weaker and fewer arguments. By recruiting an additional reviewer it became clearer that, overall the paper needs a little more work to reach ICLR standards. The main suggestions for improvements have to do with improving clarity in a way that makes the motivation convincing and the practicality more obvious. Boosting the experimental results is a complemental way of increasing convincingness, as argued by reviewers. """
725,"""Progressive Compressed Records: Taking a Byte Out of Deep Learning Data""","['Deep Learning', 'Storage', 'Bandwidth', 'Compression']","""Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by leveraging progressive compression to split each training example into multiple examples of increasingly higher fidelity, without adding to the total data size. Training examples of similar fidelity are grouped together, which reduces both the system overhead and data bandwidth needed to train a model. We show that models can be trained on aggressively compressed representations of the training data and still retain high accuracy, and that PCRs can enable a 2x speedup on average over baseline formats using JPEG compression. Our results hold across deep learning architectures for a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ.""","""Main content: Introduces Progressive Compressed Records (PCR), a new storage format for image datasets for machine learning training. Discussion: reviewer 4: Interesting application of progressive compression to reduce the disk I/O overhead. Main concern is paper could be clearer about setting. reviewer 5: (not knowledgable about area): well-written paper. concern is that related work could be better, including state of the art on the topic. reviewer 2: likes the topic but discusses many areas for improvement (stronger exeriments, better metrics reported, etc.). this is probably the most experienced reviewer marking reject. reviewer 3: paper is well written. Main issue is that exeriments are limited to image classification tasks, and it snot clear how the method works on larger scale. Recommendation: interesting idea but experiments could be stronger. I lean to Reject."""
726,"""Certifying Neural Network Audio Classifiers""","['Adversarial Examples', 'Audio Classifier', 'Speech Recognition', 'Certified Robustness', 'Deep Learning']","""We present the first end-to-end verifier of audio classifiers. Compared to existing methods, our approach enables analysis of both, the entire audio processing stage as well as recurrent neural network architectures (e.g., LSTM). The audio processing is verified using novel convex relaxations tailored to feature extraction operations used in audio (e.g., Fast Fourier Transform) while recurrent architectures are certified via a novel binary relaxation for the recurrent unit update. We show the verifier scales to large networks while computing significantly tighter bounds than existing methods for common audio classification benchmarks: on the challenging Google Speech Commands dataset we certify 95% more inputs than the interval approximation (only prior scalable method), for a perturbation of -90dB.""","""The paper developed log abstract transformer, square abstract transformer and sigmoid-tanh abstract transformer to certifiy robustness of neural network models for audio. The work is interesting but the scope is limited. It presented a neural network certification methods for one particular type of audio classifiers that use MFCC as input features and LSTM as the neural network layers. This thus may have limited interest to the general readers. The paper targets to present an end-to-end solution to audio classifiers. Investigation on one particular type of audio classifier is far from sufficient. As the reviewers pointed out, there're large literature of work using raw waveform inputs systems. Also there're many state-of-the-art systems are HMM/DNN and attnetion based encoder-decoder models. In terms of neural network models, resent based models, transformer models etc are also important. A more thorough investigation/comparison would greatly enlarge the scope of this paper. """
727,"""Point Process Flows""","['Temporal Point Process', 'Intensity-free Point Process']","""Event sequences can be modeled by temporal point processes (TPPs) to capture their asynchronous and probabilistic nature. We propose an intensity-free framework that directly models the point process as a non-parametric distribution by utilizing normalizing flows. This approach is capable of capturing highly complex temporal distributions and does not rely on restrictive parametric forms. Comparisons with state-of-the-art baseline models on both synthetic and challenging real-life datasets show that the proposed framework is effective at modeling the stochasticity of discrete event sequences. ""","""The paper proposed to use normalizing flow to model point processes. However, the reviews find that the paper is incremental. There have been several works using deep generative models to temporal data, and the proposed method is a simple combination of well-established existing works without problem-specific adaptation. """
728,"""Learning by shaking: Computing policy gradients by physical forward-propagation""","['Reinforcement Learning', 'Control Theory']","""Model-free and model-based reinforcement learning are two ends of a spectrum. Learning a good policy without a dynamic model can be prohibitively expensive. Learning the dynamic model of a system can reduce the cost of learning the policy, but it can also introduce bias if it is not accurate. We propose a middle ground where instead of the transition model, the sensitivity of the trajectories with respect to the perturbation (shaking) of the parameters is learned. This allows us to predict the local behavior of the physical system around a set of nominal policies without knowing the actual model. We assay our method on a custom-built physical robot in extensive experiments and show the feasibility of the approach in practice. We investigate potential challenges when applying our method to physical systems and propose solutions to each of them.""","""While the reviewers generally appreciated the idea behind the method in the paper, there was considerable concern about the experimental evaluation, which did not provide a convincing demonstration that the method works in interesting and relevant problem settings, and did not compare adequately to alternative approach. As such, I believe this paper is not quite ready for publication in its current form."""
729,"""Efficient Training of Robust and Verifiable Neural Networks""",[],"""Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees. We propose that many common certified defenses can be viewed under a unified framework of regularization. This unified framework provides a technique for comparing different certified defenses with respect to robust generalization. In addition, we develop a new regularizer that is both more efficient than existing certified defenses and can be used to train networks with higher certified accuracy. Our regularizer also extends to an L0 threat model and ensemble models. Through experiments on MNIST, CIFAR-10 and GTSRB, we demonstrate improvements in training speed and certified accuracy compared to state-of-the-art certified defenses.""","""This paper studies the problem of certified robustness to adversarial examples. It first demonstrates that many existing certified defenses can be viewed under a unified framework of regularization. Then, it proposes a new double margin-based regularizer to obtain better certified robustness. Overall, it has major technical issues and the rebuttal is not satisfying."""
730,"""The Sooner The Better: Investigating Structure of Early Winning Lottery Tickets""","['pruning', 'lottery ticket hypothesis', 'deep neural network', 'compression', 'image classification']","""The recent success of the lottery ticket hypothesis by Frankle & Carbin (2018) suggests that small, sparsified neural networks can be trained as long as the network is initialized properly. Several follow-up discussions on the initialization of the sparsified model have discovered interesting characteristics such as the necessity of rewinding (Frankle et al. (2019)), importance of sign of the initial weights (Zhou et al. (2019)), and the transferability of the winning lottery tickets (S. Morcos et al. (2019)). In contrast, another essential aspect of the winning ticket, the structure of the sparsified model, has been little discussed. To find the lottery ticket, unfortunately, all the prior work still relies on computationally expensive iterative pruning. In this work, we conduct an in-depth investigation of the structure of winning lottery tickets. Interestingly, we discover that there exist many lottery tickets that can achieve equally good accuracy much before the regular training schedule even finishes. We provide insights into the structure of these early winning tickets with supporting evidence. 1) Under stochastic gradient descent optimization, lottery ticket emerges when weight magnitude of a model saturates; 2) Pruning before the saturation of a model causes the loss of capability in learning complex patterns, resulting in the accuracy degradation. We employ the memorization capacity analysis to quantitatively confirm it, and further explain why gradual pruning can achieve better accuracy over the one-shot pruning. Based on these insights, we discover the early winning tickets for various ResNet architectures on both CIFAR10 and ImageNet, achieving state-of-the-art accuracy at a high pruning rate without expensive iterative pruning. In the case of ResNet50 on ImageNet, this comes to the winning ticket of 75:02% Top-1 accuracy at 80% pruning rate in only 22% of the total epochs for iterative pruning.""","""This paper does extensive experiments to understand the lottery ticket hypothesis. The lottery ticket hypothesis is that there exist sparse sub-networks inside dense large models that achieve as good accuracy as the original model. The reviewers have issues with the novelty and significance of these experiments. They felt that it didn't shed new scientific light. They felt that epochs needed to do early detection was still expensive. I recommend doing further studies and submitting it to another venue."""
731,"""CONFEDERATED MACHINE LEARNING ON HORIZONTALLY AND VERTICALLY SEPARATED MEDICAL DATA FOR LARGE-SCALE HEALTH SYSTEM INTELLIGENCE""","['Confederated learning', 'siloed medical data', 'representation joining']","""A patients health information is generally fragmented across silos. Though it is technically feasible to unite data for analysis in a manner that underpins a rapid learning healthcare system, privacy concerns and regulatory barriers limit data centralization. Machine learning can be conducted in a federated manner on patient datasets with the same set of variables, but separated across sites of care. But federated learning cannot handle the situation where different data types for a given patient are separated vertically across different organizations. We call methods that enable machine learning model training on data separated by two or more degrees confederated machine learning. We built and evaluated a confederated machine learning model to stratify the risk of accidental falls among the elderly.""","""This manuscript proposes a strategy for fitting predictive models on data separated across nodes, with respect to both samples and features. The reviewers and AC agree that the problem studied is timely and interesting, and were impressed by the size and scope of the evaluation dataset (particularly for a medical application). However, reviewers were unconvinced about the novelty and clarity of the conceptual and empirical results. On the conceptual end, the AC also suggests that the authors look into closely related work on split learning (pseudo-url) which has also been applied to medical data settings."""
732,"""Implicit competitive regularization in GANs""","['GAN', 'competitive optimization', 'game theory']","""Generative adversarial networks (GANs) are capable of producing high quality samples, but they suffer from numerous issues such as instability and mode collapse during training. To combat this, we propose to model the generator and discriminator as agents acting under local information, uncertainty, and awareness of their opponent. By doing so we achieve stable convergence, even when the underlying game has no Nash equilibria. We call this mechanism \emph{implicit competitive regularization} (ICR) and show that it is present in the recently proposed \emph{competitive gradient descent} (CGD). When comparing CGD to Adam using a variety of loss functions and regularizers on CIFAR10, CGD shows a much more consistent performance, which we attribute to ICR. In our experiments, we achieve the highest inception score when using the WGAN loss (without gradient penalty or weight clipping) together with CGD. This can be interpreted as minimizing a form of integral probability metric based on ICR.""","""The paper proposes to study ""implicit competitive regularization"", a phenomenon borne of taking a more nuanced game theoretic perspective on GAN training, wherein the two competing networks are ""model[ed] ... as agents acting with limited information and in awareness of their opponent"". The meaning of this is developed through a series of examples using simpler games and didactic experiments on actual GANs. An adversary-aware variant employing a Taylor approximation to the loss. Reviewer assessment amounted to 3 relatively light reviews, two of which reported little background in the area, and one more in-depth review, which happened to also be the most critical. R1, R2, R3 all felt the contribution was interesting and valuable. R1 felt the contribution of the paper may be on the light side given the original competitive gradient descent paper, on which this manuscript leans heavily, included GAN training (the authors disagreed); they also felt the paper would be stronger with additional datasets in the empirical evaluation (this was not addressed). R2 felt the work suffered for lack of evidence of consistency via repeated experiments, which the authors explained was due to the resource-intensity of the experiments. R5 raised that Inception scores for both the method and being noticeably worse than those reported in the literature, a concern that was resolved in an update and seemed to center on the software implementation of the metric. R5 had several technical concerns, but was generally unhappy with the presentation and finishedness of the manuscript, in particular the degree to which details are deferred to the CGD paper. (The authors maintain that CGD is but one instantiation of a more general framework, but given that the empirical section of the paper relies on this instantiation I would concur that it is under-treated.) Minor updates were made to the paper, but R5 remains unconvinced (other reviewers did not revisit their reviews at all). In particular: experiments seem promising but not final (repeatability is a concern), the single paragraph ""intuitive explanation"" and cartoon offered in Figure 3 were viewed as insufficiently rigorous. A great deal of the paper is spent on simple cases, but not much is said about ICR specifically in those cases. This appears to have the makings of an important contribution, but I concur with R5 that it is not quite ready for mass consumption. 
As is, the narrative is locally consistent but quite difficult to follow section after section. It should also be noted that ICLR as a venue has a community that is not as steeped in the game theory literature as the authors clearly are, and the assumed technical background is quite substantial here. For a game theory novice, it is difficult to tell which turns of phrase refer to concepts from game theory and which may be more informally introduced herein. I believe the paper requires redrafting for greater clarity, with a more rigorous theoretical and/or empirical characterization of ICR, perhaps involving small-scale experiments which clearly demonstrate the effect. I also believe the authors have done themselves a disservice by not availing themselves of 10 pages rather than 8. I recommend rejection at this time, but hope that the authors view this feedback as valuable and continue to improve their manuscript, as I (and the reviewers) believe this line of work has the potential to be quite impactful."""
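Since the meta-review above notes how heavily the paper leans on competitive gradient descent (CGD), a small worked example may help: on the bilinear zero-sum game f(x, y) = xy, simultaneous gradient descent/ascent spirals away from the equilibrium at (0, 0), while the CGD update contracts toward it. The scalar update below follows the published CGD formula as we understand it; it is a sketch, not the paper's GAN training code.

```python
import numpy as np

# Zero-sum bilinear game: min_x max_y f(x, y) = x * y.
# Simultaneous gradient steps spiral outward; CGD contracts to (0, 0).
eta = 0.2
x, y = 1.0, 1.0
for _ in range(100):
    fx, fy = y, x                  # grad_x f = y, grad_y f = x
    fxy = 1.0                      # d^2 f / (dx dy)
    denom = 1.0 + eta**2 * fxy**2  # scalar (I + eta^2 Dxy Dyx) term
    dx = -eta * (fx + eta * fxy * fy) / denom   # x minimizes f
    dy =  eta * (fy - eta * fxy * fx) / denom   # y maximizes f
    x, y = x + dx, y + dy
print(abs(x), abs(y))              # both shrink toward 0
```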
733,"""Deep Graph Translation""","['Graph translation', 'graph generation', 'deep neural network']","""Deep graph generation models have achieved great successes recently, among which, however, are typically unconditioned generative models that have no control over the target graphs are given an input graph. In this paper, we propose a novel Graph-Translation-Generative-Adversarial-Networks (GT-GAN) that transforms the input graphs into their target output graphs. GT-GAN consists of a graph translator equipped with innovative graph convolution and deconvolution layers to learn the translation mapping considering both global and local features, and a new conditional graph discriminator to classify target graphs by conditioning on input graphs. Extensive experiments on multiple synthetic and real-world datasets demonstrate that our proposed GT-GAN significantly outperforms other baseline methods in terms of both effectiveness and scalability. For instance, GT-GAN achieves at least 10X and 15X faster runtimes than GraphRNN and RandomVAE, respectively, when the size of the graph is around 50.""","""This paper studies a problem of graph translation, which aims at learning a graph translator to translate an input graph to a target graph using adversarial training framework. The reviewers think the problem is interesting. However, the paper needs to improve further in term of novelty and writing. """
734,"""Gradientless Descent: High-Dimensional Zeroth-Order Optimization""",['Zeroth Order Optimization'],"""Zeroth-order optimization is the process of minimizing an objective pseudo-formula , given oracle access to evaluations at adaptively chosen inputs pseudo-formula . In this paper, we present two simple yet powerful GradientLess Descent (GLD) algorithms that do not rely on an underlying gradient estimate and are numerically stable. We analyze our algorithm from a novel geometric perspective and we show that for {\it any monotone transform} of a smooth and strongly convex objective with latent dimension \ge n we present a novel analysis that shows convergence within an pseudo-formula -ball of the optimum in pseudo-formula evaluations, where the input dimension is pseudo-formula , pseudo-formula is the diameter of the input space and pseudo-formula is the condition number. Our rates are the first of its kind to be both 1) poly-logarithmically dependent on dimensionality and 2) invariant under monotone transformations. We further leverage our geometric perspective to show that our analysis is optimal. Both monotone invariance and its ability to utilize a low latent dimensionality are key to the empirical success of our algorithms, as demonstrated on synthetic and MuJoCo benchmarks. ""","""The paper considers an interesting algorithm on zeorth-order optimization and contains strong theory. All the reviewers agree to accept."""
735,"""Dynamic Scale Inference by Entropy Minimization""","['unsupervised learning', 'dynamic inference', 'equivariance', 'entropy']","""Given the variety of the visual world there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field. Rather than enumerate variations across filter channels or pyramid levels, dynamic models locally predict scale and adapt receptive fields accordingly. The degree of variation and diversity of inputs makes this a difficult task. Existing methods either learn a feedforward predictor, which is not itself totally immune to the scale variation it is meant to counter, or select scales by a fixed algorithm, which cannot learn from the given task and data. We extend dynamic scale inference from feedforward prediction to iterative optimization for further adaptivity. We propose a novel entropy minimization objective for inference and optimize over task and structure parameters to tune the model to each input. Optimization during inference improves semantic segmentation accuracy and generalizes better to extreme scale variations that cause feedforward dynamic inference to falter.""","""This paper constitutes interesting progress on an important problem. I urge the authors to continue to refine their investigations, with the help of the reviewer comments; e.g., the quantitative analysis recommended by AnonReviewer4."""
736,"""Batch Normalization has Multiple Benefits: An Empirical Study on Residual Networks""","['batch normalization', 'residual networks', 'initialization', 'batch size', 'learning rate', 'ImageNet']","""Many state of the art models rely on two architectural innovations; skip connections and batch normalization. However batch normalization has a number of limitations. It breaks the independence between training examples within a batch, performs poorly when the batch size is too small, and significantly increases the cost of computing a parameter update in some models. This work identifies two practical benefits of batch normalization. First, it improves the final test accuracy. Second, it enables efficient training with larger batches and larger learning rates. However we demonstrate that the increase in the largest stable learning rate does not explain why the final test accuracy is increased under a finite epoch budget. Furthermore, we show that the gap in test accuracy between residual networks with and without batch normalization can be dramatically reduced by improving the initialization scheme. We introduce ZeroInit, which trains a 1000 layer deep Wide-ResNet without normalization to 94.3% test accuracy on CIFAR-10 in 200 epochs at batch size 64. This initialization scheme outperforms batch normalization when the batch size is very small, and is competitive with batch normalization for batch sizes that are not too large. We also show that ZeroInit matches the validation accuracy of batch normalization when training ResNet-50-V2 on ImageNet at batch size 1024.""","""The paper is rejected based on unanimous reviews."""
737,"""Learning to Represent Programs with Property Signatures""",['Program Synthesis'],"""We introduce the notion of property signatures, a representation for programs and program specifications meant for consumption by machine learning algorithms. Given a function with input type _in and output type _out, a property is a function of type: (_in, _out) Bool that (informally) describes some simple property of the function under consideration. For instance, if _in and _out are both lists of the same type, one property might ask is the input list the same length as the output list?. If we have a list of such properties, we can evaluate them all for our function to get a list of outputs that we will call the property signature. Crucially, we can guess the property signature for a function given only a set of input/output pairs meant to specify that function. We discuss several potential applications of property signatures and show experimentally that they can be used to improve over a baseline synthesizer so that it emits twice as many programs in less than one-tenth of the time.""","""The authors propose improved techniques for program synthesis by introducing the idea of property signatures. Property signatures help capture the specifications of the program and the authors show that using such property signatures they can synthesise programs more efficiently. I think it is an interesting work. Unfortunately, one of the reviewers has strong reservations about the work. However, after reading the reviewer's comments and the author's rebuttal to these comments I am convinced that the initial reservations of R1 have been adequately addressed. Similarly, the authors have done a great job of addressing the concerns of the other reviewers and have significantly updated their paper (including more experiments to address some of the concerns). Unfortunately R1 did not participate in subsequent discussions and it is not clear whether he/she read the rebuttal. Given the efforts put in by the authors to address different concerns of all the reviewers and considering the positive ratings given by the other two reviewers I recommend that this paper be accepted. Authors, Please include all the modifications done during the rebuttal period in your final version. Also move the comparison with DeepCoder to the main body of the paper."""
738,"""GDP: Generalized Device Placement for Dataflow Graphs""","['device placement', 'reinforcement learning', 'graph neural networks', 'transformer']","""Runtime and scalability of large neural networks can be significantly affected by the placement of operations in their dataflow graphs on suitable devices. With increasingly complex neural network architectures and heterogeneous device characteristics, finding a reasonable placement is extremely challenging even for domain experts. Most existing automated device placement approaches are impractical due to the significant amount of compute required and their inability to generalize to new, previously held-out graphs. To address both limitations, we propose an efficient end-to-end method based on a scalable sequential attention mechanism over a graph neural network that is transferable to new graphs. On a diverse set of representative deep learning models, including Inception-v3, AmoebaNet, Transformer-XL, and WaveNet, our method on average achieves 16% improvement over human experts and 9.2% improvement over the prior art with 15 times faster convergence. To further reduce the computation cost, we pre-train the policy network on a set of dataflow graphs and use a superposition network to fine-tune it on each individual graph, achieving state-of-the-art performance on large hold-out graphs with over 50k nodes, such as an 8-layer GNMT.""","""This paper presents a new reinforcement learning based approach to device placement for operations in computational graphs and demonstrates improvements for large scale standard models. The paper is borderline with all reviewers appreciating the paper even the reviewer with the lowest score. The reviewer with the lowest score is basing the score on minor reservation regarding lack of detail in explaining the experiments. Based upon the average score rejection is recommended. The reviewers' comments can help improve the paper and it is definitely recommended to submit it to the next conference. """
739,"""Mean Field Models for Neural Networks in Teacher-student Setting""","['mean field model', 'optimal transport', 'ResNet']","""Mean field models have provided a convenient framework for understanding the training dynamics for certain neural networks in the infinite width limit. The resulting mean field equation characterizes the evolution of the time-dependent empirical distribution of the network parameters. Following this line of work, this paper first focuses on the teacher-student setting. For the two-layer networks, we derive the necessary condition of the stationary distributions of the mean field equation and explain an empirical phenomenon concerning training speed differences using the Wasserstein flow description. Second, we apply this approach to two extended ResNet models and characterize the necessary condition of stationary distributions in the teacher-student setting.""","""This paper studies the evolution of the mean field dynamics of a two layer-fully connected and Resnet model. The focus is in a realizable or student/teacher setting where the labels are created according to a planted network. The authors study the stationary distribution of the mean-field method and use this to explain various observations. I think this is an interesting problem to study. However, the reviewers and I concur that the paper falls short in terms of clearly putting the results in the context of existing literature and demonstrating clear novel ideas. With the current writing of the paper is very difficult to surmise what is novel or new. I do agree with the authors' response that clearly they are looking at some novel aspects not studied by the previous work but this was not revised during the discussion period. Therefore, I do not think this paper is ready for publication. I suggest a substantial revision by the authors and recommend submission to future ML venues. """
740,"""Locality and Compositionality in Zero-Shot Learning""","['Zero-shot learning', 'Compositionality', 'Locality', 'Deep Learning']","""In this work we study locality and compositionality in the context of learning representations for Zero Shot Learning (ZSL). In order to well-isolate the importance of these properties in learned representations, we impose the additional constraint that, differently from most recent work in ZSL, no pre-training on different datasets (e.g. ImageNet) is performed. The results of our experiment show how locality, in terms of small parts of the input, and compositionality, i.e. how well can the learned representations be expressed as a function of a smaller vocabulary, are both deeply related to generalization and motivate the focus on more local-aware models in future research directions for representation learning.""","""This paper investigates the role of locality (ability to encode only information specific to locations of interest) and compositionality (ability to be expressed as a combination of simpler parts) in Zero-Shot Learning (ZSL). Main contributions of the paper are (i) compared to previous ZSL frameworks, the proposed approach is that the model is not allowed to be pretrained on another dataset (ii) a thorough evaluation of existing methods. Following discussions, weaknesses are (i) the proposed method (CMDIM) isn't sufficiently different or interesting compared to existing methods (ii) the paper does not do an in-depth discussion of locality and compositionality. The empirical evaluation being extensive, the accept decision is chosen. """
741,"""A Signal Propagation Perspective for Pruning Neural Networks at Initialization""","['neural network pruning', 'signal propagation perspective', 'sparse neural networks']","""Network pruning is a promising avenue for compressing deep neural networks. A typical approach to pruning starts by training a model and then removing redundant parameters while minimizing the impact on what is learned. Alternatively, a recent approach shows that pruning can be done at initialization prior to training, based on a saliency criterion called connection sensitivity. However, it remains unclear exactly why pruning an untrained, randomly initialized neural network is effective. In this work, by noting connection sensitivity as a form of gradient, we formally characterize initialization conditions to ensure reliable connection sensitivity measurements, which in turn yields effective pruning results. Moreover, we analyze the signal propagation properties of the resulting pruned networks and introduce a simple, data-free method to improve their trainability. Our modifications to the existing pruning at initialization method lead to improved results on all tested network models for image classification tasks. Furthermore, we empirically study the effect of supervision for pruning and demonstrate that our signal propagation perspective, combined with unsupervised pruning, can be useful in various scenarios where pruning is applied to non-standard arbitrarily-designed architectures.""","""This is a strong submission, and I recommend acceptance. The idea is an elegant one: sparsify a network at initialization using a distribution that achieves approximate orthogonality of the Jacobian for each layer. This is well motivated by dynamical isometry theory, and should imply good performance of the pruned network to the extent that the training dynamics are explainable in terms of a linearization around the initial weights. The paper is very well written, and all design decisions are clearly motivated. The experiments are careful, and cleanly demonstrate the effectiveness of the technique. The one shortcoming is that the experiments don't use state-of-the-art modern architectures, even thought that ought to have been easy to try. The architectures differ in ways that could impact the results, so it's not clear to what extent the same principles describe SOTA neural nets. Still, this is overall a very strong submission, and will be of interest to a lot of researchers at the conference. """
742,"""Towards More Realistic Neural Network Uncertainties""","['uncertainty', 'variational inference', 'MC dropout', 'variational autoencoder', 'evaluation']","""Statistical models are inherently uncertain. Quantifying or at least upper-bounding their uncertainties is vital for safety-critical systems. While standard neural networks do not report this information, several approaches exist to integrate uncertainty estimates into them. Assessing the quality of these uncertainty estimates is not straightforward, as no direct ground truth labels are available. Instead, implicit statistical assessments are required. For regression, we propose to evaluate uncertainty realism---a strict quality criterion---with a Mahalanobis distance-based statistical test. An empirical evaluation reveals the need for uncertainty measures that are appropriate to upper-bound heavy-tailed empirical errors. Alongside, we transfer the variational U-Net classification architecture to standard supervised image-to-image tasks. It provides two uncertainty mechanisms and significantly improves uncertainty realism compared to a plain encoder-decoder model.""","""This paper proposes two contributions to improve uncertainty in deep learning. The first is a Mahalanobis distance based statistical test and the second a model architecture. Unfortunately, the reviewers found the message of the paper somewhat confusing and particularly didn't understand the connection between these two contributions. A major question from the reviewers is why the proposed statistical test is better than using a proper scoring rule such as negative log likelihood. Some empirical justification of this should be presented."""
743,"""MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius""","['Adversarial Robustness', 'Provable Adversarial Defense', 'Randomized Smoothing', 'Robustness Certification']","""Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work shows that randomized smoothing can be used to provide a certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius.""","""The submission proposes a robustness certification technique for smoothed classifiers for a given l_2 attack radius. Strengths: -The majority opinion is that this work is a non-trivial extension of prior work to provide radius certification. -The work is more efficient that strong recent baselines and provides better performance. -It successfully achieves this while avoiding adversarial training, which is another novel aspect. Weaknesses: -There were some initial concerns about missing experiments and unfair comparisons but these were sufficiently addressed in the discussion. AC shares the majority opinion and recommends acceptance."""
744,"""Robust anomaly detection and backdoor attack detection via differential privacy""","['outlier detection', 'novelty detection', 'backdoor attack detection', 'system log anomaly detection', 'differential privacy']","""Outlier detection and novelty detection are two important topics for anomaly detection. Suppose the majority of a dataset are drawn from a certain distribution, outlier detection and novelty detection both aim to detect data samples that do not fit the distribution. Outliers refer to data samples within this dataset, while novelties refer to new samples. In the meantime, backdoor poisoning attacks for machine learning models are achieved through injecting poisoning samples into the training dataset, which could be regarded as outliers that are intentionally added by attackers. Differential privacy has been proposed to avoid leaking any individuals information, when aggregated analysis is performed on a given dataset. It is typically achieved by adding random noise, either directly to the input dataset, or to intermediate results of the aggregation mechanism. In this paper, we demonstrate that applying differential privacy could improve the utility of outlier detection and novelty detection, with an extension to detect poisoning samples in backdoor attacks. We first present a theoretical analysis on how differential privacy helps with the detection, and then conduct extensive experiments to validate the effectiveness of differential privacy in improving outlier detection, novelty detection, and backdoor attack detection.""","""Thanks for the submission. This paper leverages the stability of differential privacy for the problems of anomaly and backdoor attack detection. The reviewers agree that this application of differential privacy is novel. The theory of the paper appears to be a bit weak (with very strong assumptions on the private learner), although it reflects the basic underlying idea of the detection technique. The paper also provides some empirical evaluation of the technique."""
745,"""A Closer Look at Deep Policy Gradients""","['deep policy gradient methods', 'deep reinforcement learning', 'trpo', 'ppo']",""" We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development. To this end, we propose a fine-grained analysis of state-of-the-art methods based on key elements of this framework: gradient estimation, value prediction, and optimization landscapes. Our results show that the behavior of deep policy gradient algorithms often deviates from what their motivating framework would predict: surrogate rewards do not match the true reward landscape, learned value estimators fail to fit the true value function, and gradient estimates poorly correlate with the ""true"" gradient. The mismatch between predicted and empirical behavior we uncover highlights our poor understanding of current methods, and indicates the need to move beyond current benchmark-centric evaluation methods.""","""The paper empirically studies the behaviour of deep policy gradient algorithms, and reveals several unexpected observations that are not explained by the current theory. All three reviewers are excited about this work and recommend acceptance."""
746,"""EXACT ANALYSIS OF CURVATURE CORRECTED LEARNING DYNAMICS IN DEEP LINEAR NETWORKS""",[],"""Deep neural networks exhibit complex learning dynamics due to the highly non-convex loss landscape, which causes slow convergence and vanishing gradient problems. Second order approaches, such as natural gradient descent, mitigate such problems by neutralizing the effect of potentially ill-conditioned curvature on the gradient-based updates, yet precise theoretical understanding on how such curvature correction affects the learning dynamics of deep networks has been lack- ing. Here, we analyze the dynamics of training deep neural networks under a generalized family of natural gradient methods that applies curvature corrections, and derive precise analytical solutions. Our analysis reveals that curvature corrected update rules preserve many features of gradient descent, such that the learning trajectory of each singular mode in natural gradient descent follows precisely the same path as gradient descent, while only accelerating the temporal dynamics along the path. We also show that layer-restricted approximations of natural gradient, which are widely used in most second order methods (e.g. K-FAC), can significantly distort the learning trajectory into highly diverging dynamics that significantly differs from true natural gradient, which may lead to undesirable net- work properties. We also introduce fractional natural gradient that applies partial curvature correction, and show that it provides most of the benefit of full curvature correction in terms of convergence speed, with additional benefit of superior numerical stability and neutralizing vanishing/exploding gradient problems, which holds true also in layer-restricted approximations.""","""This paper aims to study the effect of curvature correction techniques on training dynamics. The focus is on understanding how natural gradient based methods affect training dynamics of deep linear networks. The main conclusion of the analysis is that it does not fundamentally affect the path of convergence but rather accelerates convergence. They also show that layer correction techniques alone do not suffice. In the discussion the reviewers raised concerns about extrapolating too much based on linear networks and also lack of a cohesive literature review. One reviewer also mentioned that there is not enough technical detail. These issues were partially addressed in the response. I think the topic of the paper is interesting and timely. However, I concur with Reviewer #2 that there are still lots of missing detail and the connection with the nonlinear case is not clear (however the latter is not strictly necessary in my opinion if the rest of the paper is better written). As a result I think the paper in its current form is not ready for publication. """
747,"""Unsupervised Hierarchical Graph Representation Learning with Variational Bayes""","['Hierarchical Graph Representation', 'Unsupervised Graph Learning', 'Variational Bayes', 'Graph classification']","""Hierarchical graph representation learning is an emerging subject owing to the increasingly popular adoption of graph neural networks in machine learning and applications. Loosely speaking, work under this umbrella falls into two categories: (a) use a predefined graph hierarchy to perform pooling; and (b) learn the hierarchy for a given graph through differentiable parameterization of the coarsening process. These approaches are supervised; a predictive task with ground-truth labels is used to drive the learning. In this work, we propose an unsupervised approach, \textsc{BayesPool}, with the use of variational Bayes. It produces graph representations given a predefined hierarchy. Rather than relying on labels, the training signal comes from the evidence lower bound of encoding a graph and decoding the subsequent one in the hierarchy. Node features are treated latent in this variational machinery, so that they are produced as a byproduct and are used in downstream tasks. We demonstrate a comprehensive set of experiments to show the usefulness of the learned representation in the context of graph classification.""","""The paper presents an unsupervised method for graph representation, building upon Loukas' method for generating a sequence of gradually coarsened graphs. The contribution is an ""encoder-decoder"" architecture trained by variational inference, where the encoder produces the embedding of the nodes in the next graph of the sequence, and the decoder produces the structure of the next graph. One important merit of the approach is that this unsupervised representation can be used effectively for supervised learning, with results quite competitive to the state of the art. However the reviewers were unconvinced by the novelty and positioning of the approach. The point of whether the approach should be viewed as variational Bayesian, or simply variational approximation was much debated between the reviewers and the authors. The area chair encourages the authors to pursue this very promising research, and to clarify the paper; perhaps the use of ""encoder-decoder"" generated too much misunderstanding. Another graph NN paper you might be interested in is ""Edge Contraction Pooling for Graph NNs"", by Frederik Diehl. """
748,"""Learning to Plan in High Dimensions via Neural Exploration-Exploitation Trees""","['learning to plan', 'representation learning', 'learning to design algorithm', 'reinforcement learning', 'meta learning']","""We propose a meta path planning algorithm named \emph{Neural Exploration-Exploitation Trees~(NEXT)} for learning from prior experience for solving new path planning problems in high dimensional continuous state and action spaces. Compared to more classical sampling-based methods like RRT, our approach achieves much better sample efficiency in high-dimensions and can benefit from prior experience of planning in similar environments. More specifically, NEXT exploits a novel neural architecture which can learn promising search directions from problem structures. The learned prior is then integrated into a UCB-type algorithm to achieve an online balance between \emph{exploration} and \emph{exploitation} when solving a new problem. We conduct thorough experiments to show that NEXT accomplishes new planning problems with more compact search trees and significantly outperforms state-of-the-art methods on several benchmarks.""","""All reviewers unanimously accept the paper."""
749,"""The Variational InfoMax AutoEncoder""","['autoencoder', 'information theory', 'infomax', 'vae']","""We propose the Variational InfoMax AutoEncoder (VIMAE), an autoencoder based on a new learning principle for unsupervised models: the Capacity-Constrained InfoMax, which allows the learning of a disentangled representation while maintaining optimal generative performance. The variational capacity of an autoencoder is defined and we investigate its role. We associate the two main properties of a Variational AutoEncoder (VAE), generation quality and disentangled representation, to two different information concepts, respectively Mutual Information and network capacity. We deduce that a small capacity autoencoder tends to learn a more robust and disentangled representation than a high capacity one. This observation is confirmed by the computational experiments.""","""This paper describes a new generative model based on the information theoretic principles for better representation learning. The approach is theoretically related to the InfoVAE and beta-VAE work, and is contrasted to vanilla VAEs. The reviewers have expressed strong concerns about the novelty of this work. Some of the very closely related baselines (e.g. Zhao et al., Chen et al., Alemi et a) are not compared against, and the contributions of this work over the baselines are not clearly discussed. Furthermore, the experimental section could be made stronger with more quantitative metrics. For these reasons I recommend rejection."""
750,"""Depth-Adaptive Transformer""","['Deep learning', 'natural language processing', 'sequence modeling']","""State of the art sequence-to-sequence models for large scale tasks perform a fixed number of computations for each input sequence regardless of whether it is easy or hard to process. In this paper, we train Transformer models which can make output predictions at different stages of the network and we investigate different ways to predict how much computation is required for a particular sequence. Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation as well as the model capacity. On IWSLT German-English translation our approach matches the accuracy of a well tuned baseline Transformer while using less than a quarter of the decoder layers.""","""This paper presents an adaptive computation time method for reducing the average-case inference time of a transformer sequence-to-sequence model. The reviewers reached a rough consensus: This paper makes a proposes a novel method for an important problem, and offers reasonably compelling evidence for that method. However, the experiments aren't *quite* sufficient to isolate the cause of the observed improvements, and the discussion of related work could be clearer. I acknowledge that this paper is borderline (and thank R3 for an extremely thorough discussion, both in public and privately), but I lean toward acceptance: The paper doesn't have any fatal flaws, and it brings some fresh ideas to an area where further work would be valuable."""
751,"""Deep Mining: Detecting Anomalous Patterns in Neural Network Activations with Subset Scanning""","['anomalous pattern detection', 'subset scanning', 'node activations', 'adversarial noise']","""This work views neural networks as data generating systems and applies anomalous pattern detection techniques on that data in order to detect when a network is processing a group of anomalous inputs. Detecting anomalies is a critical component for multiple machine learning problems including detecting the presence of adversarial noise added to inputs. More broadly, this work is a step towards giving neural networks the ability to detect groups of out-of-distribution samples. This work introduces ``Subset Scanning methods from the anomalous pattern detection domain to the task of detecting anomalous inputs to neural networks. Subset Scanning allows us to answer the question: ""``Which subset of inputs have larger-than-expected activations at which subset of nodes?"" Framing the adversarial detection problem this way allows us to identify systematic patterns in the activation space that span multiple adversarially noised images. Such images are ``""weird together"". Leveraging this common anomalous pattern, we show increased detection power as the proportion of noised images increases in a test set. Detection power and accuracy results are provided for targeted adversarial noise added to CIFAR-10 images on a 20-layer ResNet using the Basic Iterative Method attack. ""","""The paper investigates the use of the subset scanning to the detection of anomalous patterns in the input to a neural network. The paper has received mixed reviews (one positive and two negatives). The reviewers agree that the idea is interesting, has novelty, and is worth investigating. At the same time they raise issues about the clarity and the lack of comparisons with baselines. Despite a very detailed rebuttal, both of the negative reviewers still feel that addressing their concerns through paper revision would be needed for acceptance."""
752,"""Abductive Commonsense Reasoning""","['Abductive Reasoning', 'Commonsense Reasoning', 'Natural Language Inference', 'Natural Language Generation']","""Abductive reasoning is inference to the most plausible explanation. For example, if Jenny finds her house in a mess when she returns from work, and remembers that she left a window open, she can hypothesize that a thief broke into her house and caused the mess, as the most plausible explanation. While abduction has long been considered to be at the core of how people interpret and read between the lines in natural language (Hobbs et al., 1988), there has been relatively little research in support of abductive natural language inference and generation. We present the first study that investigates the viability of language-based abductive reasoning. We introduce a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations. Based on this dataset, we conceptualize two new tasks (i) Abductive NLI: a multiple-choice question answering task for choosing the more likely explanation, and (ii) Abductive NLG: a conditional generation task for explaining given observations in natural language. On Abductive NLI, the best model achieves 68.9% accuracy, well below human performance of 91.4%. On Abductive NLG, the current best language generators struggle even more, as they lack reasoning capabilities that are trivial for humans. Our analysis leads to new insights into the types of reasoning that deep pre-trained language models fail to performdespite their strong performance on the related but more narrowly defined task of entailment NLIpointing to interesting avenues for future research.""","""This paper presents a dataset, created using a combination of existing resources, crowdsourcing, and model-based filtering, that aims to tests models' understanding of typical progressions of events in everyday situations. The dataset represents a challenge for a range of state of the art models for NLP and commonsense reasoning, and also can be used productively as a training task in transfer learning. After some discussion, reviewers came to a consensus that this represents an interesting contribution and a potentially valuable resource. There were some concernsnot fully resolvedabout the implications of using model-based filtering during data creation, but these were not so serious as to invalidate the primary contributions of the paper. While the thematic fit with ICLR is a bit weakthe primary contribution of the paper appears to be a dataset and task definition, rather than anything specific to representation learningthere are relevant secondary contributions, and I think that this work will be practically of interest to a reasonable fraction of the ICLR audience. """
753,"""Zeno++: Robust Fully Asynchronous SGD""","['fault-tolerance', 'Byzantine-tolerance', 'security', 'SGD', 'asynchronous']","""We propose Zeno++, a new robust asynchronous Stochastic Gradient Descent~(SGD) procedure which tolerates Byzantine failures of the workers. In contrast to previous work, Zeno++ removes some unrealistic restrictions on worker-server communications, allowing for fully asynchronous updates from anonymous workers, arbitrarily stale worker updates, and the possibility of an unbounded number of Byzantine workers. The key idea is to estimate the descent of the loss value after the candidate gradient is applied, where large descent values indicate that the update results in optimization progress. We prove the convergence of Zeno++ for non-convex problems under Byzantine failures. Experimental results show that Zeno++ outperforms existing approaches.""","""Main content: Blind review #2 summarizes it well: This paper investigates the security of distributed asynchronous SGD. Authors propose Zeno++, worker-server asynchronous implementation of SGD which is robust to Byzantine failures. To ensure that the gradients sent by the workers are correct, Zeno++ server scores each worker gradients using a reference gradient computed on a secret validation set. If the score is under a given threshold, then the worker gradient is discarded. Authors provide convergence guarantee for the Zeno++ optimizer for non-convex function. In addition, they provide an empirical evaluation of Zeno++ on the CIFAR10 datasets and compare with various baselines. -- Discussion: Reviews are generally weak on the limited novelty of the approach compared with Zeno, but the rebuttal of the authors on Nov 15 is fair (too long to summarize here). -- Recommendation and justification: I do not feel strongly enough to override the weak reviews (but if there is room in the program I would support a weak accept)."""
754,"""Semi-Supervised Few-Shot Learning with Prototypical Random Walks""","['Few-Shot Learning', 'Semi-Supervised Learning', 'Random Walks']","""Learning from a few examples is a key characteristic of human intelligence that inspired machine learning researchers to build data-efficient AI models. Recent progress has shown that few-shot learning can be improved with access to unlabelled data, known as semi-supervised few-shot learning(SS-FSL). We introduce an SS-FSL approach, dubbed as Prototypical Random Walk Networks (PRWN), built on top of Prototypical Networks (PN). We develop a random walk semi-supervised loss that enables the network to learn representations that are compact and well-separated. Our work is related to the very recent development on graph-based approaches for few-shot learning. However, we show that achieved compact and well-separated class embeddings can be achieved by our prototypical random walk notion without needing additional graph-NN parameters or requiring a transductive setting where collective test set is provided. Our model outperforms prior art in most benchmarks with significant improvements in some cases. For example, in a mini-Imagenet 5-shot classification task, we obtain 69.65% accuracy to the 64.59% state-of-the-art. Our model, trained with 40% of the data as labelled, compares competitively against fully supervised prototypical networks, trained on 100% of the labels, even outperforming it in the 1-shot mini-Imagenet case with 50.89% to 49.4% accuracy. We also show that our model is resistant to distractors, unlabeled data that does not belong to any of the training classes, and hence reflecting robustness to labelled/unlabelled class distribution mismatch. We also performed a challenging discriminative power test, showing a relative improvement on top of the baseline of 14% on 20 classes on mini-Imagenet and 60% on 800 classes on Ominiglot.""","""This paper proposed a semi-supervised few-shot learning method, on top of Prototypical Networks, wherein a regularization term that involves a random walk from a prototype to unlabeled samples and back to the same prototype. SotA results were obtained in several experiments by using this method. All reviewers agreed that the novelty of the paper is not such high compared with Haeusser et al. (2017) and the analysis and the experiments could be improved."""
755,"""Meta-learning curiosity algorithms""","['meta-learning', 'exploration', 'curiosity']","""We hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in an agent's life in order to expose it to experiences that enable it to obtain high rewards over the course of its lifetime. We formulate the problem of generating curious behavior as one of meta-learning: an outer loop will search over a space of curiosity mechanisms that dynamically adapt the agent's reward signal, and an inner loop will perform standard reinforcement learning using the adapted reward signal. However, current meta-RL methods based on transferring neural network weights have only generalized between very similar tasks. To broaden the generalization, we instead propose to meta-learn algorithms: pieces of code similar to those designed by humans in ML papers. Our rich language of programs combines neural networks with other building blocks such as buffers, nearest-neighbor modules and custom loss functions. We demonstrate the effectiveness of the approach empirically, finding two novel curiosity algorithms that perform on par or better than human-designed published curiosity algorithms in domains as disparate as grid navigation with image inputs, acrobot, lunar lander, ant and hopper.""","""This paper proposes meta-learning auxiliary rewards as specified by a DSL. The approach was considered innovative and the results interesting by all reviewers. The paper is clearly of an acceptable standard, with the main concerns raised by reviewers having been addressed (admittedly at the 11th hour) by the authors during the discussion period. Accept."""
756,"""WORD SEQUENCE PREDICTION FOR AMHARIC LANGUAGE""","['Word prediction', 'POS', 'Statistical approach']","""Word prediction is guessing what word comes after, based on some current information, and it is the main focus of this study. Even though Amharic is used by a large number of populations, no significant work is done on the topic. In this study, Amharic word sequence prediction model is developed using Machine learning. We used statistical methods using Hidden Markov Model by incorporating detailed parts of speech tag and user profiling or adaptation. One of the needs for this research is to overcome the challenges on inflected languages. Word sequence prediction is a challenging task for inflected languages (Gustavii &Pettersson, 2003; Seyyed & Assi, 2005). These kinds of languages are morphologically rich and have enormous word forms, which is a word can have different forms. As Amharic language is morphologically rich it shares the problem (Tessema, 2014).This problem makes word prediction system much more difficult and results poor performance. Previous researches used dictionary approach with no consideration of context information. Due to this reason, storing all forms in a dictionary wont solve the problem as in English and other less inflected languages. Therefore, we introduced two models; tags and words and linear interpolation that use parts of speech tag information in addition to word n-grams in order to maximize the likelihood of syntactic appropriateness of the suggestions. The statistics included in the systems varies from single word frequencies to parts-of-speech tag n-grams. We described a combined statistical and lexical word prediction system and developed Amharic language models of bigram and trigram for the training purpose. The overall study followed Design Science Research Methodology (DSRM). ""","""This paper presents a language model for Amharic using HMMs and incorporating POS tags. The paper is very short and lacks essential parts such as describing the exact model and the experimental design and results. The reviewers all rejected this paper, and there was no author rebuttal. This paper is clearly not appropriate for publication at ICLR. """
757,"""Probabilistic View of Multi-agent Reinforcement Learning: A Unified Approach""","['multi-agent reinforcement learning', 'maximum entropy reinforcement learning']","""Formulating the reinforcement learning (RL) problem in the framework of probabilistic inference not only offers a new perspective about RL, but also yields practical algorithms that are more robust and easier to train. While this connection between RL and probabilistic inference has been extensively studied in the single-agent setting, it has not yet been fully understood in the multi-agent setup. In this paper, we pose the problem of multi-agent reinforcement learning as the problem of performing inference in a particular graphical model. We model the environment, as seen by each of the agents, using separate but related Markov decision processes. We derive a practical off-policy maximum-entropy actor-critic algorithm that we call Multi-agent Soft Actor-Critic (MA-SAC) for performing approximate inference in the proposed model using variational inference. MA-SAC can be employed in both cooperative and competitive settings. Through experiments, we demonstrate that MA-SAC outperforms a strong baseline on several multi-agent scenarios. While MA-SAC is one resultant multi-agent RL algorithm that can be derived from the proposed probabilistic framework, our work provides a unified view of maximum-entropy algorithms in the multi-agent setting.""","""The paper takes the perspective of ""reinforcement learning as inference"", extends it to the multi-agent setting and derives a multi-agent RL algorithm that extends Soft Actor Critic. Several reviewer questions were addressed in the rebuttal phase, including key design choices. A common concern was the limited empirical comparison, including comparisons to existing approaches. """
758,"""Adversarial Training: embedding adversarial perturbations into the parameter space of a neural network to build a robust system""","['Adversarial Training', 'Adversarial Examples']","""Adversarial training, in which a network is trained on both adversarial and clean examples, is one of the most trusted defense methods against adversarial attacks. However, there are three major practical difficulties in implementing and deploying this method - expensive in terms of extra memory and computation costs; accuracy trade-off between clean and adversarial examples; and lack of diversity of adversarial perturbations. Classical adversarial training uses fixed, precomputed perturbations in adversarial examples (input space). In contrast, we introduce dynamic adversarial perturbations into the parameter space of the network, by adding perturbation biases to the fully connected layers of deep convolutional neural network. During training, using only clean images, the perturbation biases are updated in the Fast Gradient Sign Direction to automatically create and store adversarial perturbations by recycling the gradient information computed. The network learns and adjusts itself automatically to these learned adversarial perturbations. Thus, we can achieve adversarial training with negligible cost compared to requiring a training set of adversarial example images. In addition, if combined with classical adversarial training, our perturbation biases can alleviate accuracy trade-off difficulties, and diversify adversarial perturbations.""","""This paper proposes to introduce perturbation biases as a counter-measure against adversarial perturbations. The perturbation biases are additional bias terms that are trained by a variant of gradient ascent. Serious issues were raised in the comments. No rebuttal was provided."""
759,"""NORML: Nodal Optimization for Recurrent Meta-Learning""","['meta-learning', 'learning to learn', 'few-shot classification', 'memory-based optimization']","""Meta-learning is an exciting and powerful paradigm that aims to improve the effectiveness of current learning systems. By formulating the learning process as an optimization problem, a model can learn how to learn while requiring significantly less data or experience than traditional approaches. Gradient-based meta-learning methods aims to do just that, however recent work have shown that the effectiveness of these approaches are primarily due to feature reuse and very little has to do with priming the system for rapid learning (learning to make effective weight updates on unseen data distributions). This work introduces Nodal Optimization for Recurrent Meta-Learning (NORML), a novel meta-learning framework where an LSTM-based meta-learner performs neuron-wise optimization on a learner for efficient task learning. Crucially, the number of meta-learner parameters needed in NORML, increases linearly relative to the number of learner parameters. Allowing NORML to potentially scale to learner networks with very large numbers of parameters. While NORML also benefits from feature reuse it is shown experimentally that the meta-learner LSTM learns to make effective weight updates using information from previous data-points and update steps.""","""The paper proposes a LSTM-based meta-learning approach that learns how to update each neuron in another model for best few-shot learning performance. The reviewers agreed that this is a worthwhile problem and the approach has merits, but that it is hard to judge the significance of the work, given limited or unclear novelty compared to the work of Ravi & Larochelle (2017) and a lack of fair baseline comparisons. I recommend rejecting the paper for now, but encourage the authors to take the reviewers' feedback into account and submit to another venue."""
760,"""Amortized Nesterov's Momentum: Robust and Lightweight Momentum for Deep Learning""","['momentum', 'nesterov', 'optimization', 'deep learning', 'neural networks']","""Stochastic Gradient Descent (SGD) with Nesterov's momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance. However, due to the large stochasticity, SGD with Nesterov's momentum is not robust, i.e., its performance may deviate significantly from the expectation. In this work, we propose Amortized Nesterov's Momentum, a special variant of Nesterov's momentum which has more robust iterates, faster convergence in the early stage and higher efficiency. Our experimental results show that this new momentum achieves similar (sometimes better) generalization performance with little-to-no tuning. In the convex case, we provide optimal convergence rates for our new methods and discuss how the theorems explain the empirical results. ""","""This paper introduces a variant of Nesterov momentum which saves computation by only periodically recomputing certain quantities, and which is claimed to be more robust in the stochastic setting. The method seems easy to use, so there's probably no harm in trying it. However, the reviewers and I don't find the benefits persuasive. While there is theoretical analysis, its role is to show that the algorithm maintains the convergence properties while having other benefits. However, the computations saved by amortization seem like a small fraction of the total cost, and I'm having trouble seeing how the increased ""robustness"" is justified. (It's possible I missed something, but clarity of exposition is another area the paper could use some improvement in.) Overall, this submission seems promising, but probably needs to be cleaned up before publication at ICLR. """
761,"""Precision Gating: Improving Neural Network Efficiency with Dynamic Dual-Precision Activations""","['deep learning', 'neural network', 'dynamic quantization', 'dual precision', 'efficient gating']","""We propose precision gating (PG), an end-to-end trainable dynamic dual-precision quantization technique for deep neural networks. PG computes most features in a low precision and only a small proportion of important features in a higher precision to preserve accuracy. The proposed approach is applicable to a variety of DNN architectures and significantly reduces the computational cost of DNN execution with almost no accuracy loss. Our experiments indicate that PG achieves excellent results on CNNs, including statically compressed mobile-friendly networks such as ShuffleNet. Compared to the state-of-the-art prediction-based quantization schemes, PG achieves the same or higher accuracy with 2.4 less compute on ImageNet. PG furthermore applies to RNNs. Compared to 8-bit uniform quantization, PG obtains a 1.2% improvement in perplexity per word with 2.7 computational cost reduction on LSTM on the Penn Tree Bank dataset.""","""The submission proposes an approach to accelerate network training by modifying the precision of individual weights, allowing a substantial speed up without a decrease in model accuracy. The magnitude of the activations determines whether it will be computed at a high or low bitwidth. The reviewers agreed that the paper should be published given the strong results, though there were some salient concerns which the authors should address in their final revision, such as how the method could be implemented on GPU and what savings could be achieved. Recommendation is to accept."""
762,"""Coordinated Exploration via Intrinsic Rewards for Multi-Agent Reinforcement Learning""","['multi-agent reinforcement learning', 'multi-agent', 'exploration', 'intrinsic motivation', 'MARL', 'coordinated exploration']","""Solving tasks with sparse rewards is one of the most important challenges in reinforcement learning. In the single-agent setting, this challenge has been addressed by introducing intrinsic rewards that motivate agents to explore unseen regions of their state spaces. Applying these techniques naively to the multi-agent setting results in agents exploring independently, without any coordination among themselves. We argue that learning in cooperative multi-agent settings can be accelerated and improved if agents coordinate with respect to what they have explored. In this paper we propose an approach for learning how to dynamically select between different types of intrinsic rewards which consider not just what an individual agent has explored, but all agents, such that the agents can coordinate their exploration and maximize extrinsic returns. Concretely, we formulate the approach as a hierarchical policy where a high-level controller selects among sets of policies trained on different types of intrinsic rewards and the low-level controllers learn the action policies of all agents under these specific rewards. We demonstrate the effectiveness of the proposed approach in a multi-agent gridworld domain with sparse rewards, and then show that our method scales up to more complex settings by evaluating on the VizDoom platform.""","""The authors present a method that utilizes intrinsic rewards to coordinate the exploration of agents in a multi-agent reinforcement learning setting. The reviewers agreed that the proposed approach was relatively novel and an interesting research direction for multiagent RL. However, the reviewers had substantial concerns about writing clarity, the significance of the contribution of the propose method, and the thoroughness of evaluation (particularly the number of agents used and limited baselines). While the writing clarity and several technical points (including addition ablations) were addressed in the rebuttal, the reviewers still felt that the core contribution of the work was a bit too marginal. Thus, I recommend this paper to be rejected at this time."""
763,"""Farkas layers: don't shift the data, fix the geometry""","['initialization', 'deep networks', 'residual networks', 'batch normalization', 'training', 'optimization']","""Successfully training deep neural networks often requires either {batch normalization}, appropriate {weight initialization}, both of which come with their own challenges. We propose an alternative, geometrically motivated method for training. Using elementary results from linear programming, we introduce Farkas layers: a method that ensures at least one neuron is active at a given layer. Focusing on residual networks with ReLU activation, we empirically demonstrate a significant improvement in training capacity in the absence of batch normalization or methods of initialization across a broad range of network sizes on benchmark datasets.""","""This paper proposes a new normalization scheme that attempts to prevent all units in a ReLU layer from being dead. The experimental results show that this normalization can effectively be used to train deep networks, though not as well as batch normalization. A significant issue is that the paper does not sufficiently establish that their explanation for the success of Farkas layer is valid. For example, do networks usually have layers with only inactive units in practice?"""
764,"""Insights on Visual Representations for Embodied Navigation Tasks""",[],"""Recent advances in deep reinforcement learning require a large amount of training data and generally result in representations that are often over specialized to the target task. In this work, we study the underlying potential causes for this specialization by measuring the similarity between representations trained on related, but distinct tasks. We use the recently proposed projection weighted Canonical Correlation Analysis (PWCCA) to examine the task dependence of visual representations learned across different embodied navigation tasks. Surprisingly, we find that slight differences in task have no measurable effect on the visual representation for both SqueezeNet and ResNet architectures. We then empirically demonstrate that visual representations learned on one task can be effectively transferred to a different task. Interestingly, we show that if the tasks constrain the agent to spatially disjoint parts of the environment, differences in representation emerge for SqueezeNet models but less-so for ResNets, suggesting that ResNets feature inductive biases which encourage more task-agnostic representations, even in the context of spatially separated tasks. We generalize our analysis to examine permutations of an environment and find, surprisingly, permutations of an environment also do not influence the visual representation. Our analysis provides insight on the overfitting of representations in RL and provides suggestions of how to design tasks that induce task-agnostic representations.""","""The general consensus amongst the reviewers is that this paper is not quite ready for publication, and needs to dig a little deeper in some areas. Some reviewers thought the contributions are unclear, or unsupported. I hope these reviews will help you as you work towards finding a home for this work."""
765,"""Ordinary differential equations on graph networks""","['Graph Networks', 'Ordinary differential equation']","""Recently various neural networks have been proposed for irregularly structured data such as graphs and manifolds. To our knowledge, all existing graph networks have discrete depth. Inspired by neural ordinary differential equation (NODE) for data in the Euclidean domain, we extend the idea of continuous-depth models to graph data, and propose graph ordinary differential equation (GODE). The derivative of hidden node states are parameterized with a graph neural network, and the output states are the solution to this ordinary differential equation. We demonstrate two end-to-end methods for efficient training of GODE: (1) indirect back-propagation with the adjoint method; (2) direct back-propagation through the ODE solver, which accurately computes the gradient. We demonstrate that direct backprop outperforms the adjoint method in experiments. We then introduce a family of bijective blocks, which enables pseudo-formula memory consumption. We demonstrate that GODE can be easily adapted to different existing graph neural networks and improve accuracy. We validate the performance of GODE in both semi-supervised node classification tasks and graph classification tasks. Our GODE model achieves a continuous model in time, memory efficiency, accurate gradient estimation, and generalizability with different graph networks.""","""This paper introduces a few ideas to potentially improve the performance of neural ODEs on graph networks. However, the reviewers disagreed about the motivations for the proposed modifications. Specifically, it's not clear that neural ODEs provide a more advantageous parameterization in this setting than standard discrete networks. It's also not clear at all why the authors are discussion graph neural networks in particular, as all of their proposed changes would apply to all types of network. Another major problem I had with this paper was the assertion that the running the original system backwards leads to large numerical error. This is a plausible claim, but it was never verified. It's extremely easy to check (e.g. by comparing the reconstructed initial state at t0 with the true original state at t0, or by comparing gradients computed by different methods). It's also not clear if the authors enforced the constraints on their dynamics function needed to ensure that a unique solution exists in the first place."""
766,"""Leveraging Simple Model Predictions for Enhancing its Performance""","['simple models', 'interpretability', 'resource constraints']","""There has been recent interest in improving performance of simple models for multiple reasons such as interpretability, robust learning from small data, deployment in memory constrained settings as well as environmental considerations. In this paper, we propose a novel method SRatio that can utilize information from high performing complex models (viz. deep neural networks, boosted trees, random forests) to reweight a training dataset for a potentially low performing simple model such as a decision tree or a shallow network enhancing its performance. Our method also leverages the per sample hardness estimate of the simple model which is not the case with the prior works which primarily consider the complex model's confidences/predictions and is thus conceptually novel. Moreover, we generalize and formalize the concept of attaching probes to intermediate layers of a neural network, which was one of the main ideas in previous work \citep{profweight}, to other commonly used classifiers and incorporate this into our method. The benefit of these contributions is witnessed in the experiments where on 6 UCI datasets and CIFAR-10 we outperform competitors in a majority (16 out of 27) of the cases and tie for best performance in the remaining cases. In fact, in a couple of cases, we even approach the complex model's performance. We also conduct further experiments to validate assertions and intuitively understand why our method works. Theoretically, we motivate our approach by showing that the weighted loss minimized by simple models using our weighting upper bounds the loss of the complex model.""","""The authors propose a sample reweighting scheme that helps to learn a simple model with similar performance as a more complex one. The authors contained critical errors in their original submission and the paper seems to lack in terms of originality and novelty of the proposed method."""
767,"""Synthetic vs Real: Deep Learning on Controlled Noise""","['controlled experiments', 'robust deep learning', 'corrupted label', 'real-world noisy data']","""Performing controlled experiments on noisy data is essential in thoroughly understanding deep learning across a spectrum of noise levels. Due to the lack of suitable datasets, previous research have only examined deep learning on controlled synthetic noise, and real-world noise has never been systematically studied in a controlled setting. To this end, this paper establishes a benchmark of real-world noisy labels at 10 controlled noise levels. As real-world noise possesses unique properties, to understand the difference, we conduct a large-scale study across a variety of noise levels and types, architectures, methods, and training settings. Our study shows that: (1) Deep Neural Networks (DNNs) generalize much better on real-world noise. (2) DNNs may not learn patterns first on real-world noisy data. (3) When networks are fine-tuned, ImageNet architectures generalize well on noisy data. (4) Real-world noise appears to be less harmful, yet it is more difficult for robust DNN methods to improve. (5) Robust learning methods that work well on synthetic noise may not work as well on real-world noise, and vice versa. We hope our benchmark, as well as our findings, will facilitate deep learning research on noisy data. ""","""Thanks for your detailed feedback to the reviewers, which helped us a lot to better understand your paper. However, given high competition at ICLR2020, we think the current manuscript is premature and still below the bar to be accepted to ICLR2020. We hope that the reviewers' comments are useful to improve your manuscript for potential future submission."""
768,"""VL-BERT: Pre-training of Generic Visual-Linguistic Representations""","['Visual-Linguistic', 'Generic Representation', 'Pre-training']","""We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input. In it, each element of the input is either of a word from the input sentence, or a region-of-interest (RoI) from the input image. It is designed to fit for most of the visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset, together with text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure can better align the visual-linguistic clues and benefit the downstream tasks, such as visual commonsense reasoning, visual question answering and referring expression comprehension. It is worth noting that VL-BERT achieved the first place of single model on the leaderboard of the VCR benchmark.""","""The paper proposed a new pretrained language model which can take visual information into the embeddings. Experiments showed state-of-the-art results on three downstream tasks. The paper is well written and detailed comparisons with related work are given. There are some concerns about the clarity and novelty raised by the reviewers which is answered in details and I think the paper is acceptable."""
769,"""Task-Based Top-Down Modulation Network for Multi-Task-Learning Applications""","['deep learning', 'multi-task learning']","""A general problem that received considerable recent attention is how to perform multiple tasks in the same network, maximizing both efficiency and prediction accuracy. A popular approach consists of a multi-branch architecture on top of a shared backbone, jointly trained on a weighted sum of losses. However, in many cases, the shared representation results in non-optimal performance, mainly due to an interference between conflicting gradients of uncorrelated tasks. Recent approaches address this problem by a channel-wise modulation of the feature-maps along the shared backbone, with task specific vectors, manually or dynamically tuned. Taking this approach a step further, we propose a novel architecture which modulate the recognition network channel-wise, as well as spatial-wise, with an efficient top-down image-dependent computation scheme. Our architecture uses no task-specific branches, nor task specific modules. Instead, it uses a top-down modulation network that is shared between all of the tasks. We show the effectiveness of our scheme by achieving on par or better results than alternative approaches on both correlated and uncorrelated sets of tasks. We also demonstrate our advantages in terms of model size, the addition of novel tasks and interpretability. Code will be released.""","""The paper is interested in multi-task learning. It introduces a new architecture which condition the model in a particular manner: images features and task ID features are fed to a top-down network which generates task-specific weights, which are then used in a bottom-up network to produce final labels. The paper is experimental, and the contribution rather incremental, considering existing work in the area. Experimental section is currently not convincing enough, given marginal improvements over existing approaches - multiple runs as well as confidence intervals would help in that respect. """
770,"""An Exponential Learning Rate Schedule for Deep Learning""","['batch normalization', 'weight decay', 'learning rate', 'deep learning theory']","""Intriguing empirical evidence exists that deep learning can work well with exotic schedules for varying the learning rate. This paper suggests that the phenomenon may be due to Batch Normalization or BN(Ioffe & Szegedy, 2015), which is ubiq- uitous and provides benefits in optimization and generalization across all standard architectures. The following new results are shown about BN with weight decay and momentum (in other words, the typical use case which was not considered in earlier theoretical analyses of stand-alone BN (Ioffe & Szegedy, 2015; Santurkar et al., 2018; Arora et al., 2018) Training can be done using SGD with momentum and an exponentially in- creasing learning rate schedule, i.e., learning rate increases by some (1 + ) factor in every epoch for some > 0. (Precise statement in the paper.) To the best of our knowledge this is the first time such a rate schedule has been successfully used, let alone for highly successful architectures. As ex- pected, such training rapidly blows up network weights, but the net stays well-behaved due to normalization. Mathematical explanation of the success of the above rate schedule: a rigor- ous proof that it is equivalent to the standard setting of BN + SGD + Standard Rate Tuning + Weight Decay + Momentum. This equivalence holds for other normalization layers as well, Group Normalization(Wu & He, 2018), Layer Normalization(Ba et al., 2016), Instance Norm(Ulyanov et al., 2016), etc. A worked-out toy example illustrating the above linkage of hyper- parameters. Using either weight decay or BN alone reaches global minimum, but convergence fails when both are used.""","""After the revision, the reviewers agree on acceptance of this paper. Let's do it."""
771,"""Adapting to Label Shift with Bias-Corrected Calibration""","['calibration', 'label shift', 'domain adaptation', 'temperature scaling', 'em', 'bbse']","""Label shift refers to the phenomenon where the marginal probability p(y) of observing a particular class changes between the training and test distributions, while the conditional probability p(x|y) stays fixed. This is relevant in settings such as medical diagnosis, where a classifier trained to predict disease based on observed symptoms may need to be adapted to a different distribution where the baseline frequency of the disease is higher. Given estimates of p(y|x) from a predictive model, one can apply domain adaptation procedures including Expectation Maximization (EM) and Black-Box Shift Estimation (BBSE) to efficiently correct for the difference in class proportions between the training and test distributions. Unfortunately, modern neural networks typically fail to produce well-calibrated estimates of p(y|x), reducing the effectiveness of these approaches. In recent years, Temperature Scaling has emerged as an efficient approach to combat miscalibration. However, the effectiveness of Temperature Scaling in the context of adaptation to label shift has not been explored. In this work, we study the impact of various calibration approaches on shift estimates produced by EM or BBSE. In experiments with image classification and diabetic retinopathy detection, we find that calibration consistently tends to improve shift estimation. In particular, calibration approaches that include class-specific bias parameters are significantly better than approaches that lack class-specific bias parameters, suggesting that reducing systematic bias in the calibrated probabilities is especially important for domain adaptation.""","""This was a borderline paper, but in the end two of the reviewers remain unconvinced by this paper in its current form, and the last reviewer is not willing to argue for acceptance. The first reviewer's comments were taken seriously in making a decision on this paper. As such, it is my suggestion that the authors revise the paper in its current form, and resubmit, addressing some of the first reviewers comments, such as discussion of utility of the methodology, and to improve the exposition such that less knowledgable reviewers understand the material presented better. The comments that the first reviewer makes about lack of motivation for parts of the presented methodology is reflected in the other reviewers comments, and I'm convinced that the authors can address this issue and make this a really awesome submission at a future conference. On a different note, I think the authors should be congratulated on making their results reproducible. That is definitely something the field needs to see more of."""
772,"""SCELMo: Source Code Embeddings from Language Models""","['Transfer Learning', 'Pretraining', 'Program Repair']","""Continuous embeddings of tokens in computer programs have been used to support a variety of software development tools, including readability, code search, and program repair. Contextual embeddings are common in natural language processing but have not been previously applied in software engineering. We introduce a new set of deep contextualized word representations for computer programs based on language models. We train a set of embeddings using the ELMo (embeddings from language models) framework of Peters et al (2018). We investigate whether these embeddings are effective when fine-tuned for the downstream task of bug detection. We show that even a low-dimensional embedding trained on a relatively small corpus of programs can improve a state-of-the-art machine learning system for bug detection.""","""This paper improves DeepBugs by borrowing the NLP method ELMo as new representations. The effectiveness of the embedding is investigated using the downstream task of bug detection. Two reviewers reject the paper for two main concerns: 1 The novelty of the paper is not strong enough for ICLR as this paper mainly uses a standard context embedding technique from NLP. 2 The experimental results are not convincing enough and more comprehensive evaluation are needed. Overall, this novelty of this paper does not meet the standard of ICLR. """
773,"""Model-based Saliency for the Detection of Adversarial Examples""","['Adversarial Examples', 'Defense', 'Model-based Saliency']","""Adversarial perturbations cause a shift in the salient features of an image, which may result in a misclassification. We demonstrate that gradient-based saliency approaches are unable to capture this shift, and develop a new defense which detects adversarial examples based on learnt saliency models instead. We study two approaches: a CNN trained to distinguish between natural and adversarial images using the saliency masks produced by our learnt saliency model, and a CNN trained on the salient pixels themselves as its input. On MNIST, CIFAR-10 and ASSIRA, our defenses are able to detect various adversarial attacks, including strong attacks such as C&W and DeepFool, contrary to gradient-based saliency and detectors which rely on the input image. The latter are unable to detect adversarial images when the L_2- and L_infinity- norms of the perturbations are too small. Lastly, we find that the salient pixel based detector improves on saliency map based detectors as it is more robust to white-box attacks.""","""This submission proposes a method for detecting adversarial attacks using saliency maps. Strengths: -The experimental results are encouraging. Weaknesses: -The novelty is minor. -Experimental validation of some claims (e.g. robustness to white-box attacks) is lacking. These weaknesses were not sufficiently addressed in the discussion phase. AC agrees with the majority recommendation to reject. """
774,"""Causal Discovery with Reinforcement Learning""","['causal discovery', 'structure learning', 'reinforcement learning', 'directed acyclic graph']","""Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based casual discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best scoring. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output would be the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint. ""","""This paper proposes an RL-based structure search method for causal discovery. The reviewers and AC think that the idea of applying reinforcement learning to causal structure discovery is novel and intriguing. While there were initially some concerns regarding presentation of the results, these have been taken care of during the discussion period. The reviewers agree that this is a very good submission, which merits acceptance to ICLR-2020."""
775,"""A Mean-Field Theory for Kernel Alignment with Random Features in Generative Adverserial Networks""","['Kernel Learning', 'Generative Adversarial Networks', 'Mean Field Theory']","""We propose a novel supervised learning method to optimize the kernel in maximum mean discrepancy generative adversarial networks (MMD GANs). Specifically, we characterize a distributionally robust optimization problem to compute a good distribution for the random feature model of Rahimi and Recht to approximate a good kernel function. Due to the fact that the distributional optimization is infinite dimensional, we consider a Monte-Carlo sample average approximation (SAA) to obtain a more tractable finite dimensional optimization problem. We subsequently leverage a particle stochastic gradient descent (SGD) method to solve finite dimensional optimization problems. Based on a mean-field analysis, we then prove that the empirical distribution of the interactive particles system at each iteration of the SGD follows the path of the gradient descent flow on the Wasserstein manifold. We also establish the non-asymptotic consistency of the finite sample estimator. Our empirical evaluation on synthetic data-set as well as MNIST and CIFAR-10 benchmark data-sets indicates that our proposed MMD GAN model with kernel learning indeed attains higher inception scores well as Fr\`{e}chet inception distances and generates better images compared to the generative moment matching network (GMMN) and MMD GAN with untrained kernels.""","""This paper was assessed by three reviewers who scored it as 6/1/6. The main criticism included somewhat weak experiments due to the manual tuning of bandwidth, the use of old (and perhaps mostly solved/not challenging) datasets such as Mnist and Cifar10, lack of ablation studies. The other issue voiced in the review is that the proposed method is very close to a MMD-GAN with a kernel plus random features. Taking into account all positives and negatives, we regret to conclude that this submission falls short of the quality required by ICLR2020, thus it cannot be accepted at this time. """
776,"""Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation""",[],"""Few-shot classification aims to recognize novel categories with only few labeled images in each class. Existing metric-based few-shot classification algorithms predict categories by comparing the feature embeddings of query images with those from a few labeled images (support examples) using a learned metric function. While promising performance has been demonstrated, these methods often fail to generalize to unseen domains due to large discrepancy of the feature distribution across domains. In this work, we address the problem of few-shot classification under domain shifts for metric-based methods. Our core idea is to use feature-wise transformation layers for augmenting the image features using affine transforms to simulate various feature distributions under different domains in the training stage. To capture variations of the feature distributions under different domains, we further apply a learning-to-learn approach to search for the hyper-parameters of the feature-wise transformation layers. We conduct extensive experiments and ablation studies under the domain generalization setting using five few-shot classification datasets: mini-ImageNet, CUB, Cars, Places, and Plantae. Experimental results demonstrate that the proposed feature-wise transformation layer is applicable to various metric-based models, and provides consistent improvements on the few-shot classification performance under domain shift.""","""This submission addresses the problem of few-shot classification. The proposed solution centers around metric-based models with a core argument that prior work may lead to learned embeddings which are overfit to the few labeled examples available during learning. Thus, when measuring cross-domain performance, the specialization of the original classifier to the initial domain will be apparent through degraded test time (new domain) performance. The authors therefore, study the problem of domain generalization in the few-shot learning scenario. The main algorithmic contribution is the introduction of a feature-wise transformation layer. All reviewers suggest to accept this paper. Reviewer 3 says this problem statement is especially novel. Reviewer 1 and 2 had concerns over lack of comparisons with recent state-of-the-art methods. The authors responded with some additional results during the rebuttal phase, which should be included in the final draft. Overall the AC recommends acceptance, based on the positive comments and the fact that this paper addresses a sufficiently new problem statement. """
777,"""Anomaly Detection Based on Unsupervised Disentangled Representation Learning in Combination with Manifold Learning""","['anomaly detection', 'disentangled representation learning', 'manifold learning']","""Identifying anomalous samples from highly complex and unstructured data is a crucial but challenging task in a variety of intelligent systems. In this paper, we present a novel deep anomaly detection framework named AnoDM (standing for Anomaly detection based on unsupervised Disentangled representation learning and Manifold learning). The disentanglement learning is currently implemented by beta-VAE for automatically discovering interpretable factorized latent representations in a completely unsupervised manner. The manifold learning is realized by t-SNE for projecting the latent representations to a 2D map. We define a new anomaly score function by combining beta-VAE's reconstruction error in the raw feature space and local density estimation in the t-SNE space. AnoDM was evaluated on both image and time-series data and achieved better results than models that use just one of the two measures and other deep learning methods.""","""The paper presents AnoDM (Anomaly detection based on unsupervised Disentangled representation learning and Manifold learning) that combine beta-VAE and t-SNE for anomaly detection. Experiment results on both image and time series data are shown to demonstrate the effectiveness of the proposed solution. The paper aims to attack a challenging problem. The proposed solution is reasonable. The authors did a job at addressing some of the concerns raised in the reviews. However, two major concerns remain: (1) the novelty in the proposed model (a combination of two existing models) is not clear; (2) the experiment results are not fully convincing. While theoretical analysis is not a must for all models, it would be useful to conduct thorough experiments to fully understand how the model works, which is missing in the current version. Given the two reasons above, the paper did not attract enough enthusiasm from the reviewers during the discussion. We hope the reviews can help improve the paper for a better publication in the future. """
778,"""SAFE-DNN: A Deep Neural Network with Spike Assisted Feature Extraction for Noise Robust Inference""","['Noise robust', 'deep learning', 'DNN', 'image classification']","""We present a Deep Neural Network with Spike Assisted Feature Extraction (SAFE-DNN) to improve robustness of classification under stochastic perturbation of inputs. The proposed network augments a DNN with unsupervised learning of low-level features using spiking neuron network (SNN) with Spike-Time-Dependent-Plasticity (STDP). The complete network learns to ignore local perturbation while performing global feature detection and classification. The experimental results on CIFAR-10 and ImageNet subset demonstrate improved noise robustness for multiple DNN architectures without sacrificing accuracy on clean images.""",""" The paper proposes to improve noise robustness of the network learned features, by augmenting deep networks with Spike-Time-Dependent-Plasticity (STDP). The new network show improved noise robustness with better classification accuracy on Cifar10 and ImageNet subset when input data have noise. While this paper is well written, a number of concerns are raised by the reviewers. They include that the proposed method would not be favored from computer vision perspective, it is not convincing why spiking nets are more robust to random noises, and the method fails to address works in adversarial perturbations and adversarial training. Also, Reviewer #2 pointed out the low level of methodological novelty. The authors provided response to the questions, but did not change the rating of the reviewers. Given the various concerns raised, the ACs recommend reject."""
779,"""Differentially Private Mixed-Type Data Generation For Unsupervised Learning""","['Differential privacy', 'synthetic data', 'private data generation', 'mixed-type', 'unsupervised learning', 'autoencoder', 'GAN', 'private deep learning']","""In this work we introduce the DP-auto-GAN framework for synthetic data generation, which combines the low dimensional representation of autoencoders with the flexibility of GANs. This framework can be used to take in raw sensitive data, and privately train a model for generating synthetic data that should satisfy the same statistical properties as the original data. This learned model can be used to generate arbitrary amounts of publicly available synthetic data, which can then be freely shared due to the post-processing guarantees of differential privacy. Our framework is applicable to unlabled \emph{mixed-type data}, that may include binary, categorical, and real-valued data. We implement this framework on both unlabeled binary data (MIMIC-III) and unlabeled mixed-type data (ADULT). We also introduce new metrics for evaluating the quality of synthetic mixed-type data, particularly in unsupervised settings.""","""This provides a new method, called DPAutoGAN, for the problem of differentially private synthetic generation. The method uses private auto-encoder to reduce the dimension of the data, and apply private GAN on the latent space. The reviewers think that there is not sufficient justification for why this is a good approach for synthetic generation. They also think that the presentation is not ready for publication."""
780,"""Training Provably Robust Models by Polyhedral Envelope Regularization""","['deep learning', 'adversarial attack', 'robust certification']","""Training certifiable neural networks enables one to obtain models with robustness guarantees against adversarial attacks. In this work, we use a linear approximation to bound models output given an input adversarial budget. This allows us to bound the adversary-free region in the data neighborhood by a polyhedral envelope and yields finer-grained certified robustness than existing methods. We further exploit this certifier to introduce a framework called polyhedral envelope regular- ization (PER), which encourages larger polyhedral envelopes and thus improves the provable robustness of the models. We demonstrate the flexibility and effectiveness of our framework on standard benchmarks; it applies to networks with general activation functions and obtains comparable or better robustness guarantees than state-of-the-art methods, with very little cost in clean accuracy, i.e., without over-regularizing the model.""","""The authors develop a new technique for training neural networks to be provably robust to adversarial attacks. The technique relies on constructing a polyhedral envelope on the feasible set of activations and using this to derive a lower bound on the maximum certified radius. By training with this as a regularizer, the authors are able to train neural networks that achieve strong provable robustness to adversarial attacks. The paper makes a number of interesting contributions that the reviewers appreciated. However, two of the reviewers had some concerns with the significance of the contributions made: 1) The contributions of the paper are not clearly defined relative to prior work on bound propagation (Fast-Lin/KW/CROWN). In particular, the authors simply use the linear approximation derived in these prior works to obtain a bound on the radius to be certified. The authors claim faster convergence based on this, but this does not seem like a very significant contribution. 2) The improvements on the state of the art are marginal. These were discussed in detail during the rebuttal phase and the two reviewers with concerns about the paper decided to maintain their score after reading the rebuttals, as the fundamental issues above were not Given these concerns, I believe this paper is borderline - it has some interesting contributions, but the overall novelty on the technical side and strength of empirical results is not very high."""
781,"""Learning Latent Dynamics for Partially-Observed Chaotic Systems""","['Dynamical systems', 'Neural networks', 'Embedding', 'Partially observed systems', 'Forecasting', 'chaos']","""This paper addresses the data-driven identification of latent representations of partially-observed dynamical systems, i.e. dynamical systems whose some components are never observed, with an emphasis on forecasting applications and long-term asymptotic patterns. Whereas state-of-the-art data-driven approaches rely on delay embeddings and linear decompositions of the underlying operators, we introduce a framework based on the data-driven identification of an augmented state-space model using a neural-network-based representation. For a given training dataset, it amounts to jointly reconstructing the latent states and learning an ODE (Ordinary Differential Equation) representation in this space. Through numerical experiments, we demonstrate the relevance of the proposed framework w.r.t. state-of-the-art approaches in terms of short-term forecasting errors and long-term behaviour. We further discuss how the proposed framework relates to Koopman operator theory and Takens' embedding theorem.""","""This paper presents an ODE-based latent variable model, argues that extra unobserved dimensions are necessary in general, and that deterministic encodings are also insufficient in general. Instead, they optimize the latent representation during training. They include small-scale experiments showing that their framework beats alternatives. In my mind, the argument about fixed mappings being inadequate is a fair one, but it misses the fact that the variational inference framework already has several ways to address this shortcoming: 1) The recognition network outputs a distribution over latent values, which in itself does not address this issue, but provides regularization benefits. 2) The recognition network is just a strategy for speeding up inference. There's no reason you can't just do variational inference or MCMC for inference instead (which is similar to your approach), or do semi-amortized variational inference. Basically, this paper could have been somewhat convincing as a general exploration of approximate inference strategies in the latent ODE model. Instead, it provides a lot of philosophical arguments and a small amount of empirical evidence that a particular encoder is insufficient when doing MAP inference. It also seems like a problem that hyperparameters were copied from Chen et al 2018, but are used in a MAP setting instead of a VAE setting. Finally, it's not clear how hyperparameters such as the size of the latent dimensions were chosen."""
782,"""LEARNING EXECUTION THROUGH NEURAL CODE FUSION""","['code understanding', 'graph neural networks', 'learning program execution', 'execution traces', 'program performance']","""As the performance of computer systems stagnates due to the end of Moores Law, there is a need for new models that can understand and optimize the execution of general purpose code. While there is a growing body of work on using Graph Neural Networks (GNNs) to learn static representations of source code, these representations do not understand how code executes at runtime. In this work, we propose a new approach using GNNs to learn fused representations of general source code and its execution. Our approach defines a multi-task GNN over low-level representations of source code and program state (i.e., assembly code and dynamic memory states), converting complex source code constructs and data structures into a simpler, more uniform format. We show that this leads to improved performance over similar methods that do not use execution and it opens the door to applying GNN models to new tasks that would not be feasible from static code alone. As an illustration of this, we apply the new model to challenging dynamic tasks (branch prediction and prefetching) from the SPEC CPU benchmark suite, outperforming the state-of-the-art by 26% and 45% respectively. Moreover, we use the learned fused graph embeddings to demonstrate transfer learning with high performance on an indirectly related algorithm classification task.""","""This paper presents a method to learn representations of programs via code and execution. The paper presents an interesting method, and results on branch prediction and address pre-fetching are conclusive. The only main critiques associated with this paper seemed to be (1) potential lack of interest to the ICLR community, and (2) lack of comparison to other methods that similarly improve performance using other varieties of information. I am satisfied by the authors' responses to these concerns, and believe the paper warrants acceptance."""
783,"""Under what circumstances do local codes emerge in feed-forward neural networks""","['localist coding', 'emergence', 'contructionist science', 'neural networks', 'feed-forward', 'learning representation', 'distributed coding', 'generalisation', 'memorisation', 'biological plausibility', 'deep-NNs', 'training conditions']","""Localist coding schemes are more easily interpretable than the distributed schemes but generally believed to be biologically implausible. Recent results have found highly selective units and object detectors in NNs that are indicative of local codes (LCs). Here we undertake a constructionist study on feed-forward NNs and find LCs emerging in response to invariant features, and this finding is robust until the invariant feature is perturbed by 40%. Decreasing the number of input data, increasing the relative weight of the invariant features and large values of dropout all increase the number of LCs. Longer training times increase the number of LCs and the turning point of the LC-epoch curve correlates well with the point at which NNs reach 90-100% on both test and training accuracy. Pseudo-deep networks (2 hidden layers) which have many LCs lose them when common aspects of deep-NN research are applied (large training data, ReLU activations, early stopping on training accuracy and softmax), suggesting that LCs may not be found in deep-NNs. Switching to more biologically feasible constraints (sigmoidal activation functions, longer training times, dropout, activation noise) increases the number of LCs. If LCs are not found in the feed-forward classification layers of modern deep-CNNs these data suggest this could either be caused by a lack of (moderately) invariant features being passed to the fully connected layers or due to the choice of training conditions and architecture. Should the interpretability and resilience to noise of LCs be required, this work suggests how to tune a NN so they emerge. ""","""This paper studies when hidden units provide local codes by analyzing the hidden units of trained fully connected classification networks under various architectures and regularizers. The reviewers and the AC believe that the paper in its current form is not ready for acceptance to ICLR-2020. Further work and experiments are needed in order to identify an explanation for the emergence of local codes. This would significantly strengthen the paper."""
784,"""DEEP GRAPH SPECTRAL EVOLUTION NETWORKS FOR GRAPH TOPOLOGICAL TRANSFORMATION""","['deep graph learning', 'graph transformation', 'brain network']","""Characterizing the underlying mechanism of graph topological evolution from a source graph to a target graph has attracted fast increasing attention in the deep graph learning domain. However, there lacks expressive and efficient that can handle global and local evolution patterns between source and target graphs. On the other hand, graph topological evolution has been investigated in the graph signal processing domain historically, but it involves intensive labors to manually determine suitable prescribed spectral models and prohibitive difficulty to fit their potential combinations and compositions. To address these challenges, this paper proposes the deep Graph Spectral Evolution Network (GSEN) for modeling the graph topology evolution problem by the composition of newly-developed generalized graph kernels. GSEN can effectively fit a wide range of existing graph kernels and their combinations and compositions with the theoretical guarantee and experimental verification. GSEN has outstanding efficiency in terms of time complexity ( pseudo-formula ) and parameter complexity ( pseudo-formula ), where pseudo-formula is the number of nodes of the graph. Extensive experiments on multiple synthetic and real-world datasets have demonstrated outstanding performance.""","""The reviewers kept their scores after the author response period, pointing to continued concerns with methodology, needing increased exposition in parts, and not being able to verify theoretical results. As such, my recommendation is to improve the clarity around the methodological and theoretical contributions in a revision."""
785,"""Self-supervised Training of Proposal-based Segmentation via Background Prediction""",[],"""While supervised object detection and segmentation methods achieve impressive accuracy, they generalize poorly to images whose appearance significantly differs from the data they have been trained on. To address this in scenarios where annotating data is prohibitively expensive, we introduce a self-supervised approach to detection and segmentation, able to work with monocular images captured with a moving camera. At the heart of our approach lies the observations that object segmentation and background reconstruction are linked tasks, and that, for structured scenes, background regions can be re-synthesized from their surroundings, whereas regions depicting the object cannot. We encode this intuition as a self-supervised loss function that we exploit to train a proposal-based segmentation network. To account for the discrete nature of the proposals, we develop a Monte Carlo-based training strategy that allows the algorithm to explore the large space of object proposals. We apply our method to human detection and segmentation in images that visually depart from those of standard benchmarks, achieving competitive results compared to the few existing self-supervised methods and approaching the accuracy of supervised ones that exploit large annotated datasets.""","""This work proposes a self-supervised segmentation method: building upon Crawford and Pineau 2019, this work adds a Monte-Carlo based training strategy to explore object proposals. Reviewers found the method interesting and clever, but shared concerns about the lack of a better comparison to Crawford and Pineau, as well as generally a lack of care in comparisons to others, which were not satisfactorily addressed by authors response. For these reasons, we recommend rejection."""
786,"""Goal-Conditioned Video Prediction""","['predictive models', 'video prediction', 'latent variable models']","""Many processes can be concisely represented as a sequence of events leading from a starting state to an end state. Given raw ingredients, and a finished cake, an experienced chef can surmise the recipe. Building upon this intuition, we propose a new class of visual generative models: goal-conditioned predictors (GCP). Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video. GCP instead treats videos as start-goal transformations, making video generation easier by conditioning on the more informative context provided by the first and final frames. Not only do existing forward prediction approaches synthesize better and longer videos when modified to become goal-conditioned, but GCP models can also utilize structures that are not linear in time, to accomplish hierarchical prediction. To this end, we study both auto-regressive GCP models and novel tree-structured GCP models that generate frames recursively, splitting the video iteratively into finer and finer segments delineated by subgoals. In experiments across simulated and real datasets, our GCP methods generate high-quality sequences over long horizons. Tree-structured GCPs are also substantially easier to parallelize than auto-regressive GCPs, making training and inference very efficient, and allowing the model to train on sequences that are thousands of frames in length.Finally, we demonstrate the utility of GCP approaches for imitation learning in the setting without access to expert actions. Videos are on the supplementary website: pseudo-url""","""The paper addresses a video generation setting where both initial and goal state are provided as a basis for long-term prediction. The authors propose two types of models, sequential and hierarchical, and obtain interesting insights into the performance of these two models. Reviewers raised concerns about evaluation metrics, empirical comparisons, and the relationship of the proposed model to prior work. While many of the initial concerns have been addressed by the authors, reviewers remain concerned about two issues in particular. First, the proposed model is similar to previous approaches with sequential latent variable models, and it is unclear how such existing models would compare if applied in this setting. Second, there are remaining concerns on whether the model may learn degenerate solutions. I quote from the discussion here, as I am not sure this will be visible to authors [about Figure 12]: ""now the two examples with two samples they show have the same door in the middle frame which makes me doubt the method learn[s] anything meaningful in terms of the agent walking through the door but just go to the middle of the screen every time."""""
787,"""Deceptive Opponent Modeling with Proactive Network Interdiction for Stochastic Goal Recognition Control""",[],"""Goal recognition based on the observations of the behaviors collected online has been used to model some potential applications. Newly formulated problem of goal recognition design aims at facilitating the online goal recognition process by performing offline redesign of the underlying environment with hard action removal. In this paper, we propose the stochastic goal recognition control (S-GRC) problem with two main stages: (1) deceptive opponent modeling based on maximum entropy regularized Markov decision processes (MDPs) and (2) goal recognition control under proactively static interdiction. For the purpose of evaluation, we propose to use the worst case distinctiveness (wcd) as a measure of the non-distinctive path without revealing the true goals, the task of S-GRC is to interdict a set of actions that improve or reduce the wcd. We empirically demonstrate that our proposed approach control the goal recognition process based on opponent's deceptive behavior.""","""This paper has been withdrawn by the authors."""
788,"""Manifold Modeling in Embedded Space: A Perspective for Interpreting ""Deep Image Prior""""","['Deep image prior', 'Manifold model', 'Auto-encoder', 'Convolutional neural network', 'Delay-embedding', 'Hankelization', 'Tensor completion', 'Image inpainting', 'Supperresolution']","""Deep image prior (DIP), which utilizes a deep convolutional network (ConvNet) structure itself as an image prior, has attracted huge attentions in computer vision community. It empirically shows the effectiveness of ConvNet structure for various image restoration applications. However, why the DIP works so well is still unknown, and why convolution operation is essential for image reconstruction or enhancement is not very clear. In this study, we tackle these questions. The proposed approach is dividing the convolution into ``delay-embedding'' and ``transformation (\ie encoder-decoder)'', and proposing a simple, but essential, image/tensor modeling method which is closely related to dynamical systems and self-similarity. The proposed method named as manifold modeling in embedded space (MMES) is implemented by using a novel denoising-auto-encoder in combination with multi-way delay-embedding transform. In spite of its simplicity, the image/tensor completion and super-resolution results of MMES are quite similar even competitive to DIP in our extensive experiments, and these results would help us for reinterpreting/characterizing the DIP from a perspective of ``low-dimensional patch-manifold prior''.""","""The paper proposes a combination of a delay embedding as well as an autoencoder to perform representation learning. The proposed algorithm shows competitive performance with deep image prior, which is a convnet structure. The paper claims that the new approach is interpretable and provides explainable insight into image priors. The discussion period was used constructively, with the authors addressing reviewer comments, and the reviewers acknowledging this an updating their scores. Overall, the proposed architecture is good, but the structure and presentation of the paper is still not up to the standards of ICLR. The current presentation seems to over-claim interpretability, without sufficient theoretical or empirical evidence. """
789,"""FSNet: Compression of Deep Convolutional Neural Networks by Filter Summary""","['Compression of Convolutional Neural Networks', 'Filter Summary CNNs', 'Weight Sharing']","""We present a novel method of compression of deep Convolutional Neural Networks (CNNs) by weight sharing through a new representation of convolutional filters. The proposed method reduces the number of parameters of each convolutional layer by learning a pseudo-formula D vector termed Filter Summary (FS). The convolutional filters are located in FS as overlapping pseudo-formula D segments, and nearby filters in FS share weights in their overlapping regions in a natural way. The resultant neural network based on such weight sharing scheme, termed Filter Summary CNNs or FSNet, has a FS in each convolution layer instead of a set of independent filters in the conventional convolution layer. FSNet has the same architecture as that of the baseline CNN to be compressed, and each convolution layer of FSNet has the same number of filters from FS as that of the basline CNN in the forward process. With compelling computational acceleration ratio, the parameter space of FSNet is much smaller than that of the baseline CNN. In addition, FSNet is quantization friendly. FSNet with weight quantization leads to even higher compression ratio without noticeable performance loss. We further propose Differentiable FSNet where the way filters share weights is learned in a differentiable and end-to-end manner. Experiments demonstrate the effectiveness of FSNet in compression of CNNs for computer vision tasks including image classification and object detection, and the effectiveness of DFSNet is evidenced by the task of Neural Architecture Search.""","""The paper proposes to compress convolutional neural networks via weight sharing across filters of each convolution layer. A fast convolution algorithm is also designed for the convolution layer with this approach. Experimental results show (i) effectiveness in CNN compression, (ii) acceleration on the tasks of image classification, object detection and neural architecture search. While the authors addressed most of reviewers' concerns, the weakness of the paper which remains is that no wall-clock runtime numbers (only FLOPS) are reported - so efficiency of the approach in practice in uncertain. """
790,"""Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures""","['Random Matrix Theory', 'Deep Learning Representations', 'GANs']","""This paper shows that deep learning (DL) representations of data produced by generative adversarial nets (GANs) are random vectors which fall within the class of so-called concentrated random vectors. Further exploiting the fact that Gram matrices, of the type G = X'X with X = [x_1 , . . . , x_n ] R pn and x_i independent concentrated random vectors from a mixture model, behave asymptotically (as n, p ) as if the x_i were drawn from a Gaussian mixture, suggests that DL representations of GAN-data can be fully described by their first two statistical moments for a wide range of standard classifiers. Our theoretical findings are validated by generating images with the BigGAN model and across different popular deep representation networks.""","""The paper theoretically shows that the data (embedded by representations learned by GANs) are essentially the same as a high dimensional Gaussian mixture. The result is based on a recent result from random matrix theory on the covariance matrix of data, which the authors extend to a theorem on the Gram matrix of the data. The authors also provide a small experiment comparing the spectrum and principle 2D subspace of BigGAN and Gaussian mixtures, demonstrating that their theorem applies in practice. Two of the reviews (with confident reviewers) were quite negative about the contributions of the paper, and the reviewers unfortunately did not participate in the discussion period. Overall, the paper seems solid, but the reviews indicate that improvements are needed in the structure and presentation of the theoretical results. Given the large number of submissions at ICLR this year, the paper in its current form does not pass the quality threshold for acceptance."""
791,"""Wyner VAE: A Variational Autoencoder with Succinct Common Representation Learning""","[""Wyner's common information"", 'information theoretic regularization', 'information bottleneck', 'representation learning', 'generative models', 'conditional generation', 'joint generation', 'style transfer', 'variational autoencoders']","""A new variational autoencoder (VAE) model is proposed that learns a succinct common representation of two correlated data variables for conditional and joint generation tasks. The proposed Wyner VAE model is based on two information theoretic problems---distributed simulation and channel synthesis---in which Wyner's common information arises as the fundamental limit of the succinctness of the common representation. The Wyner VAE decomposes a pair of correlated data variables into their common representation (e.g., a shared concept) and local representations that capture the remaining randomness (e.g., texture and style) in respective data variables by imposing the mutual information between the data variables and the common representation as a regularization term. The utility of the proposed approach is demonstrated through experiments for joint and conditional generation with and without style control using synthetic data and real images. Experimental results show that learning a succinct common representation achieves better generative performance and that the proposed model outperforms existing VAE variants and the variational information bottleneck method.""","""This paper adds a new model to the literature on representation learning from correlated variables with some common and some ""private"" dimensions, and takes a variational approach based on Wyner's common information. The literature in this area includes models where both of the correlated variables are assumed to be available as input at all times, as well as models where only one of the two may be available; the proposed approach falls into the first category. Pros: The reviewers generally agree, as do I, that the motivation is very interesting and the resulting model is reasonable and produces solid results. Cons: The model is somewhat complex and the paper is lacking a careful ablation study on the components. In addition, the results are not a clear ""win"" for the proposed model. The authors have started to do an ablation study, and I think eventually an interesting story is likely to come out of that. But at the moment the paper feels a bit too preliminary/inconclusive for publication."""
792,"""On the Pareto Efficiency of Quantized CNN""","['convolutional neural networks quantization', 'model compression', 'efficient neural network']","""Weight Quantization for deep convolutional neural networks (CNNs) has shown promising results in compressing and accelerating CNN-powered applications such as semantic segmentation, gesture recognition, and scene understanding. Prior art has shown that different datasets, tasks, and network architectures admit different iso-accurate precision values, which increase the complexity of efficient quantized neural network implementations from both hardware and software perspectives. In this work, we show that when the number of channels is allowed to vary in an iso-model size scenario, lower precision values Pareto dominate higher precision ones (in accuracy vs. model size) for networks with standard convolutions. Relying on comprehensive empirical analyses, we find that the Pareto optimal precision value of a convolution layer depends on the number of input channels per output filters and provide theoretical insights for it. To this end, we develop a simple algorithm to select the precision values for CNNs that outperforms corresponding 8-bit quantized networks by 0.9% and 2.2% in top-1 accuracy on ImageNet for ResNet50 and MobileNetV2, respectively. ""","""This paper studies the trade-off between the model size and quantization levels in quantized CNNs by varying different channel width multipliers. The paper is well motivated and draws interesting observations but can be improved in terms of evaluation. It is a borderline case and rejection is made due to the high competition. """
793,"""Geometric Analysis of Nonconvex Optimization Landscapes for Overcomplete Learning""","['dictionary learning', 'sparse representations', 'nonconvex optimization']","""Learning overcomplete representations finds many applications in machine learning and data analytics. In the past decade, despite the empirical success of heuristic methods, theoretical understandings and explanations of these algorithms are still far from satisfactory. In this work, we provide new theoretical insights for several important representation learning problems: learning (i) sparsely used overcomplete dictionaries and (ii) convolutional dictionaries. We formulate these problems as pseudo-formula -norm optimization problems over the sphere and study the geometric properties of their nonconvex optimization landscapes. For both problems, we show the nonconvex objective has benign (global) geometric structures, which enable the development of efficient optimization methods finding the target solutions. Finally, our theoretical results are justified by numerical simulations. ""","""This paper investigates the use non-convex optimization for two dictionary learning problems, i.e., over-complete dictionary learning and convolutional dictionary learning. The paper provides theoretical results, associated with empirical experiments, about the fact that, that when formulating the problem as an l4 optimization, gives rise to a landscape with strict saddle points and as such, they can be escaped with negative curvature. As a result, descent methods can be used for learning with provable guarantees. All reviews found the work extremely interesting, highlighting the importance of the results that constitute ""a solid improvement over the prior understandings on over-complete DL"" and ""extends our understanding of provable methods for dictionary learning"". This is an interesting submission on non-convex optimization, and as such of interest to the ML community of ICLR . I'm recommending this work for acceptance."""
794,"""Latent Question Reformulation and Information Accumulation for Multi-Hop Machine Reading""","['question-answering', 'machine comprehension', 'deep learning']","""Multi-hop text-based question-answering is a current challenge in machine comprehension. This task requires to sequentially integrate facts from multiple passages to answer complex natural language questions. In this paper, we propose a novel architecture, called the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities. LQR-net is composed of an association of \textbf{reading modules} and \textbf{reformulation modules}. The purpose of the reading module is to produce a question-aware representation of the document. From this document representation, the reformulation module extracts essential elements to calculate an updated representation of the question. This updated question is then passed to the following hop. We evaluate our architecture on the \hotpotqa question-answering dataset designed to assess multi-hop reasoning capabilities. Our model achieves competitive results on the public leaderboard and outperforms the best current \textit{published} models in terms of Exact Match (EM) and pseudo-formula score. Finally, we show that an analysis of the sequential reformulations can provide interpretable reasoning paths.""","""This paper proposes a novel approach, Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require multi-hop reasoning capabilities. Experimental results on the HotPotQA dataset achieve competitive results and outperform the top system in terms of exact match and F1 scores. However, reviewers note the limited setting of the experiments on the unrealistic, closed-domain setting of this dataset and suggested experimenting with other data (such as complex WebQuesitons). Reviewers were also concerned about the scalability of the system due to the significant amount of computations. They also noted several previous studies were not included in the paper. Authors acknowledged and made changes according to these suggestions. They also included experiments only on the open-domain subset of the HotPotQA in their rebuttal, unfortunately the results are not as good as before. Hence, I suggest rejecting this paper."""
795,"""MetaPix: Few-Shot Video Retargeting""","['Meta-learning', 'Few-shot Learning', 'Generative Adversarial Networks', 'Video Retargeting']","""We address the task of unsupervised retargeting of human actions from one video to another. We consider the challenging setting where only a few frames of the target is available. The core of our approach is a conditional generative model that can transcode input skeletal poses (automatically extracted with an off-the-shelf pose estimator) to output target frames. However, it is challenging to build a universal transcoder because humans can appear wildly different due to clothing and background scene geometry. Instead, we learn to adapt or personalize a universal generator to the particular human and background in the target. To do so, we make use of meta-learning to discover effective strategies for on-the-fly personalization. One significant benefit of meta-learning is that the personalized transcoder naturally enforces temporal coherence across its generated frames; all frames contain consistent clothing and background geometry of the target. We experiment on in-the-wild internet videos and images and show our approach improves over widely-used baselines for the task. ""","""Three reviewers have assessed this paper and they have scored it 6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission."""
796,"""Identity Crisis: Memorization and Generalization Under Extreme Overparameterization""","['Generalization', 'Memorization', 'Understanding', 'Inductive Bias']","""We study the interplay between memorization and generalization of overparameterized networks in the extreme case of a single training example and an identity-mapping task. We examine fully-connected and convolutional networks (FCN and CNN), both linear and nonlinear, initialized randomly and then trained to minimize the reconstruction error. The trained networks stereotypically take one of two forms: the constant function (memorization) and the identity function (generalization). We formally characterize generalization in single-layer FCNs and CNNs. We show empirically that different architectures exhibit strikingly different inductive biases. For example, CNNs of up to 10 layers are able to generalize from a single example, whereas FCNs cannot learn the identity function reliably from 60k examples. Deeper CNNs often fail, but nonetheless do astonishing work to memorize the training output: because CNN biases are location invariant, the model must progressively grow an output pattern from the image boundaries via the coordination of many layers. Our work helps to quantify and visualize the sensitivity of inductive biases to architectural choices such as depth, kernel width, and number of channels. ""","""The paper studies the effect of various hyperparameters of neural networks including architecture, width, depth, initialization, optimizer, etc. on the generalization and memorization. The paper carries out a rather through empirical study of these phenomena. The authors also rain a model to mimic identity function which allows rich visualization and easy evaluation. The reviewers were mostly positive but expressed concern about the general picture. One reviewer also has concerns about ""generality of the observed phenomenon in this paper"". The authors had a thorough response which addressed many of these concerns. My view of the paper is positive. I think the authors do a great job of carrying out careful experiments. As a result I think this is a good addition to ICLR and recommend acceptance."""
797,"""Meta-Learning Runge-Kutta""",[],"""Initial value problems, i.e. differential equations with specific, initial conditions, represent a classic problem within the field of ordinary differential equations(ODEs). While the simplest types of ODEs may have closed-form solutions, most interesting cases typically rely on iterative schemes for numerical integration such as the family of Runge-Kutta methods. They are, however, sensitive to the strategy the step size is adapted during integration, which has to be chosen by the experimenter. In this paper, we show how the design of a step size controller can be cast as a learning problem, allowing deep networks to learn to exploit structure in the initial value problem at hand in an automatic way. The key ingredients for the resulting Meta-Learning Runge-Kutta (MLRK) are the development of a good performance measure and the identification of suitable input features. Traditional approaches suggest the local error estimates as input to the controller. However, by studying the characteristics of the local error function we show that including the partial derivatives of the initial value problem is favorable. Our experiments demonstrate considerable benefits over traditional approaches. In particular, MLRK is able to mitigate sudden spikes in the local error function by a faster adaptation of the step size. More importantly, the additional information in the form of partial derivatives and function values leads to a substantial improvement in performance. The source code can be found at pseudo-url""","""Summary: This paper casts the problem of step-size tuning in the Runge-Kutta method as a meta learning problem. The paper gives a review of the existing approaches to step size control in RK method. Deriving knowledge from these approaches the paper reasons about appropriate features and loss functions to use in the meta learning update. The paper shows that the proposed approach is able to generalize sufficiently enough to obtain better performance than a baseline. The paper was lacking in advocates for its merits, and needs better comparisons with other baselines before it is ready to be published."""
798,"""Ellipsoidal Trust Region Methods for Neural Network Training""","['non-convex', 'optimization', 'neural networks', 'trust-region']","""We investigate the use of ellipsoidal trust region constraints for second-order optimization of neural networks. This approach can be seen as a higher-order counterpart of adaptive gradient methods, which we here show to be interpretable as first-order trust region methods with ellipsoidal constraints. In particular, we show that the preconditioning matrix used in RMSProp and Adam satisfies the necessary conditions for provable convergence of second-order trust region methods with standard worst-case complexities. Furthermore, we run experiments across different neural architectures and datasets to find that the ellipsoidal constraints constantly outperform their spherical counterpart both in terms of number of backpropagations and asymptotic loss value. Finally, we find comparable performance to state-of-the-art first-order methods in terms of backpropagations, but further advances in hardware are needed to render Newton methods competitive in terms of time.""","""This paper interprets adaptive gradient methods as trust region methods, and then extends the trust regions to axis-aligned ellipsoids determined by the approximate curvature. It's fairly natural to try to extend the algorithms in this way, but the paper doesn't show much evidence that this is actually effective. (The experiments show an improvement only in terms of iterations, which doesn't account for the computational cost or the increased batch size; there doesn't seem to be an improvement in terms of epochs.) I suspect the second-order version might also lose some of the online convex optimization guarantees of the original methods, raising the question of whether the trust-region interpretation really captures the benefits of the original methods. The reviewers recommend rejection (even after discussion) because they are unsatisfied with the experiments; I agree with their assessment. """
799,"""Learning Hierarchical Discrete Linguistic Units from Visually-Grounded Speech""","['visually-grounded speech', 'self-supervised learning', 'discrete representation learning', 'vision and language', 'vision and speech', 'hierarchical representation learning']","""In this paper, we present a method for learning discrete linguistic units by incorporating vector quantization layers into neural models of visually grounded speech. We show that our method is capable of capturing both word-level and sub-word units, depending on how it is configured. What differentiates this paper from prior work on speech unit learning is the choice of training objective. Rather than using a reconstruction-based loss, we use a discriminative, multimodal grounding objective which forces the learned units to be useful for semantic image retrieval. We evaluate the sub-word units on the ZeroSpeech 2019 challenge, achieving a 27.3% reduction in ABX error rate over the top-performing submission, while keeping the bitrate approximately the same. We also present experiments demonstrating the noise robustness of these units. Finally, we show that a model with multiple quantizers can simultaneously learn phone-like detectors at a lower layer and word-like detectors at a higher layer. We show that these detectors are highly accurate, discovering 279 words with an F1 score of greater than 0.5.""","""The paper is extremely well-written with a clear motivation (Section 1). The approach is novel. But I think the paper's biggest strength is in its very thorough experimental investigation. Their approach is compared to other very recent speech discretization methods on the same data using the same (ABX) evaluation metric. But the work goes further in that it systematically attempts to actually understand what types of structures are captured in the intermediate discrete layers, and it is able to answer this question convincingly. Finally, very good results on standard benchmarks are achieved. To authors: Please do include the additional discussions and results in the final paper. """
800,"""Hierarchical Disentangle Network for Object Representation Learning""",[],"""An object can be described as the combination of primary visual attributes. Disentangling such underlying primitives is the long objective of representation learning. It is observed that categories have the natural multi-granularity or hierarchical characteristics, i.e. any two objects can share some common primitives in a particular category granularity while they may possess their unique ones in another granularity. However, previous works usually operate in a flat manner (i.e. in a particular granularity) to disentangle the representations of objects. Though they may obtain the primitives to constitute objects as the categories in that granularity, their results are obviously not efficient and complete. In this paper, we propose the hierarchical disentangle network (HDN) to exploit the rich hierarchical characteristics among categories to divide the disentangling process in a coarse-to-fine manner, such that each level only focuses on learning the specific representations in its granularity and finally the common and unique representations in all granularities jointly constitute the raw object. Specifically, HDN is designed based on an encoder-decoder architecture. To simultaneously ensure the disentanglement and interpretability of the encoded representations, a novel hierarchical generative adversarial network (GAN) is elaborately designed. Quantitative and qualitative evaluations on four object datasets validate the effectiveness of our method.""","""The authors propose a new method for learning hierarchically disentangled representations. One reviewer is positive, one is between weak accept and borderline and two reviewers recommend rejection, and keep their assessment after rebuttal and a discussion. The main criticism is the lack of disentanglement metrics and comparisons. After reading the paper and the discussion, the AC tends to agree with the negative reviewers. Authors are encouraged to strengthen their work and resubmit to a future venue."""
801,"""ConQUR: Mitigating Delusional Bias in Deep Q-Learning""","['reinforcement learning', 'q-learning', 'deep reinforcement learning', 'Atari']","""Delusional bias is a fundamental source of error in approximate Q-learning. To date, the only techniques that explicitly address delusion require comprehensive search using tabular value estimates. In this paper, we develop efficient methods to mitigate delusional bias by training Q-approximators with labels that are ""consistent"" with the underlying greedy policy class. We introduce a simple penalization scheme that encourages Q-labels used across training batches to remain (jointly) consistent with the expressible policy class. We also propose a search framework that allows multiple Q-approximators to be generated and tracked, thus mitigating the effect of premature (implicit) policy commitments. Experimental results demonstrate that these methods can improve the performance of Q-learning in a variety of Atari games, sometimes dramatically.""","""While there was some support for the ideas presented, the majority of reviewers felt that this submission is not ready for publication at ICLR in its present form. Concerns raised included the need for better motivation of the practicality of the approach, versus its computational cost. The need for improved evaluations was also raised."""
802,"""Defending Against Adversarial Examples by Regularized Deep Embedding""",[],"""Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples. Inspired by the observation that the intrinsic dimension of image data is much smaller than its pixel space dimension and the vulnerability of neural networks grows with the input dimension, we propose to embed high-dimensional input images into a low-dimensional space to perform classification. However, arbitrarily projecting the input images to a low-dimensional space without regularization will not improve the robustness of deep neural networks. We propose a new framework, Embedding Regularized Classifier (ER-Classifier), which improves the adversarial robustness of the classifier through embedding regularization. Experimental results on several benchmark datasets show that, our proposed framework achieves state-of-the-art performance against strong adversarial attack methods.""","""The paper suggests a new way to defend against adversarial attacks on neural networks. Two of the reviewers were negative, one of them (the most experienced in the subarea) strongly negative. One reviewer is weakly positive. The main two concerns of the reviewers are insufficient comparisons with SOTA and lack of clarity. The authors' response, though detailed, has not convinced the reviewers and has not alleviated their concerns. """
803,"""Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control""","['Embed-to-Control', 'Representation Learning', 'Stochastic Optimal Control', 'VAE', 'iLQR']","""Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lower-dimensional latent representation space, estimate the latent dynamics model, then utilize this model for control in the latent space. An important open question is how to learn a representation that is amenable to existing control algorithms? In this paper, we focus on learning representations for locally-linear control algorithms, such as iterative LQR (iLQR). By formulating and analyzing the representation learning problem from an optimal control perspective, we establish three underlying principles that the learned representation should comprise: 1) accurate prediction in the observation space, 2) consistency between latent and observation space dynamics, and 3) low curvature in the latent space transitions. These principles naturally correspond to a loss function that consists of three terms: prediction, consistency, and curvature (PCC). Crucially, to make PCC tractable, we derive an amortized variational bound for the PCC loss function. Extensive experiments on benchmark domains demonstrate that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance. Further ablation studies give support to the importance of all three PCC components for learning a good latent space for control.""","""This paper studies optimal control with low-dimensional representation. The paper presents interesting progress, although I urge the authors to address all issues raised by reviewers in their revisions."""
804,"""Semantic Hierarchy Emerges in the Deep Generative Representations for Scene Synthesis""","['Feature visualization', 'feature interpretation', 'generative models']","""Despite the success of Generative Adversarial Networks (GANs) in image synthesis, there lacks enough understanding on what networks have learned inside the deep generative representations and how photo-realistic images are able to be composed from random noises. In this work, we show that highly-structured semantic hierarchy emerges from the generative representations as the variation factors for synthesizing scenes. By probing the layer-wise representations with a broad set of visual concepts at different abstraction levels, we are able to quantify the causality between the activations and the semantics occurring in the output image. Such a quantification identifies the human-understandable variation factors learned by GANs to compose scenes. The qualitative and quantitative results suggest that the generative representations learned by GAN are specialized to synthesize different hierarchical semantics: the early layers tend to determine the spatial layout and configuration, the middle layers control the categorical objects, and the later layers finally render the scene attributes as well as color scheme. Identifying such a set of manipulatable latent semantics facilitates semantic scene manipulation.""","""The paper proposes to study what information is encoded in different layers of StyleGAN. The authors do so by training classifiers for different layers of latent codes and investigating whether changing the latent code changes the generated output in the expected fashion. The paper received borderline reviews with two weak accepts and one weak reject. Initially, the reviewers were more negative (with one reject, one weak reject, and one weak accept). After the rebuttal, the authors addressed most of the reviewer questions/concerns. Overall, the reviewers thought the results were interesting and appreciated the care the authors took in their investigations. The main concern of the reviewers is that the analysis is limited to only StyleGAN. It would be more interesting and informative if the authors applied their methodology to different GANs. Then they can analyze whether the methodology and findings holds for other types of GANs as well. R1 notes that given the wide interest in StyleGAN-like models, the work maybe of interest to the community despite the limited investigation. The reviewers also point out the writing can be improved to be more precise. The AC agrees that the paper is mostly well written and well presented. However, there are limitations in what is achieved in the paper and it would be of limited interest to the community. The AC recommends that the authors consider improving their work, potentially broadening their investigation to other GAN architectures, and resubmit to an appropriate venue."""
805,"""Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks""","['Trustworthy Machine Learning', 'Adversarial Robustness', 'Inference Principle', 'Mixup']","""It has been widely recognized that adversarial examples can be easily crafted to fool deep networks, which mainly root from the locally non-linear behavior nearby input examples. Applying mixup in training provides an effective mechanism to improve generalization performance and model robustness against adversarial perturbations, which introduces the globally linear behavior in-between training examples. However, in previous work, the mixup-trained models only passively defend adversarial attacks in inference by directly classifying the inputs, where the induced global linearity is not well exploited. Namely, since the locality of the adversarial perturbations, it would be more efficient to actively break the locality via the globality of the model predictions. Inspired by simple geometric intuition, we develop an inference principle, named mixup inference (MI), for mixup-trained models. MI mixups the input with other random clean samples, which can shrink and transfer the equivalent perturbation if the input is adversarial. Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness for the models trained by mixup and its variants.""","""This paper proposed a mixup inference (MI) method, for mixup-trained models, to better defend adversarial attacks. The idea is novel and is proved to be effective on CIFAR-10 and CIFAR-100. All reviewers and the AC agree to accept the paper."""
806,"""GRAPH ANALYSIS AND GRAPH POOLING IN THE SPATIAL DOMAIN""","['Graph Neural Network', 'Graph Classification', 'Graph Pooling', 'Graph Embedding']","""The spatial convolution layer which is widely used in the Graph Neural Networks (GNNs) aggregates the feature vector of each node with the feature vectors of its neighboring nodes. The GNN is not aware of the locations of the nodes in the global structure of the graph and when the local structures corresponding to different nodes are similar to each other, the convolution layer maps all those nodes to similar or same feature vectors in the continuous feature space. Therefore, the GNN cannot distinguish two graphs if their difference is not in their local structures. In addition, when the nodes are not labeled/attributed the convolution layers can fail to distinguish even different local structures. In this paper, we propose an effective solution to address this problem of the GNNs. The proposed approach leverages a spatial representation of the graph which makes the neural network aware of the differences between the nodes and also their locations in the graph. The spatial representation which is equivalent to a point-cloud representation of the graph is obtained by a graph embedding method. Using the proposed approach, the local feature extractor of the GNN distinguishes similar local structures in different locations of the graph and the GNN infers the topological structure of the graph from the spatial distribution of the locally extracted feature vectors. Moreover, the spatial representation is utilized to simplify the graph down-sampling problem. A new graph pooling method is proposed and it is shown that the proposed pooling method achieves competitive or better results in comparison with the state-of-the-art methods. ""","""The authors identify a limitation of aggregating GNNs, which is that global structure can be mostly lost. They propose a method which combines a graph embedding with the spatial convolution GNN and show that the resulting GNN can better distinguish between similar local structures. The reviewers were mixed in their scores. The proposed approach is clearly motivated and justified and may be relelvant for some graphnet researchers, but the approach is only applicable in some circumstances - in other cases it may be desirable to ignore global structure. This, plus the high computational complexity of the proposed approach, mean that the significance is weaker. Overall the reviewers felt that the contribution was not significant enough and that the results were not statistically convincing. Decision is to reject."""
807,"""SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards""","['Imitation Learning', 'Reinforcement Learning']","""Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of r=+1 for matching the demonstrated action in a demonstrated state, and a constant reward of r=0 for all other behavior. Our method, which we call soft Q imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo. This paper is a proof of concept that illustrates how a simple imitation method based on RL with constant rewards can be as effective as more complex methods that use learned rewards.""","""The authors present a simple alternative to adversarial imitation learning methods like GAIL that is potentially less brittle, and can skip learning a reward function, instead learning an imitation policy directly. Their method has a close relationship with behavioral cloning, but overcomes some of the disadvantages of BC by encouraging the agent via reward to return to demonstration states if it goes out of distribution. The reviewers agree that overcoming the difficulties of both BC and adversarial imitation is an important contribution. Additionally, the authors reasonably addressed the majority of the minor concerns that the reviewers had. Therefore, I recommend for this paper to be accepted."""
808,"""Improved Training Speed, Accuracy, and Data Utilization via Loss Function Optimization""","['metalearning', 'evolutionary computation', 'loss functions', 'optimization', 'genetic programming']","""As the complexity of neural network models has grown, it has become increasingly important to optimize their design automatically through metalearning. Methods for discovering hyperparameters, topologies, and learning rate schedules have lead to significant increases in performance. This paper shows that loss functions can be optimized with metalearning as well, and result in similar improvements. The method, Genetic Loss-function Optimization (GLO), discovers loss functions de novo, and optimizes them for a target task. Leveraging techniques from genetic programming, GLO builds loss functions hierarchically from a set of operators and leaf nodes. These functions are repeatedly recombined and mutated to find an optimal structure, and then a covariance-matrix adaptation evolutionary strategy (CMA-ES) is used to find optimal coefficients. Networks trained with GLO loss functions are found to outperform the standard cross-entropy loss on standard image classification tasks. Training with these new loss functions requires fewer steps, results in lower test error, and allows for smaller datasets to be used. Loss function optimization thus provides a new dimension of metalearning, and constitutes an important step towards AutoML.""","""This paper proposes a GA-based method for optimizing the loss function a model is trained on to produce better models (in terms of final performance). The general consensus from the reviewers is that the paper, while interesting, dedicates too much of its content to analyzing one such discovered loss (the Baikal loss), and that the experimental setting (MNIST and Cifar10) is too basic to be conclusive. It seems this paper can be so significantly improved with some further and larger scale experiments that it would be wrong to prematurely recommend acceptance. My recommendation is that the authors consider the reviewer feedback, run the suggested further experiments, and are hopefully in the position to submit a significantly stronger version of this paper to a future conference."""
809,"""Distribution-Guided Local Explanation for Black-Box Classifiers""","['explanation', 'cnn', 'saliency map']","""Existing local explanation methods provide an explanation for each decision of black-box classifiers, in the form of relevance scores of features according to their contributions. To obtain satisfying explainability, many methods introduce ad hoc constraints into the classification loss to regularize these relevance scores. However, the large information gap between the classification loss and these constraints increases the difficulty of tuning hyper-parameters. To bridge this gap, in this paper we present a simple but effective mask predictor. Specifically, we model the above constraints with a distribution controller, and integrate it with a neural network to directly guide the distribution of relevance scores. The benefit of this strategy is to facilitate the setting of involved hyper-parameters, and enable discriminative scores over supporting features. The experimental results demonstrate that our method outperforms others in terms of faithfulness and explainability. Meanwhile, it also provides effective saliency maps for explaining each decision. ""","""This paper proposed a method to estimate the instance-wise saliency map for image classification, for the purpose of improving the faithfulness of the explainer. Based on the U-net, two modifications are proposed in this work. While reviewer #3 is overall positive about this work, both Reviewer #1 and #2 rated weak reject and raised a number of concerns. The major concerns include the modifications either already exist or suffer potential issue. Reviewer #2 considered that the contributions are not enough for ICLR, and the performance improvement is marginal. The authors provided detailed responses to the reviewers concerns, which help to make the paper stronger, but did not change the rating. Given the concerns raised by the reviewers, the ACs agree that this paper can not be accepted at its current state."""
810,"""Encoder-decoder Network as Loss Function for Summarization""","['encoder-decoder', 'summarization', 'loss functions']","""We present a new approach to defining a sequence loss function to train a summarizer by using a secondary encoder-decoder as a loss function, alleviating a shortcoming of word level training for sequence outputs. The technique is based on the intuition that if a summary is a good one, it should contain the most essential information from the original article, and therefore should itself be a good input sequence, in lieu of the original, from which a summary can be generated. We present experimental results where we apply this additional loss function to a general abstractive summarizer on a news summarization dataset. The result is an improvement in the ROUGE metric and an especially large improvement in human evaluations, suggesting enhanced performance that is competitive with specialized state-of-the-art models.""","""This paper presents an encoder-decoder based architecture to generate summaries. The real contribution of the paper is to use a recoder matrix which takes the output from an existing encoder-decoder network and tries to generate the reference summary again. The output here is basically the softmax layer produced by the first encoder-decoder network which then goes through a feed-forward layer before being fed as embeddings into the recoder. So, since there is no discretization, the whole model can be trained jointly. (the original loss of the first encoder-decoder model is used as well anyway). I agree with the reviewers here, that this whole model can in fact be viewed as a large encoder-decoder model, its not really clear where the improvements come from. Can you just increase the number of parameters of the original encoder-decoder model and see if it performs as good as the encoder-decoder + recoder? The paper also does not achieve SOTA on the task as there are other RL based papers which have been shown to perform better, so the choice of the recorder model is also not empirically justified. I recommend rejection of the paper in its current form."""
811,"""Effects of Linguistic Labels on Learned Visual Representations in Convolutional Neural Networks: Labels matter!""","['category learning', 'visual representation', 'linguistic labels', 'human behavior prediction']","""We investigated the changes in visual representations learnt by CNNs when using different linguistic labels (e.g., trained with basic-level labels only, superordinate-level only, or both at the same time) and how they compare to human behavior when asked to select which of three images is most different. We compared CNNs with identical architecture and input, differing only in what labels were used to supervise the training. The results showed that in the absence of labels, the models learn very little categorical structure that is often assumed to be in the input. Models trained with superordinate labels (vehicle, tool, etc.) are most helpful in allowing the models to match human categorization, implying that human representations used in odd-one-out tasks are highly modulated by semantic information not obviously present in the visual input.""","""This paper explores training CNNs with labels of differing granularity, and finds that the types of information learned by the method depends intimately on the structure of the labels provided. Thought the reviewers found value in the paper, they felt there were some issues with clarity, and didn't think the analyses were as thorough as they could be. I thank the authors for making changes to their paper in light of the reviews, and hope that they feel their paper is stronger because of the review process."""
812,"""Distributed Training Across the World""","['Distributed Training', 'Bandwidth']","""Traditional synchronous distributed training is performed inside a cluster, since it requires high bandwidth and low latency network (e.g. 25Gb Ethernet or Infini-band). However, in many application scenarios, training data are often distributed across many geographic locations, where physical distance is long and latency is high. Traditional synchronous distributed training cannot scale well under such limited network conditions. In this work, we aim to scale distributed learning un-der high-latency network. To achieve this, we propose delayed and temporally sparse (DTS) update that enables synchronous training to tolerate extreme network conditions without compromising accuracy. We benchmark our algorithms on servers deployed across three continents in the world: London (Europe), Tokyo(Asia), Oregon (North America) and Ohio (North America). Under such challenging settings, DTS achieves90speedup over traditional methods without loss of accuracy on ImageNet.""","""The paper introduces a distributed algorithm for training deep nets in clusters with high-latency (i.e. very remote) nodes. While the motivation and clarity are the strengths of the paper, the reviewers have some concerns regarding novelty and insufficient theoretical analysis. """
813,"""Zero-shot task adaptation by homoiconic meta-mapping""","['Meta-mapping', 'zero-shot', 'task adaptation', 'task representation', 'meta-learning']","""How can deep learning systems flexibly reuse their knowledge? Toward this goal, we propose a new class of challenges, and a class of architectures that can solve them. The challenges are meta-mappings, which involve systematically transforming task behaviors to adapt to new tasks zero-shot. We suggest that the key to achieving these challenges is representing the task being performed in such a way that this task representation is itself transformable. We therefore draw inspiration from functional programming and recent work in meta-learning to propose a class of Homoiconic Meta-Mapping (HoMM) approaches that represent data points and tasks in a shared latent space, and learn to infer transformations of that space. HoMM approaches can be applied to any type of machine learning task, including supervised learning and reinforcement learning. We demonstrate the utility of this perspective by exhibiting zero-shot remapping of behavior to adapt to new tasks.""","""The authors presents a method for adapting models to new tasks in a zero shot manner using learned meta-mappings. The reviewers largely agreed that this is an interesting and creative research direction. However, there was also agreement that the writing was unclear in many sections, that the appropriate metalearning baselines were not compared to, and that the power of the method was unclear due to overly simplistic domains. While the baseline issue was mostly cleared up in rebuttal and discussion, the other issues remain. Thus, I recommend rejection at this time."""
814,"""MLModelScope: A Distributed Platform for ML Model Evaluation and Benchmarking at Scale""","['Evaluation', 'Scalable', 'Repeatable', 'Fair', 'System']","""Machine Learning (ML) and Deep Learning (DL) innovations are being introduced at such a rapid pace that researchers are hard-pressed to analyze and study them. The complicated procedures for evaluating innovations, along with the lack of standard and efficient ways of specifying and provisioning ML/DL evaluation, is a major ""pain point"" for the community. This paper proposes MLModelScope, an open-source, framework/hardware agnostic, extensible and customizable design that enables repeatable, fair, and scalable model evaluation and benchmarking. We implement the distributed design with support for all major frameworks and hardware, and equip it with web, command-line, and library interfaces. To demonstrate MLModelScope's capabilities we perform parallel evaluation and show how subtle changes to model evaluation pipeline affects the accuracy and HW/SW stack choices affect performance.""","""The paper proposes a platform for benchmarking, and in particular hardware-agnostic evaluation of machine learning models. This is an important problem as our field strives for more reproducibility. This was a very confusing paper to discuss and review, since most of the reviewers (and myself) do not know much about the area. Two of the reviewers found the paper contributions sufficient to be (weakly) accepted. The third reviewer had many issues with the work and engaged in a lengthy debate with the authors, but there was strong disagreement regarding their understanding of the scope of the paper as a Tools/Systems submission. Given the lack of consensus, I must recommend rejection at this time, but highly encourage the authors to take the feedback into account and resubmit to a future venue."""
815,"""Learning to Recognize the Unseen Visual Predicates""","['Visual Relationship Detection', 'Scene Graph Generation', 'Knowledge', 'Zero-shot Learning']","""Visual relationship recognition models are limited in the ability to generalize from finite seen predicates to unseen ones. We propose a new problem setting named predicate zero-shot learning (PZSL): learning to recognize the predicates without training data. It is unlike the previous zero-shot learning problem on visual relationship recognition which learns to recognize the unseen relationship triplets () but requires all components (subject, predicate, and object) to be seen in the training set. For the PZSL problem, however, the models are expected to recognize the diverse even unseen predicates, which is meaningful for many downstream high-level tasks, like visual question answering, to handle complex scenes and open questions. The PZSL is a very challenging task since the predicates are very abstract and follow an extreme long-tail distribution. To address the PZSL problem, we present a model that performs compatibility learning leveraging the linguistic priors from the corpus and knowledge base. An unbalanced sampled-softmax is further developed to tackle the extreme long-tail distribution of predicates. Finally, the experiments are conducted to analyze the problem and verify the effectiveness of our methods. The dataset and source code will be released for further study. ""","""The paper proposes a new problem setting of predicate zero-shot learning for visual relation recognition for the setting when some of the predicates are missing, and a model that is able to address it. All reviewers agreed that the problem setting is interesting and important, but had reservations about the proposed model. In particular, the reviewers were concerned that it is too simple of a step from existing methods. One reviewer also pointed towards potential comparisons with other zero-shot methods. Following that discussion, I recommend rejection at this time but highly encourage the authors to take the feedback into account and resubmit to another venue."""
816,"""SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models""",[],"""Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series. If parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize the variance of this estimator. We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost. This estimator also allows use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions.""","""The paper proposes a new way to train latent variable models. The standard way of training using the ELBO produces biased estimates for many quantities of interest. The authors introduce an unbiased estimate for the log marginal probability and its derivative to address this. The new estimator is based on the importance weighted autoencoder, correcting the remaining bias using Russian roulette sampling. The model is empirically shown to give better test set likelihood, and can be used in tasks where unbiased estimates are needed. All reviewers are positive about the paper. Support for the main claims is provided through empirical and theoretical results. The reviewers had some minor comments, especially about the theory, which the authors have addressed with additional clarification, which was appreciated by the reviewers. The paper was deemed to be well organized. There was some lack of clarity about variance issues and bias from gradient clipping, which have been addressed by the authors in additional explanation as well as an additional plot. The approach is novel and addresses a very relevant problem for the ICLR community: optimizing latent variable models, especially in situations where unbiased estimates are required. The method results in marginally better optimization compared to IWAE with a much smaller average number of samples. The method was deemed by the reviewers to open up new possibilities such as entropy minimization. """
817,"""Enhancing the Transformer with explicit relational encoding for math problem solving""","['Tensor Product Representation', 'Transformer', 'Mathematics Dataset', 'Attention']","""We incorporate Tensor-Product Representations within the Transformer in order to better support the explicit representation of relation structure. Our Tensor-Product Transformer (TP-Transformer) sets a new state of the art on the recently-introduced Mathematics Dataset containing 56 categories of free-form math word-problems. The essential component of the model is a novel attention mechanism, called TP-Attention, which explicitly encodes the relations between each Transformer cell and the other cells from which values have been retrieved by attention. TP-Attention goes beyond linear combination of retrieved values, strengthening representation-building and resolving ambiguities introduced by multiple layers of regular attention. The TP-Transformer's attention maps give better insights into how it is capable of solving the Mathematics Dataset's challenging problems. Pretrained models and code will be made available after publication.""","""This paper proposes a change in the attention mechanism of Transformers yielding the so-called ""Tensor-Product Transformer"" (TP-Transformer). The main idea is to capture filler-role relationships by incorporating a Hadamard product of each value vector representation (after attention) with a relation vector, for every attention head at every layer. The resulting model achieves SOTA on the Mathematics Dataset. Attention maps are shown in the analysis to give insights into how TP-Transformer is capable of solving the Mathematics Dataset's challenging problems. While the modified attention mechanism is interesting and the analysis is insightful (and improved with the addition of an experiment in NMT after the rebuttal), the reviewers expressed some concerns in the discussion stage: 1. The comparison to baseline is not fair (not to mention the 8.24% claim in conclusion). The proposed approach adds 5 million parameters to a normal transformer (table 1, 5M is a lot!), but in terms of interpolation, it only improves 3% (extrapolation improves 0.5%) at 700k steps. The rebuttal claimed that it is fair as long as the hidden size is comparable, but I don't think that's a fair argument. I suspect that increasing the feedforward hidden size (d_ff) of a normal transformer to match parameters (and add #training steps to match #train steps) might change the conclusion. 2. The new experiment on WMT further convinces me that the theoretical motivation does not hold in practice. Even with the added few million more parameters, it only improved BLEU by 0.05 (we usually consider >0.5 as significant or non-random). This might be because the feedforward and non-linearity can disambiguate as well. I also found the name TP-Transformer a bit misleading, since what is proposed and tested here is the Hadamard product (i.e. only the diagonal part of the tensor product). I recommend resubmitting an improved version of this paper with stronger empirical evidence of outperformance of regular Transformers with comparable number of parameters."""
818,"""Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks""","['small data', 'neural tangent kernel', 'UCI database', 'few-shot learning', 'kernel SVMs', 'deep learning theory', 'kernel design']","""Recent research shows that the following two models are equivalent: (a) infinitely wide neural networks (NNs) trained under l2 loss by gradient descent with infinitesimally small learning rate (b) kernel regression with respect to so-called Neural Tangent Kernels (NTKs) (Jacot et al., 2018). An efficient algorithm to compute the NTK, as well as its convolutional counterparts, appears in Arora et al. (2019a), which allowed studying performance of infinitely wide nets on datasets like CIFAR-10. However, super-quadratic running time of kernel methods makes them best suited for small-data tasks. We report results suggesting neural tangent kernels perform strongly on low-data tasks. 1. On a standard testbed of classification/regression tasks from the UCI database, NTK SVM beats the previous gold standard, Random Forests (RF), and also the corresponding finite nets. 2. On CIFAR-10 with 10–640 training samples, Convolutional NTK consistently beats ResNet-34 by 1% - 3%. 3. On VOC07 testbed for few-shot image classification tasks on ImageNet with transfer learning (Goyal et al., 2019), replacing the linear SVM currently used with a Convolutional NTK SVM consistently improves performance. 4. Comparing the performance of NTK with the finite-width net it was derived from, NTK behavior starts at lower net widths than suggested by theoretical analysis (Arora et al., 2019a). NTK's efficacy may trace to lower variance of output.""","""This paper carries out extensive experiments with the Neural Tangent Kernel (NTK) -- kernel methods based on infinitely wide neural nets -- on small-data tasks. I recommend acceptance."""
819,"""Combining Q-Learning and Search with Amortized Value Estimates""","['model-based RL', 'Q-learning', 'MCTS', 'search']","""We introduce ""Search with Amortized Value Estimates"" (SAVE), an approach for combining model-free Q-learning with model-based Monte-Carlo Tree Search (MCTS). In SAVE, a learned prior over state-action values is used to guide MCTS, which estimates an improved set of state-action values. The new Q-estimates are then used in combination with real experience to update the prior. This effectively amortizes the value computation performed by MCTS, resulting in a cooperative relationship between model-free learning and model-based search. SAVE can be implemented on top of any Q-learning agent with access to a model, which we demonstrate by incorporating it into agents that perform challenging physical reasoning tasks and Atari. SAVE consistently achieves higher rewards with fewer training steps, and---in contrast to typical model-based search approaches---yields strong performance with very small search budgets. By combining real experience with information computed during search, SAVE demonstrates that it is possible to improve on both the performance of model-free learning and the computational cost of planning.""","""This paper proposes Search with Amortized Value Estimates (SAVE) that combines Q-learning and MCTS. SAVE uses the estimated Q-values obtained by MCTS at the root node to update the value network, and uses the learned value function to guide MCTS. The rebuttal addressed the reviewers' concerns, and they are now all positive about the paper. I recommend acceptance."""
820,"""Exploration Based Language Learning for Text-Based Games""","['Text-Based Games', 'Exploration', 'Language Learning']","""This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games. Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text. These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents. Moreover, they provide a learning environment in which these skills can be acquired through interactions with an environment rather than using fixed corpora. One aspect that makes these games particularly challenging for learning agents is the combinatorially large action space. Existing methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions. In this work, we propose to use the exploration approach of Go-Explore (Ecoffet et al., 2019) for solving text-based games. More specifically, in an initial exploration phase, we first extract trajectories with high rewards, after which we train a policy to solve the game by imitating these trajectories. Our experiments show that this approach outperforms existing solutions in solving text-based games, and it is more sample efficient in terms of the number of interactions with the environment. Moreover, we show that the learned policy can generalize better than existing solutions to unseen games without using any restriction on the action space.""","""The paper applies the Go-Explore algorithm to text-based games and shows that it is able to solve text-based game with better sample efficiency and generalization than some alternatives. The Go-Explore algorithm is used to extract high reward trajectories that can be used to train a policy using a seq2seq model that maps observations to actions. Paper received 1 weak accept and 2 weak rejects. Initially the paper received three weak rejects, with the author response and revision convincing one reviewer to increase their score to a weak accept. Overall, the reviewers liked the paper and thought that it was well-written with good experiments. However, there is concern that the paper lacks technical novelty and would not be of interest to the broader ICLR community (beyond those that are interested in text-based games). Another concern reviewers expressed was that the proposed method was only compared against baselines with simple exploration strategies and that baselines with more advanced exploration strategies should be included. The AC agrees with the above concerns and encourages the authors to improve their paper based on the reviewer feedback, and to consider resubmitting to a venue that is more focused on text-based games (perhaps an NLP conference)."""
821,"""PNAT: Non-autoregressive Transformer by Position Learning""",['Text Generation'],"""Non-autoregressive generation is a new paradigm for text generation. Previous work hardly considers explicitly modeling the positions of generated words. However, position modeling of output words is an essential problem in non-autoregressive text generation. In this paper, we propose PNAT, which explicitly models positions of output words as latent variables in text generation. The proposed PNAT is simple yet effective. Experimental results show that PNAT gives very promising results in machine translation and paraphrase generation tasks, outperforming many strong baselines.""","""This paper presents a non-autoregressive NMT model which predicts the positions of the words to be produced as a latent variable, in addition to predicting the words. This is a novel idea in a field where several other papers are trying to do similar things, and it obtains good results on benchmark tasks. The major concern is the lack of systematic comparison with the FlowSeq paper, which seems to have been published before the ICLR submission deadline. The reviewers are still not convinced by the empirical performance comparison as well as the speed comparisons. With some more work this could be a good contribution. As of now, I am recommending a Rejection."""
822,"""Provable Representation Learning for Imitation Learning via Bi-level Optimization""","['imitation learning', 'representation learning', 'multitask learning', 'theory', 'behavioral cloning', 'imitation from observations alone', 'reinforcement learning']","""A common strategy in modern learning systems is to learn a representation which is useful for many tasks, a.k.a. representation learning. We study this strategy in the imitation learning setting where multiple expert trajectories are available. We formulate representation learning as a bi-level optimization problem where the ""outer"" optimization tries to learn the joint representation and the ""inner"" optimization encodes the imitation learning setup and tries to learn task-specific parameters. We instantiate this framework for the settings of behavior cloning and imitation from observations alone. Using our framework, we prove that representation learning can reduce the sample complexity of imitation learning in both settings. We also provide proof-of-concept experiments to verify our theoretical findings.""","""This paper proposes a methodology for learning a representation given multiple demonstrations, by optimizing the representation as well as the learned policy parameters. The paper includes some theoretical results showing that this is a sensible thing to do, and an empirical evaluation. Post-discussion, the reviewers (and I!) agreed that this is an interesting approach that has a lot of promise. But there was still concern about the empirical evaluation and the writing. Hence I am recommending rejection."""
823,"""XD: Cross-lingual Knowledge Distillation for Polyglot Sentence Embeddings""","['cross-lingual transfer', 'sentence embeddings', 'polyglot language models', 'knowledge distillation', 'natural language inference', 'embedding alignment', 'embedding mapping']","""Current state-of-the-art results in multilingual natural language inference (NLI) are based on tuning XLM (a pre-trained polyglot language model) separately for each language involved, resulting in multiple models. We reach significantly higher NLI results with a single model for all languages via multilingual tuning. Furthermore, we introduce cross-lingual knowledge distillation (XD), where the same polyglot model is used both as teacher and student across languages to improve its sentence representations without using the end-task labels. When used alone, XD beats multilingual tuning for some languages and the combination of them both results in a new state-of-the-art of 79.2% on the XNLI dataset, surpassing the previous result by absolute 2.5%. The models and code for reproducing our experiments will be made publicly available after de-anonymization.""","""This paper proposes a method for transferring an NLP model trained on one language to a new language, without using labeled data in the new language. Reviewers were split on their recommendations, but the reviews collectively raised a number of concerns which, together, make me uncomfortable accepting the paper. Reviewers were not convinced by the value of the experimental setting described in the paper: at least in the experiments conducted here, the claim that the model is distinctively effective depends on ruling out a large class of models arbitrarily. It would likely be valuable to find a concrete task/dataset/language combination that more closely aligns with the motivations for this work, and to evaluate whether the proposed method is genuinely the most effective practical option in that setting. Further, the reviewers raise a number of points involving baseline implementations, language families, and other issues, that collectively make me doubt that the paper is fully sound in its current form."""
824,"""Distributed Bandit Learning: Near-Optimal Regret with Efficient Communication""","['Theory', 'Bandit Algorithms', 'Communication Efficiency']","""We study the problem of regret minimization for distributed bandit learning, in which $M$ agents work collaboratively to minimize their total regret under the coordination of a central server. Our goal is to design communication protocols with near-optimal regret and little communication cost, which is measured by the total amount of transmitted data. For distributed multi-armed bandits, we propose a protocol with near-optimal regret and only $O(M\log(MK))$ communication cost, where $K$ is the number of arms. The communication cost is independent of the time horizon $T$, has only logarithmic dependence on the number of arms, and matches the lower bound except for a logarithmic factor. For distributed $d$-dimensional linear bandits, we propose a protocol that achieves near-optimal regret and has communication cost of order $O\left(\left(Md+d\log\log d\right)\log T\right)$, which has only logarithmic dependence on $T$.""","""This paper tackles the problem of regret minimization in a multi-agent bandit problem, where distributed bandit learning algorithms collaborate in order to minimize their total regret. More specifically, the work focuses on efficient communication protocols, where the communication cost is measured by the total amount of transmitted data. The goal is therefore to design protocols with little communication cost. The authors first establish lower bounds on the communication cost, and then introduce an algorithm with provable near-optimal regret. The only concern with the paper is that ICLR may not be the appropriate venue given that this work lacks representation learning contributions. However, all reviewers being otherwise positive about the quality and contributions of this work, I would recommend acceptance."""
825,"""Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization""","['Transfer Learning', 'Meta Learning', 'Bayesian Optimization', 'Reinforcement Learning']","""Transferring knowledge across tasks to improve data-efficiency is one of the open key challenges in the field of global black-box optimization. Readily available algorithms are typically designed to be universal optimizers and, therefore, often suboptimal for specific tasks. We propose a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing our algorithm to utilize the proven generalization capabilities of Gaussian processes. Using reinforcement learning to meta-train an acquisition function (AF) on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency. We present experiments on a simulation-to-real transfer task as well as on several synthetic functions and on two hyperparameter search problems. The results show that our algorithm (1) automatically identifies structural properties of objective functions from available source tasks or simulations, (2) performs favourably in settings with both scarce and abundant source data, and (3) falls back to the performance level of general AFs if no particular structure is present.""","""This paper explores the idea of using meta-learning for acquisition functions. It is an interesting and novel research direction with promising results. The paper could be strengthened by adding more insights about the new acquisition function and performing more comparisons e.g. to Chen et al. 2017. But in any case, the current form of the paper should already be of high interest to the community """
826,"""Effective Mechanism to Mitigate Injuries During NFL Plays ""","['Concussion', 'American football', 'Predictive modelling', 'Injuries', 'NFL Plays', 'Optimization']","""NFL (American football), which is regarded as the premier sports icon of America, has been severely accused in recent years of exposing players to dangerous injuries, an escalating crisis as players' lives are increasingly at risk. Concussions, which refer to the serious brain traumas experienced during the passage of NFL play, have displayed a dramatic rise in the recent seasons, culminating in an alarming rate in 2017/18. Acknowledging the potential risk, the NFL has been trying to fight via the NeuroIntel AI mechanism as well as modifying existing game rules and risky play practices to reduce the rate of concussions. As a remedy, we are suggesting an effective mechanism to extensively analyse the potential concussion risks by adopting predictive analysis to project injury risk percentage per each play and positional impact analysis to suggest safer team formation pairs to lessen injuries, offering a comprehensive study on NFL injury analysis. The proposed data analytical approach differentiates itself from other similar approaches that were focused only on descriptive analysis rather than going for a bigger context with predictive modelling and formation pairs mining that would assist in modifying existing rules to tackle injury concerns. The predictive model, which works with real-time inputs from a Kafka stream processor, and the identification of risky formation pairs by designing an FP-Matrix, make this a far-reaching solution for analysing injury data on various grounds wherever applicable.""","""All reviewers recommend reject, and there is no rebuttal."""
827,"""Iterative Deep Graph Learning for Graph Neural Networks""","['deep learning', 'graph neural networks', 'graph learning']","""In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly learning graph structure and graph embeddings. We first cast the graph structure learning problem as a similarity metric learning problem and leverage an adapted graph regularization for controlling the smoothness, connectivity and sparsity of the generated graph. We further propose a novel iterative method for searching for a hidden graph structure that augments the initial graph structure. Our iterative method dynamically stops when the learned graph structure is close enough to the ground-truth graph. Our extensive experiments demonstrate that the proposed IDGL model can consistently outperform or match state-of-the-art baselines in terms of both classification accuracy and computational time. The proposed approach can cope with both transductive training and inductive training. ""","""The submission proposes a method for learning a graph structure and node embeddings through an iterative process. Smoothness and sparsity are both optimized in this approach. The iterative method has a stopping mechanism based on distance from a ground truth. The concerns of the reviewers were about scalability and novelty. Since other methods have used the same costs for optimization, as well as other aspects of this approach, there is little contribution other than the iterative process. The improvement over LDS, the most similar approach, is relatively minor. Although the paper is promising, more work is required to establish the contributions of the method. Recommendation is for rejection. """
828,"""Well-Read Students Learn Better: On the Importance of Pre-training Compact Models""","['NLP', 'self-supervised learning', 'language model pre-training', 'knowledge distillation', 'BERT', 'compact models']","""Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training. Due to the cost of applying such models to down-stream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019; Sanh, 2019). However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked. In this paper, we first show that pre-training remains important in the context of smaller architectures, and fine-tuning pre-trained compact models can be competitive to more elaborate methods proposed in concurrent work. Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation. The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements. Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data. One surprising observation is that they have a compound effect even when sequentially applied on the same data. To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available.""",""" Though the reviewers thought the ideas in this paper were interesting, they questioned the importance and magnitude of the contribution. Though it is important to share empirical results, the reviewers were not sure that there was enough for this paper to be accepted."""
829,"""Representing Model Uncertainty of Neural Networks in Sparse Information Form""","['Model Uncertainty', 'Neural Networks', 'Sparse representation']","""This paper addresses the problem of representing a system's belief using multi-variate normal distributions (MND) where the underlying model is based on a deep neural network (DNN). The major challenge with DNNs is the computational complexity that is needed to obtain model uncertainty using MNDs. To achieve a scalable method, we propose a novel approach that expresses the parameter posterior in sparse information form. Our inference algorithm is based on a novel Laplace Approximation scheme, which involves a diagonal correction of the Kronecker-factored eigenbasis. As this makes the inversion of the information matrix intractable -- an operation that is required for full Bayesian analysis -- we devise a low-rank approximation of this eigenbasis and a memory-efficient sampling scheme. We provide both a theoretical analysis and an empirical evaluation on various benchmark data sets, showing the superiority of our approach over existing methods.""","""This paper presents a variant of recently developed Kronecker-factored approximations to BNN posteriors. It corrects the diagonal entries of the approximate Hessian, and in order to make this scalable, approximates the Kronecker factors as low-rank. The approach seems reasonable, and is a natural thing to try. The novelty is fairly limited, however, and the calculations are mostly routine. In terms of the experiments: it seems like it improved the Frobenius norm of the error, though it's not clear to me that this would be a good measure of practical effectiveness. On the toy regression experiment, it's hard for me to tell the difference from the other variational methods. It looks like it helped a bit in the quantitative comparisons, though the improvement over K-FAC doesn't seem significant enough to justify acceptance purely based on the results. Reviewers felt like there was a potentially useful idea here and didn't spot any serious red flags, but didn't feel like the novelty or the experimental results were enough to justify acceptance. I tend to agree with this assessment. """
830,"""Revisiting Gradient Episodic Memory for Continual Learning""",[],"""Gradient Episodic Memory (GEM) is an effective model for continual learning, where each gradient update for the current task is formulated as a quadratic program problem with inequality constraints that alleviate catastrophic forgetting of previous tasks. However, practical use of GEM is impeded by several limitations: (1) the data examples stored in the episodic memory may not be representative of past tasks; (2) the inequality constraints appear to be rather restrictive for competing or conflicting tasks; (3) the inequality constraints can only avoid catastrophic forgetting but cannot assure positive backward transfer. To address these issues, in this paper we aim at improving the original GEM model via three handy techniques without extra computational cost. Experiments on MNIST Permutations and incremental CIFAR100 datasets demonstrate that our techniques enhance the performance of GEM remarkably. On CIFAR100 the average accuracy is improved from 66.48% to 68.76%, along with the backward (knowledge) transfer growing from 1.38% to 4.03%.""","""This paper proposes an extension of Gradient Episodic Memory (GEM), namely support examples, soft gradient constraints, and positive backward transfer. The authors argue that experiments on MNIST and CIFAR show that the proposed method consistently improves over the original GEM. All three reviewers are not convinced by the experiments in the paper. R1 and R3 mentioned that the improvements over GEM appear to be small. R2 and R3 also have some concerns about the lack of results from multiple runs. R3 has questions about hyperparameter tuning. The authors also appear to have missed recent developments in this area (e.g., A-GEM). The authors did not provide a rebuttal to these concerns. I agree with the reviewers and recommend rejecting this paper."""
831,"""Learning to Prove Theorems by Learning to Generate Theorems""",[],"""We consider the task of automated theorem proving, a key AI task. Deep learning has shown promise for training theorem provers, but there are limited human-written theorems and proofs available for supervised learning. To address this limitation, we propose to learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover. Experiments on real-world tasks demonstrate that synthetic data from our approach significantly improves the theorem prover and advances the state of the art of automated theorem proving in Metamath.""","""This paper proposes to augment training data for theorem provers by learning a deep neural generator that generates data to train a prover, resulting in an improvement over the Holophrasm baseline prover. The results were restricted to one particular mathematical formalism -- Metamath, a limitation raised by one reviewer. All reviewers agree that it's an interesting method for addressing an important problem. However there were some concerns about the strength of the experimental results from R4 and R1. R4 in particular wanted to see results on more datasets, an assessment with which I agree. Although the authors argued vigorously against using other datasets, I am not convinced. For instance, they claim that other datasets do not afford the opportunity to generate new theorems, or the human proofs provided cannot be understood by an automatic prover. In their words, ""The idea of theorem generation can be applied to other systems beyond Metamath, but realizing it on another system is highly nontrivial. It can even involve new research challenges. In particular, due to large differences in logic foundations, grammar, inference rules, and benchmarking environments, the generation process, which is a key component of our approach, would be almost completely different for a new system. And the entire pipeline essentially needs to be re-designed and re-coded from scratch for a new formal system, which can require an unreasonable amount of engineering."" It sounds like they've essentially tailored their approach for this one dataset, which limits the generality of their approach, a limitation that was not discussed in the paper. There is also only one baseline considered, which renders their experimental findings rather weak. For these reasons, I think this work is not quite ready for publication at ICLR 2020, although future versions with stronger baselines and experiments could be quite impactful. """
832,"""The Geometry of Sign Gradient Descent""","['Sign gradient descent', 'signSGD', 'steepest descent', 'Adam']","""Sign gradient descent has become popular in machine learning due to its favorable communication cost in distributed optimization and its good performance in neural network training. However, we currently do not have a good understanding of which geometrical properties of the objective function determine the relative speed of sign gradient descent compared to standard gradient descent. In this work, we frame sign gradient descent as steepest descent with respect to the maximum norm. We review the steepest descent framework and the related concept of smoothness with respect to arbitrary norms. By studying the smoothness constant resulting from the $\ell_\infty$-geometry, we isolate properties of the objective which favor sign gradient descent relative to gradient descent. In short, we find two requirements on its Hessian: (i) some degree of ``diagonal dominance'' and (ii) the maximal eigenvalue being much larger than the average eigenvalue. We also clarify the meaning of a certain separable smoothness assumption used in previous analyses of sign gradient descent. Experiments verify the developed theory.""","""The paper is rejected based on unanimous reviews."""
833,"""Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network""","['Generalization error', 'compression based bound', 'local Rademacher complexity']","""One of the biggest issues in deep learning theory is the generalization ability of networks with huge model size. The classical learning theory suggests that overparameterized models cause overfitting. However, practically used large deep models avoid overfitting, which is not well explained by the classical approaches. To resolve this issue, several attempts have been made. Among them, the compression based bound is one of the promising approaches. However, the compression based bound can be applied only to a compressed network, and it is not applicable to the non-compressed original network. In this paper, we give a unified framework that can convert compression based bounds to those for non-compressed original networks. The bound gives an even better rate than the one for the compressed network by improving the bias term. By establishing the unified framework, we can obtain a data dependent generalization error bound which gives a tighter evaluation than the data independent ones. ""","""This paper has a few interesting contributions: (a) a bound for un-compressed networks in terms of the compressed network (this is in contrast to some prior work, which only gives bounds on the compressed network); (b) the use of local Rademacher complexity to try to squeeze as much as possible out of the connection; (c) an application of the bound to a specific interesting favorable condition, namely low-rank structure. As a minor suggestion, I'd like to recommend that the authors go ahead and use their allowed 10th body page!"""
834,"""Closed loop deep Bayesian inversion: Uncertainty driven acquisition for fast MRI""","['Deep Bayesian Inversion', 'accelerated MRI', 'uncertainty quantification', 'sampling mask design']","""This work proposes a closed-loop, uncertainty-driven adaptive sampling framework (CLUDAS) for accelerating magnetic resonance imaging (MRI) via deep Bayesian inversion. By closed-loop, we mean that our samples adapt in real-time to the incoming data. To our knowledge, we demonstrate the first generative adversarial network (GAN) based framework for posterior estimation over a continuum of sampling rates of an inverse problem. We use this estimator to drive the sampling for accelerated MRI. Our numerical evidence demonstrates that the variance estimate strongly correlates with the expected MSE improvement for different acceleration rates even with few posterior samples. Moreover, the resulting masks bring improvements to the state-of-the-art fixed and active mask designing approaches across MSE, posterior variance and SSIM on real undersampled MRI scans.""","""The author responses and notes to the AC are acknowledged. A fourth review was requested because this seemed like a tricky paper to review, given both the technical contribution and the application area. Overall, the reviewers were all in agreement in terms of score that the paper was just below borderline for acceptance. They found that the methodology seemed sensible and the application potentially impactful. However, a common thread was that the paper was hard to follow for non-experts on MRI and the reviewers weren't entirely convinced by the experiments (asking for additional experiments and comparison to Zhang et al.). The authors' comment on the challenge of implementing Zhang is acknowledged and it's unfortunate that cluster issues prevented additional experimental results. While ICLR certainly accepts application papers and particularly ones with interesting technical contribution in machine learning, given that the reviewers struggled to follow the paper through the application-specific language, it does seem like this isn't the right venue for the paper as written. Thus the recommendation is to reject. Perhaps a more application specific venue would be a better fit for this work. Otherwise, making the paper more accessible to the ML audience and providing experiments to justify the methodology beyond the application would make the paper much stronger."""
835,"""Acutum: When Generalization Meets Adaptability""","['optimization', 'momentum', 'adaptive gradient methods']","""In spite of its slow convergence, stochastic gradient descent (SGD) is still the most practical optimization method due to its outstanding generalization ability and simplicity. On the other hand, adaptive methods have attracted much attention from the optimization and machine learning communities, both for leveraging life-long information and for their deep and fundamental mathematical theory. Taking the best of both worlds is the most exciting and challenging question in the field of optimization for machine learning. In this paper, we take a small step towards this ultimate goal. We revisit existing adaptive methods from a novel point of view, which reveals a fresh understanding of momentum. Our new intuition empowers us to remove the second moments in Adam without the loss of performance. Based on our view, we propose a new method, named acute adaptive momentum (Acutum). To the best of our knowledge, Acutum is the first adaptive gradient method without second moments. Experimentally, we demonstrate that our method has a faster convergence rate than Adam/Amsgrad, and generalizes as well as SGD with momentum. We also provide a convergence analysis of our proposed method to complement our intuition. ""","""The paper addresses an important problem of finding a good trade-off between generalization and convergence speed of stochastic gradient methods for training deep nets. However, there is a consensus among the reviewers, even after rebuttals provided by the authors, that the contribution is somewhat limited and the paper may require additional work before it is ready to be published."""
836,"""DropEdge: Towards Deep Graph Convolutional Networks on Node Classification""","['graph neural network', 'over-smoothing', 'over-fitting', 'dropedge', 'graph convolutional networks']","""Over-fitting and over-smoothing are two main obstacles of developing deep Graph Convolutional Networks (GCNs) for node classification. In particular, over-fitting weakens the generalization ability on small dataset, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. This paper proposes DropEdge, a novel and flexible technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graph at each training epoch, acting like a data augmenter and also a message passing reducer. Furthermore, we theoretically demonstrate that DropEdge either reduces the convergence speed of over-smoothing or relieves the information loss caused by it. More importantly, our DropEdge is a general skill that can be equipped with many other backbone models (e.g. GCN, ResGCN, GraphSAGE, and JKNet) for enhanced performance. Extensive experiments on several benchmarks verify that DropEdge consistently improves the performance on a variety of both shallow and deep GCNs. The effect of DropEdge on preventing over-smoothing is empirically visualized and validated as well. Codes are released on pseudo-url.""","""The paper proposes a very simple but thoroughly evaluated and investigated idea for improving generalization in GCNs. Though the reviews are mixed, and in the post-rebuttal discussion the two negative reviewers stuck to their ratings, the area chair feels that there are no strong grounds for rejection in the negative reviews. Accept."""
837,"""Symmetry and Systematicity""","['symmetry', 'systematicity', 'convolution', 'symbols', 'generalisation']","""We argue that symmetry is an important consideration in addressing the problem of systematicity and investigate two forms of symmetry relevant to symbolic processes. We implement this approach in terms of convolution and show that it can be used to achieve effective generalisation in three toy problems: rule learning, composition and grammar learning.""","""Thanks for clarifying several issues raised by the reviewers, which helped us understand the paper. In the end, we decided not to accept this paper due to the weakness of its contribution. I hope the updated comments by the reviewers help you strengthen your paper for a potential future submission."""
838,"""Multiplicative Interactions and Where to Find Them""","['multiplicative interactions', 'hypernetworks', 'attention']","""We explore the role of multiplicative interaction as a unifying framework to describe a range of classical and modern neural network architectural motifs, such as gating, attention layers, hypernetworks, and dynamic convolutions amongst others. Multiplicative interaction layers as primitive operations have a long-established presence in the literature, though this is often not emphasized and thus under-appreciated. We begin by showing that such layers strictly enrich the representable function classes of neural networks. We conjecture that multiplicative interactions offer a particularly powerful inductive bias when fusing multiple streams of information or when conditional computation is required. We therefore argue that they should be considered in many situations where multiple compute or information paths need to be combined, in place of the simple and oft-used concatenation operation. Finally, we back up our claims and demonstrate the potential of multiplicative interactions by applying them in large-scale complex RL and sequence modelling tasks, where their use allows us to deliver state-of-the-art results, and thereby provides new evidence in support of multiplicative interactions playing a more prominent role when designing new neural network architectures.""","""This paper provides a unifying perspective regarding a variety of popular DNN architectures in terms of the inclusion of multiplicative interaction layers. Such layers increase the representational power of conventional linear layers, which the paper argues can induce a useful inductive bias in practical scenarios such as when multiple streams of information are fused. Empirical support is provided to validate these claims and showcase the potential of multiplicative interactions in occupying broader practical roles. All reviewers agreed to accept this paper, although some concerns were raised in terms of novelty, clarity, and the relationship with state-of-the-art models. However, the author rebuttal and updated revision are adequate, and I believe that this paper should be accepted."""
839,"""Constrained Markov Decision Processes via Backward Value Functions""","['Reinforcement Learning', 'Constrained Markov Decision Processes', 'Deep Reinforcement Learning']","""Although Reinforcement Learning (RL) algorithms have found tremendous success in simulated domains, they often cannot directly be applied to physical systems, especially in cases where there are hard constraints to satisfy (e.g. on safety or resources). In standard RL, the agent is incentivized to explore any behavior as long as it maximizes rewards, but in the real world undesired behavior can damage either the system or the agent in a way that breaks the learning process itself. In this work, we model the problem of learning with constraints as a Constrained Markov Decision Process, and provide a new on-policy formulation for solving it. A key contribution of our approach is to translate cumulative cost constraints into state-based constraints. Through this, we define a safe policy improvement method which maximizes returns while ensuring that the constraints are satisfied at every step. We provide theoretical guarantees under which the agent converges while ensuring safety over the course of training. We also highlight computational advantages of this approach. The effectiveness of our approach is demonstrated on safe navigation tasks and in safety-constrained versions of MuJoCo environments, with deep neural networks.""","""The paper considers the setting of constrained MDPs and proposes using backward value functions to keep track of the constraints. All reviewers agreed that the idea of backward value functions is interesting, but there were a few technical concerns raised, and the reviewers remained unconvinced after the rebuttal. In particular, there were doubts whether the method actually makes sense for the considered problem (the backward VF averaging constraints over all trajectories, instead of only considering the current one), and a concern about insufficient baseline comparisons. I recommend rejection at this time, but encourage the authors to take the feedback into account, make the paper more crisp, and resubmit to a future venue."""
840,"""SELF: Learning to Filter Noisy Labels with Self-Ensembling""","['Ensemble Learning', 'Robust Learning', 'Noisy Labels', 'Labels Filtering']","""Deep neural networks (DNNs) have been shown to over-fit a dataset when being trained with noisy labels for a long enough time. To overcome this problem, we present a simple and effective method self-ensemble label filtering (SELF) to progressively filter out the wrong labels during training. Our method improves the task performance by gradually allowing supervision only from the potentially non-noisy (clean) labels and stops learning on the filtered noisy labels. For the filtering, we form running averages of predictions over the entire training dataset using the network output at different training epochs. We show that these ensemble estimates yield more accurate identification of inconsistent predictions throughout training than the single estimates of the network at the most recent training epoch. While filtered samples are removed entirely from the supervised training loss, we dynamically leverage them via semi-supervised learning in the unsupervised loss. We demonstrate the positive effect of such an approach on various image classification tasks under both symmetric and asymmetric label noise and at different noise ratios. It substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures.""","""The authors addressed the issues raised by the reviewers; I suggest to accept this paper."""
841,"""Learning Space Partitions for Nearest Neighbor Search""","['space partition', 'lsh', 'locality sensitive hashing', 'nearest neighbor search']","""Space partitions of $\mathbb{R}^d$ underlie a vast and important class of fast nearest neighbor search (NNS) algorithms. Inspired by recent theoretical work on NNS for general metric spaces (Andoni et al. 2018b,c), we develop a new framework for building space partitions, reducing the problem to balanced graph partitioning followed by supervised classification. We instantiate this general approach with the KaHIP graph partitioner (Sanders and Schulz 2013) and neural networks, respectively, to obtain a new partitioning procedure called Neural Locality-Sensitive Hashing (Neural LSH). On several standard benchmarks for NNS (Aumuller et al. 2017), our experiments show that the partitions obtained by Neural LSH consistently outperform partitions found by quantization-based and tree-based methods as well as classic, data-oblivious LSH.""","""This paper proposes a new framework for improved nearest neighbor search by learning a space partition of the data, allowing for better scalability in distributed settings and overall better performance over existing benchmarks. The two reviewers who were most confident were both positive about the contributions and the revisions. The one reviewer who recommended reject was concerned about the metric used and whether comparison with baselines was fair. In my opinion, the authors seem to have been very receptive to reviewer comments and answered these issues to my satisfaction. After author and reviewer engagement, both R1 and I are satisfied with the addition of the new baselines and think the authors have sufficiently addressed the major concerns. For the final version of the paper, I'd urge the authors to take seriously R4's comment regarding clarity and add algorithmic details as per their suggestion. """
842,"""Salient Explanation for Fine-grained Classification""","['Visual explanation', 'XAI', 'Convolutional Neural Network']","""Explaining the prediction of deep models has gained increasing attention to increase its applicability, even spreading it to life-affecting decisions. However, there has been no attempt to pinpoint only the most discriminative features contributing specifically to separating different classes in a fine-grained classification task. This paper introduces a novel notion of salient explanation and proposes a simple yet effective salient explanation method called Gaussian light and shadow (GLAS), which estimates the spatial impact of deep models by the feature perturbation inspired by light and shadow in nature. GLAS provides a useful coarse-to-fine control benefiting from the scalability of the Gaussian mask. We also devised the ability to identify multiple instances through recursive GLAS. We prove the effectiveness of GLAS for fine-grained classification using the fine-grained classification dataset. To show the general applicability, we also illustrate that GLAS has state-of-the-art performance at high speed (about 0.5 sec per 224 × 224 image) via the ImageNet Large Scale Visual Recognition Challenge. ""","""This paper is interested in finding salient areas in a deep learning image classification setting. The introduced method relies on masking images using Gaussian light and shadow (GLAS) and estimating its impact on the output. As noted by all reviewers, the paper is too weak for publication in its current form: - Novelty is very low. - Experimental section not convincing enough, in particular some metrics are missing. - The writing should be improved."""
843,"""GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation""","['Molecular graph generation', 'deep generative models', 'normalizing flows', 'autoregressive models']","""Molecular graph generation is a fundamental problem for drug discovery and has been attracting growing attention. The problem is challenging since it requires not only generating chemically valid molecular structures but also optimizing their chemical properties in the meantime. Inspired by the recent progress in deep generative models, in this paper we propose a flow-based autoregressive model for graph generation called GraphAF. GraphAF combines the advantages of both autoregressive and flow-based approaches and enjoys: (1) high model flexibility for data density estimation; (2) efficient parallel computation for training; (3) an iterative sampling process, which allows leveraging chemical domain knowledge for valency checking. Experimental results show that GraphAF is able to generate 68% chemically valid molecules even without chemical knowledge rules and 100% valid molecules with chemical rules. The training process of GraphAF is two times faster than the existing state-of-the-art approach GCPN. After fine-tuning the model for goal-directed property optimization with reinforcement learning, GraphAF achieves state-of-the-art performance on both chemical property optimization and constrained property optimization. ""","""All reviewers agreed that this paper is essentially a combination of existing ideas, making it a bit incremental, but is well-executed and a good contribution. Specifically, to quote R1: ""This paper proposes a generative model architecture for molecular graph generation based on autoregressive flows. The main contribution of this paper is to combine existing techniques (auto-regressive BFS-ordered generation of graphs, normalizing flows, dequantization by Gaussian noise, fine-tuning based on reinforcement learning for molecular property optimization, and validity constrained sampling). Most of these techniques are well-established either for data generation with normalizing flows or for molecular graph generation and the novelty lies in the combination of these building blocks into a framework. ... Overall, the paper is very well written, nicely structured and addresses an important problem. The framework in its entirety is novel, but the building blocks of the proposed framework are established in prior work and the idea of using normalizing flows for graph generation has been proposed in earlier work. Nonetheless, I find the paper relevant for an ICLR audience and the quality of execution and presentation of the paper is good."""""
844,"""Low Bias Gradient Estimates for Very Deep Boolean Stochastic Networks""",[],"""Stochastic neural networks with discrete random variables are an important class of models for their expressivity and interpretability. Since direct differentiation and backpropagation is not possible, Monte Carlo gradient estimation techniques have been widely employed for training such models. Efficient stochastic gradient estimators, such as Straight-Through and Gumbel-Softmax, work well for shallow models with one or two stochastic layers. Their performance, however, suffers with increasing model complexity. In this work we focus on stochastic networks with multiple layers of Boolean latent variables. To analyze such networks, we employ the framework of harmonic analysis for Boolean functions. We use it to derive an analytic formulation for the source of bias in the biased Straight-Through estimator. Based on the analysis we propose FouST, a simple gradient estimation algorithm that relies on three simple bias reduction steps. Extensive experiments show that FouST performs favorably compared to state-of-the-art biased estimators, while being much faster than unbiased ones. To the best of our knowledge FouST is the first gradient estimator to train very deep stochastic neural networks, with up to 80 deterministic and 11 stochastic layers. ""","""Straight-Through is a popular, yet not theoretically well-understood, biased gradient estimator for Bernoulli random variables. The low variance of this estimator makes it a highly useful tool for training large-scale models with binary latents. However, the bias of this estimator may cause divergence in training, which is a significant practical issue. The paper develops a Fourier analysis of the Straight-Through estimator and provides an expression for the bias of the estimator in terms of the Fourier coefficients of the considered function. The paper in its current form is not good enough for publication, and the reviewers believe that the paper contains significant mistakes when deriving the estimator. Furthermore, the Fourier analysis seems unnecessary. """
845,"""Autoencoder-based Initialization for Recurrent Neural Networks with a Linear Memory""","['recurrent neural networks', 'autoencoders', 'orthogonal RNNs']","""Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix. This class of models is particularly effective to solve tasks that require the memorization of long sequences. We propose an alternative solution based on explicit memorization using linear autoencoders for sequences. We show how a recently proposed recurrent architecture, the Linear Memory Network, composed of a nonlinear feedforward layer and a separate linear recurrence, can be used to solve hard memorization tasks. We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution. The initialization schema can be easily adapted to any recurrent architecture. We argue that this approach is superior to a random orthogonal initialization due to the autoencoder, which allows the memorization of long sequences even before training. The empirical analysis show that our approach achieves competitive results against alternative orthogonal models, and the LSTM, on sequential MNIST, permuted MNIST and TIMIT.""","""The paper explores an initialization scheme for the recently introduced linear memory network (LMN) (Bacciu et al., 2019) that is better than random initialization and the approach is tested on various MNIST and TIMIT data sets with positive results. Reviewer 3 raised concerns about the breadth of experiments and novelty. Reviewer 2 recognized that the model performs well on its MNIST baselines and had concerns about applicability to larger settings. Reviewer 1 acknowledges a very well written paper, but again raises concerns about the thoroughness of the experiments. The authors responded to all three reviewers, responding that the tasks were chosen to match existing work and that the approach is complementary to LSTMs to solve different tasks. Overall the reviewers did not re-adjust their ratings. There remains questions on scalability and generality, which makes the paper not yet ready for acceptance. We hope that the reviews support the authors further research."""
846,"""Why do These Match? Explaining the Behavior of Image Similarity Models""","['explainable artificial intelligence', 'image similarity', 'artificial intelligence for fashion']","""Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce an explanation approach for image similarity models, where a model's output is a score measuring the similarity of two inputs rather than a classification. In this task, an explanation depends on both of the input images, so standard methods do not apply. We propose an explanation method that pairs a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition. Our approach's ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2.""","""This submission proposes an explainability method for deep visual representation models that have been trained to compute image similarity. Strengths: -The paper tackles an important and overlooked problem. -The proposed approach is novel and interesting. Weaknesses: -The evaluation is not convincing. In particular (i) the evaluation is performed only on ground-truth pairs, rather than on ground-truth pairs and predicted pairs; (ii) the user study doesnt disambiguate whether users find the SANE explanations better than the saliency map explanations or whether users tend to find text more understandable in general than heat maps. The user study should have compared their predicted attributes to the attribute prediction baseline; (iii) the explanation of Figure 4 is not convincing: the attribute is not only being removed. A new attribute is also being inserted (i.e. a new color). Therefore its not clear whether the similarity score should have increased or decreased; (iv) the proposed metric in section 4.2 is flawed: It matters whether similarity increases or decreases with insertion or deletion. The proposed metric doesnt reflect that. -Some key details, such as how the attribute insertion process was performed, havent been explained. The reviewer ratings were borderline after discussion, with some important concerns still not having been addressed after the author feedback period. Given the remaining shortcomings, AC recommends rejection."""
847,"""CAN ALTQ LEARN FASTER: EXPERIMENTS AND THEORY""","['Reinforcement Learning', 'Q-Learning', 'Adam', 'Restart', 'Convergence Analysis']","""Differently from the popular Deep Q-Network (DQN) learning, Alternating Q-learning (AltQ) does not fully fit a target Q-function at each iteration, and is generally known to be unstable and inefficient. Limited applications of AltQ mostly rely on substantially altering the algorithm architecture in order to improve its performance. Although Adam appears to be a natural solution, its performance in AltQ has rarely been studied before. In this paper, we first provide a solid exploration on how well AltQ performs with Adam. We then take a further step to improve the implementation by adopting the technique of parameter restart. More specifically, the proposed algorithms are tested on a batch of Atari 2600 games and exhibit superior performance than the DQN learning method. The convergence rate of the slightly modified version of the proposed algorithms is characterized under the linear function approximation. To the best of our knowledge, this is the first theoretical study on the Adam-type algorithms in Q-learning. ""","""The reviewers attempted to provide a fair assessment of this work, albeit with varying qualifications. Nevertheless, the depth and significance of the technical contribution was unanimously questioned, and the experimental evaluation was not considered to be convincing by any of the assessors. The criticisms are sufficient to ask the authors to further strengthen this work before it can be considered for a top conference."""
848,"""Verification of Generative-Model-Based Visual Transformations""","['robustness certification', 'formal verification', 'robustness analysis', 'latent space interpolations']","""Generative networks are promising models for specifying visual transformations. Unfortunately, certification of generative models is challenging as one needs to capture sufficient non-convexity so to produce precise bounds on the output. Existing verification methods either fail to scale to generative networks or do not capture enough non-convexity. In this work, we present a new verifier, called ApproxLine, that can certify non-trivial properties of generative networks. ApproxLine performs both deterministic and probabilistic abstract interpretation and captures infinite sets of outputs of generative networks. We show that ApproxLine can verify interesting interpolations in the network's latent space.""","""The goal of verification of properties of generative models is very interesting and the contributions of this work seem to make some progress in this context. However, the current state of the paper (particularly, its presentation) makes it difficult to recommend its acceptance."""
849,"""InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization""","['graph-level representation learning', 'mutual information maximization']","""This paper studies learning the representations of whole graphs in both unsupervised and semi-supervised scenarios. Graph-level representations are critical in a variety of real-world applications such as predicting the properties of molecules and community analysis in social networks. Traditional graph kernel based methods are simple, yet effective for obtaining fixed-length representations for graphs but they suffer from poor generalization due to hand-crafted designs. There are also some recent methods based on language models (e.g. graph2vec) but they tend to only consider certain substructures (e.g. subtrees) as graph representatives. Inspired by recent progress of unsupervised representation learning, in this paper we proposed a novel method called InfoGraph for learning graph-level representations. We maximize the mutual information between the graph-level representation and the representations of substructures of different scales (e.g., nodes, edges, triangles). By doing so, the graph-level representations encode aspects of the data that are shared across different scales of substructures. Furthermore, we further propose InfoGraph*, an extension of InfoGraph for semisupervised scenarios. InfoGraph* maximizes the mutual information between unsupervised graph representations learned by InfoGraph and the representations learned by existing supervised methods. As a result, the supervised encoder learns from unlabeled data while preserving the latent semantic space favored by the current supervised task. Experimental results on the tasks of graph classification and molecular property prediction show that InfoGraph is superior to state-of-the-art baselines and InfoGraph* can achieve performance competitive with state-of-the-art semi-supervised models.""","""This paper proposes a graph embedding method for the whole graph under both unsupervised and semi-supervised setting. It can extract a fixed length graph-level representation with good generalization capability. All reviewers provided unanimous rating of weak accept. The reviewers praise the paper is well written and is value to different fields dealing with graph learning. There are some discussions on the novelty of the approach, which was better clarified after the response from the authors. Overall this paper presents a new effort in the active topic of graph representation learning with potential large impact to multiple fields. Therefore, the ACs recommend it to be an oral paper."""
850,"""Learning to Control PDEs with Differentiable Physics""","['Differentiable physics', 'Optimal control', 'Deep learning']","""Predicting outcomes and planning interactions with the physical world are long-standing goals for machine learning. A variety of such tasks involves continuous physical systems, which can be described by partial differential equations (PDEs) with many degrees of freedom. Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters. We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames. We propose to split the problem into two distinct tasks: planning and control. To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters. Both stages are trained end-to-end using a differentiable PDE solver. We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations.""","""The paper proposes a method to control dynamical systems described by a partial differential equations (PDE). The method uses a hierarchical predictor-corrector scheme that divides the problem into smaller and simpler temporal subproblems. They illustrate the performance of their method on 1D Burgers PDE and 2D incompressible flow. The reviewers are all positive about this paper and find it well-written and potentially impactful. Hence, I recommend acceptance of this paper."""
851,"""Faster Neural Network Training with Data Echoing""","['systems', 'faster training', 'large scale']","""In the twilight of Moore's law, GPUs and other specialized hardware accelerators have dramatically sped up neural network training. However, earlier stages of the training pipeline, such as disk I/O and data preprocessing, do not run on accelerators. As accelerators continue to improve, these earlier stages will increasingly become the bottleneck. In this paper, we introduce data echoing, which reduces the total computation used by earlier pipeline stages and speeds up training whenever computation upstream from accelerators dominates the training time. Data echoing reuses (or echoes) intermediate outputs from earlier pipeline stages in order to reclaim idle capacity. We investigate the behavior of different data echoing algorithms on various workloads, for various amounts of echoing, and for various batch sizes. We find that in all settings, at least one data echoing algorithm can match the baseline's predictive performance using less upstream computation. We measured a factor of 3.25 decrease in wall-clock time for ResNet-50 on ImageNet when reading training data over a network.""","""This paper presents a simple trick of taking multiple SGD steps on the same data to improve distributed processing of data and reclaim idle capacity. The underlying ideas seems interesting enough, but the reviewers had several concerns. 1. The method is a simple trick (R2). I don't think this is a good reason to reject the paper, as R3 also noted, so I think this is fine. 2. There are not clear application cases (R3). The authors have given a reasonable response to this, in indicating that this method is likely more useful for prototyping than for well-developed applications. This makes sense to me, but both R3 and I felt that this was insufficiently discussed in the paper, despite seeming quite important to arguing the main point. 3. The results look magical, or too good to be true without additional analysis (R1 and R3). This concerns me the most, and I'm not sure that this point has been addressed by the rebuttal. In addition, it seems that extensive hyperparameter tuning has been performed, which also somewhat goes against the idea that ""this is good for prototyping"". If it's good for prototyping, then ideally it should be a method where hyperparameter tuning is not very necessary. 4. The connections with theoretical understanding of SGD are not well elucidated (R1). I also agree this is a problem, but perhaps not a fatal one -- very often simple heuristics prove effective, and then are analyzed later in follow-up papers. Honestly, this paper is somewhat borderline, but given the large number of good papers that have been submitted to ICLR this year, I'm recommending that this not be accepted at this time, but certainly hope that the authors continue to improve the paper towards a final publication at a different venue. """
852,"""Quantum Semi-Supervised Kernel Learning""","['quantum machine learning', 'semi-supervised learning', 'support vector machines']","""Quantum machine learning methods have the potential to facilitate learning using extremely large datasets. While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors that to obtain the corresponding labels. One of the approaches for addressing this issue is to use semi-supervised learning, which leverages not only the labeled samples, but also unlabeled feature vectors. Here, we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines. The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss, while maintaining the same quantum speedup as the Quantum LS-SVM.""","""Three reviewers have assessed this paper and they have scored it 6/6/6 after rebuttal with one reviewer hesitating about the appropriateness of this submission to ML venues. The reviewers have raised a number of criticisms such as an incremental nature of the paper (HHL and LMR algorithms) and the main contributions lying more within the field of quantum computing than ML. The paper was discussed with reviewers, buddy AC and chairs. On balance, it was concluded that this paper is minimally below the acceptance threshold. We encourage authors to consider all criticism, improve the paper and resubmit to another venue as there is some merit to the proposed idea. """
853,"""A Non-asymptotic comparison of SVRG and SGD: tradeoffs between compute and speed""","['variance reduction', 'non-asymptotic analysis', 'trade-off', 'computational cost', 'convergence speed']","""Stochastic gradient descent (SGD), which trades off noisy gradient updates for computational efficiency, is the de-facto optimization algorithm to solve large-scale machine learning problems. SGD can make rapid learning progress by performing updates using subsampled training data, but the noisy updates also lead to slow asymptotic convergence. Several variance reduction algorithms, such as SVRG, introduce control variates to obtain a lower variance gradient estimate and faster convergence. Despite their appealing asymptotic guarantees, SVRG-like algorithms have not been widely adopted in deep learning. The traditional asymptotic analysis in stochastic optimization provides limited insight into training deep learning models under a fixed number of epochs. In this paper, we present a non-asymptotic analysis of SVRG under a noisy least squares regression problem. Our primary focus is to compare the exact loss of SVRG to that of SGD at each iteration t. We show that the learning dynamics of our regression model closely matches with that of neural networks on MNIST and CIFAR-10 for both the underparameterized and the overparameterized models. Our analysis and experimental results suggest there is a trade-off between the computational cost and the convergence speed in underparametrized neural networks. SVRG outperforms SGD after a few epochs in this regime. However, SGD is shown to always outperform SVRG in the overparameterized regime.""","""Two reviewers as well as the AC are confused by the paperperhaps because the readability of it should be improved? It is clear that the page limitation of conferences are problematic, with 7 pages of appendix (not part of the review) the authors may consider another venue to publish. In its current form, the usefulness for the ICLR community seems limited."""
854,"""Meta-Learning Initializations for Image Segmentation""","['meta-learning', 'image segmentation']","""While meta-learning approaches that utilize neural network representations have made progress in few-shot image classification, reinforcement learning, and, more recently, image semantic segmentation, the training algorithms and model architectures have become increasingly specialized to the few-shot domain. A natural question that arises is how to develop learning systems that scale from few-shot to many-shot settings while yielding human level performance in both. One scalable potential approach that does not require ensembling many models nor the computational costs of relation networks, is to meta-learn an initialization. In this work, we study first-order meta-learning of initializations for deep neural networks that must produce dense, structured predictions given an arbitrary amount of train- ing data for a new task. Our primary contributions include (1), an extension and experimental analysis of first-order model agnostic meta-learning algorithms (including FOMAML and Reptile) to image segmentation, (2) a formalization of the generalization error of episodic meta-learning algorithms, which we leverage to decrease error on unseen tasks, (3) a novel neural network architecture built for parameter efficiency which we call EfficientLab, and (4) an empirical study of how meta-learned initializations compare to ImageNet initializations as the training set size increases. We show that meta-learned initializations for image segmentation smoothly transition from canonical few-shot learning problems to larger datasets, outperforming random and ImageNet-trained initializations. Finally, we show both theoretically and empirically that a key limitation of MAML-type algorithms is that when adapting to new tasks, a single update procedure is used that is not conditioned on the data. We find that our network, with an empirically estimated optimal update procedure yields state of the art results on the FSS-1000 dataset, while only requiring one forward pass through a single model at evaluation time.""","""The reviewers reached a consensus that the paper was not ready to be accepted in its current form. The main concerns were in regard to clarity, relatively limited novelty, and a relatively unsatisfying experimental evaluation. Although some of the clarity concerns were addressed during the response period, the other issues still remained, and the reviewers generally agreed that the paper should be rejected."""
855,"""Do recent advancements in model-based deep reinforcement learning really improve data efficiency?""","['deep learning', 'reinforcement learning', 'data efficiency', 'DQN', 'Rainbow', 'SimPLe']","""Reinforcement learning (RL) has seen great advancements in the past few years. Nevertheless, the consensus among the RL community is that currently used model-free methods, despite all their benefits, suffer from extreme data inefficiency. To circumvent this problem, novel model-based approaches were introduced that often claim to be much more efficient than their model-free counterparts. In this paper, however, we demonstrate that the state-of-the-art model-free Rainbow DQN algorithm can be trained using a much smaller number of samples than it is commonly reported. By simply allowing the algorithm to execute network updates more frequently we manage to reach similar or better results than existing model-based techniques, at a fraction of complexity and computational costs. Furthermore, based on the outcomes of the study, we argue that the agent similar to the modified Rainbow DQN that is presented in this paper should be used as a baseline for any future work aimed at improving sample efficiency of deep reinforcement learning.""","""The paper makes broad claims, but the depth of the experiments is very limited to a narrow combination of algorithms."""
856,"""PCMC-Net: Feature-based Pairwise Choice Markov Chains""","['choice modeling', 'pairwise choice Markov chains', 'deep learning', 'amortized inference', 'automatic differentiation', 'airline itinerary choice modeling']","""Pairwise Choice Markov Chains (PCMC) have been recently introduced to overcome limitations of choice models based on traditional axioms unable to express empirical observations from modern behavior economics like context effects occurring when a choice between two options is altered by adding a third alternative. The inference approach that estimates the transition rates between each possible pair of alternatives via maximum likelihood suffers when the examples of each alternative are scarce and is inappropriate when new alternatives can be observed at test time. In this work, we propose an amortized inference approach for PCMC by embedding its definition into a neural network that represents transition rates as a function of the alternatives' and individual's features. We apply our construction to the complex case of airline itinerary booking where singletons are common (due to varying prices and individual-specific itineraries), and context effects and behaviors strongly dependent on market segments are observed. Experiments show our network significantly outperforming, in terms of prediction accuracy and logarithmic loss, feature engineered standard and latent class Multinomial Logit models as well as recent machine learning approaches.""","""This submission proposes to use neural networks in combination with pairwise choice markov chain models for choice modelling. The deep network is used to parametrize the PCMC and in so doing improve generalization and inference. Strengths: The formulation and theoretical justifications are convincing. The improvements are non-trivial and the approach is novel. Weaknesses: The text was not always easy to follow. The experimental validation is too limited initially. This was addressed during the discussion by adding an additional experiment. All reviewers recommend acceptance. """
857,"""Kaleidoscope: An Efficient, Learnable Representation For All Structured Linear Maps""","['structured matrices', 'efficient ML', 'algorithms', 'butterfly matrices', 'arithmetic circuits']","""Modern neural network architectures use structured linear transformations, such as low-rank matrices, sparse matrices, permutations, and the Fourier transform, to improve inference speed and reduce memory usage compared to general linear maps. However, choosing which of the myriad structured transformations to use (and its associated parameterization) is a laborious task that requires trading off speed, space, and accuracy. We consider a different approach: we introduce a family of matrices called kaleidoscope matrices (K-matrices) that provably capture any structured matrix with near-optimal space (parameter) and time (arithmetic operation) complexity. We empirically validate that K-matrices can be automatically learned within end-to-end pipelines to replace hand-crafted procedures, in order to improve model quality. For example, replacing channel shuffles in ShuffleNet improves classification accuracy on ImageNet by up to 5%. K-matrices can also simplify hand-engineered pipelines---we replace filter bank feature computation in speech data preprocessing with a learnable kaleidoscope layer, resulting in only 0.4% loss in accuracy on the TIMIT speech recognition task. In addition, K-matrices can capture latent structure in models: for a challenging permuted image classification task, adding a K-matrix to a standard convolutional architecture can enable learning the latent permutation and improve accuracy by over 8 points. We provide a practically efficient implementation of our approach, and use K-matrices in a Transformer network to attain 36% faster end-to-end inference speed on a language translation task.""","""The paper generalizes several existing results for structured linear transformations in the form of K-matrices. This is an excellent paper and all reviewers confirmed that."""
858,"""PopSGD: Decentralized Stochastic Gradient Descent in the Population Model""","['Distributed machine learning', 'distributed optimization', 'decentralized parallel SGD', 'population protocols']","""The population model is a standard way to represent large-scale decentralized distributed systems, in which agents with limited computational power interact in randomly chosen pairs, in order to collectively solve global computational tasks. In contrast with synchronous gossip models, nodes are anonymous, lack a common notion of time, and have no control over their scheduling. In this paper, we examine whether large-scale distributed optimization can be performed in this extremely restrictive setting. We introduce and analyze a natural decentralized variant of stochastic gradient descent (SGD), called PopSGD, in which every node maintains a local parameter, and is able to compute stochastic gradients with respect to this parameter. Every pair-wise node interaction performs a stochastic gradient step at each agent, followed by averaging of the two models. We prove that, under standard assumptions, SGD can converge even in this extremely loose, decentralized setting, for both convex and non-convex objectives. Moreover, surprisingly, in the former case, the algorithm can achieve linear speedup in the number of nodes n. Our analysis leverages a new technical connection between decentralized SGD and randomized load balancing, which enables us to tightly bound the concentration of node parameters. We validate our analysis through experiments, showing that PopSGD can achieve convergence and speedup for large-scale distributed learning tasks in a supercomputing environment.""","""This manuscript studies scaling distributed stochastic gradient descent to a large number of nodes. Specifically, it proposes to use algorithms based on population analysis (relevant for large numbers of distributed nodes) to implement distributed training of deep neural networks. In reviews and discussions, the reviewers and AC note missing or inadequate comparisons to previous work on asynchronous SGD, and possible lack of novelty compared to previous work. The reviewers also mentioned the incomplete empirical comparison to closely related work. On the writing, reviewers mentioned that the conciseness of the manuscript could be improved. """
859,"""Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization""","['generalization', 'deep learning']","""An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting.""","""The paper proposes an intuitive causal explanation for the generalization properties of GD methods. The reviewers appreciated the insights, with one reviewer claiming that there was significant overlap with existing work. I ultimately decided to accept this paper as I believe intuitive explanations are critical to the propagation of ideas. That being said, there is a tendency in this community to erase past, especially theoretical, work, for that very reason that theoretical work is less popular. Hence, I want to make it clear that the acceptance of this paper is based on the premise that the authors will incorporate all of reviewer 3's comments and give enough credit to all relevant work (namely, all the papers cited by the reviewer) with a proper discussion on the link between these."""
860,"""AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures""","['video representation learning', 'video understanding', 'activity recognition', 'neural architecture search']","""Learning to represent videos is a very challenging task both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to include the time dimension, using modules such as 3D convolutions, or by using two-stream design to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream convolutional blocks connected to each other, and propose the approach of automatically finding neural architectures with better connectivity and spatio-temporal interactions for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. Architectures combining representations that abstract different input types (i.e., RGB and optical flow) at multiple temporal resolutions are searched for, allowing different types or sources of information to interact with each other. Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a great margin. We obtain 58.6% mAP on Charades and 34.27% accuracy on Moments-in-Time.""","""The submission applies architecture search to find effective architectures for video classification. The work is not terribly innovative, but the results are good. All reviewers recommend accepting the paper."""
861,"""The Frechet Distance of training and test distribution predicts the generalization gap""","['Generalization', 'Transfer learning', 'Frechet distance', 'Optimal transport', 'Domain adaptation', 'Distribution shift', 'Invariance']","""Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets. However, when training and test distribution differ, this distribution shift can have a significant effect. With a novel perspective on function transfer learning, we are able to lower bound the change of performance when transferring from training to test set with the Wasserstein distance between the embedded training and test set distribution. We find that there is a trade-off affecting performance between how invariant a function is to changes in training and test distribution and how large this shift in distribution is. Empirically across several data domains, we substantiate this viewpoint by showing that test performance correlates strongly with the distance in data distributions between training and test set. Complementary to the popular belief that more data is always better, our results highlight the utility of also choosing a training data distribution that is close to the test data distribution when the learned function is not invariant to such changes.""","""The authors discuss how to predict generalization gaps. Reviews are mixed, putting the submission in the lower half of this year's submissions. I also would have liked to see a comparison with other divergence metrics, for example, L1, MMD, H-distance, discrepancy distance, and learned representations (e.g., BERT, Laser, etc., for language). Without this, the empirical evaluation of FD is a bit weak. Also, the obvious next step would be trying to minimize FD in the context of domain adaptation, and the question is if this shouldn't already be part of your paper? Suggestions: The Amazon reviews are time-stamped, enabling you to run experiments with drift over time. See [0] for an example. [0] pseudo-url"""
862,"""Siamese Attention Networks""",[],"""Attention operators have been widely applied on data of various orders and dimensions such as texts, images, and videos. One challenge of applying attention operators is the excessive usage of computational resources. This is due to the usage of dot product and softmax operator when computing similarity scores. In this work, we propose the Siamese similarity function that uses a feed-forward network to compute similarity scores. This results in the Siamese attention operator (SAO). In particular, SAO leads to a dramatic reduction in the requirement of computational resources. Experimental results show that our SAO can save 94% memory usage and speed up the computation by a factor of 58 compared to the regular attention operator. The computational advantage of SAO is even larger on higher-order and higher-dimensional data. Results on image classification and restoration tasks demonstrate that networks with SAOs are as effective as models with regular attention operator, while significantly outperform those without attention operators.""","""The submission presents a Siamese attention operator that lowers the computational costs of attention operators for applications such as image recognition. The reviews are split. R1 posted significant concerns with the content of the submission. The concerns remain after the authors' responses and revision. One of the concerns is the apparent dual submission with ""Kronecker Attention Networks"". The AC agrees with these concerns and recommends rejecting the submission."""
863,"""ASYNCHRONOUS MULTI-AGENT GENERATIVE ADVERSARIAL IMITATION LEARNING""","['Multi-agent', 'Imitation Learning', 'Inverse Reinforcement Learning']","""Imitation learning aims to inversely learn a policy from expert demonstrations, which has been extensively studied in the literature for both single-agent setting with Markov decision process (MDP) model, and multi-agent setting with Markov game (MG) model. However, existing approaches for general multi-agent Markov games are not applicable to multi-agent extensive Markov games, where agents make asynchronous decisions following a certain order, rather than simultaneous decisions. We propose a novel framework for asynchronous multi-agent generative adversarial imitation learning (AMAGAIL) under general extensive Markov game settings, and the learned expert policies are proven to guarantee subgame perfect equilibrium (SPE), a more general and stronger equilibrium than Nash equilibrium (NE). The experiment results demonstrate that compared to state-of-the-art baselines, our AMAGAIL model can better infer the policy of each expert agent using their demonstration data collected from asynchronous decision-making scenarios (i.e., extensive Markov games).""","""This paper extends multi-agent imitation learning to extensive-form games. There is a long discussion between reviewer #3 and the authors on the difference between Markov Games (MGs) and Extensive-Form Games (EFGs). The core of the discussion is on whether methods developed under the MG formalism (where agents take actions simultaneously) naturally can be applied to the EFG problem setting (where agents can take actions asynchronously). Despite the long discussion, the authors and reviewer did not come to an agreement on this point. Given that it is a crucial point for determining the significance of the contribution, my decision is to decline the paper. I suggest that the authors add a detailed discussion on why MG methods cannot be applied to EFGs in the way suggested by reviewer #3 in the next version of this work and then resubmit."""
864,"""ROS-HPL: Robotic Object Search with Hierarchical Policy Learning and Intrinsic-Extrinsic Modeling""","['Robotic Object Search', 'Hierarchical Reinforcement Learning']","""Despite significant progress in Robotic Object Search (ROS) over the recent years with deep reinforcement learning based approaches, the sparsity issue in reward setting as well as the lack of interpretability of the previous ROS approaches leave much to be desired. We present a novel policy learning approach for ROS, based on a hierarchical and interpretable modeling with intrinsic/extrinsic reward setting, to tackle these two challenges. More specifically, we train the low-level policy by deliberating between an action that achieves an immediate sub-goal and the one that is better suited for achieving the final goal. We also introduce a new evaluation metric, namely the extrinsic reward, as a harmonic measure of the object search success rate and the average steps taken. Experiments conducted with multiple settings on the House3D environment validate and show that the intelligent agent, trained with our model, can achieve a better object search performance (higher success rate with lower average steps, measured by SPL: Success weighted by inverse Path Length). In addition, we conduct studies w.r.t. the parameter that controls the weighted overall reward from intrinsic and extrinsic components. The results suggest it is critical to devise a proper trade-off strategy to perform the object search well.""","""This paper introduces a two-level hierarchical reinforcement learning approach, applied to the problem of a robot searching for an object specified by an image. The system incorporates a human-specified subgoal space, and learns low-level policies that balance the intrinsic and extrinsic rewards. The method is tested in simulations against several baselines. The reviewer discussion highlighted strengths and weaknesses of the paper. One strength is the extensive comparisons with alternative approaches on this task. The main weakness is the paper did not adequately distinguish between which aspects of the system were generic to HRL and which aspects are particular to robot object search. The paper was not general enough to be understood as a generic HRL method. It was also ignoring much relevant background knowledge (robot mapping and navigation) if the paper is intended to be primarily about robot object search. The paper did not convince the reviewers that the proposed method was desirable for either hierarchical reinforcement learning or for robot object search. This paper is not ready for publication as the contribution was not sufficiently clear to the readers. """
865,"""Weight-space symmetry in neural network loss landscapes revisited""","['Weight-space symmetry', 'neural network landscapes']","""Neural network training depends on the structure of the underlying loss landscape, i.e. local minima, saddle points, flat plateaus, and loss barriers. In relation to the structure of the landscape, we study the permutation symmetry of neurons in each layer of a deep neural network, which gives rise not only to multiple equivalent global minima of the loss function but also to critical points in between partner minima. In a network of pseudo-formula hidden layers with pseudo-formula neurons in layers = 1, \ldots, d we construct continuous paths between equivalent global minima that lead through a `permutation point' where the input and output weight vectors of two neurons in the same hidden layer pseudo-formula collide and interchange. We show that such permutation points are critical points which lie inside high-dimensional subspaces of equal loss, contributing to the global flatness of the landscape. We also find that a permutation point for the exchange of neurons pseudo-formula and pseudo-formula transits into a flat high-dimensional plateau that enables all pseudo-formula permutations of neurons in a given layer pseudo-formula at the same loss value. Moreover, we introduce higher-order permutation points by exploiting the hierarchical structure in the loss landscapes of neural networks, and find that the number of pseudo-formula -th order permutation points is much larger than the (already huge) number of equivalent global minima -- at least by a polynomial factor of order pseudo-formula . In two tasks, we demonstrate numerically with our path finding method that continuous paths between partner minima exist: first, in a toy network with a single hidden layer on a function approximation task and, second, in a multilayer network on the MNIST task. Our geometric approach yields a lower bound on the number of critical points generated by weight-space symmetries and provides a simple intuitive link between previous theoretical results and numerical observations.""","""After communicating with each reviewer about the rebuttal, there seems to be a consensus that the paper contains a number of interesting ideas, but the motivation for the paper and the relationship to the literature needs to be expanded. The reviewers have not changed their scores, and so there is not currently enough support to accept this paper."""
866,"""Distribution Matching Prototypical Network for Unsupervised Domain Adaptation""","['Deep Learning', 'Unsupervised Domain Adaptation', 'Distribution Modeling']","""State-of-the-art Unsupervised Domain Adaptation (UDA) methods learn transferable features by minimizing the feature distribution discrepancy between the source and target domains. Different from these methods which do not model the feature distributions explicitly, in this paper, we explore explicit feature distribution modeling for UDA. In particular, we propose Distribution Matching Prototypical Network (DMPN) to model the deep features from each domain as Gaussian mixture distributions. With explicit feature distribution modeling, we can easily measure the discrepancy between the two domains. In DMPN, we propose two new domain discrepancy losses with probabilistic interpretations. The first one minimizes the distances between the corresponding Gaussian component means of the source and target data. The second one minimizes the pseudo negative log likelihood of generating the target features from source feature distribution. To learn both discriminative and domain invariant features, DMPN is trained by minimizing the classification loss on the labeled source data and the domain discrepancy losses together. Extensive experiments are conducted over two UDA tasks. Our approach yields a large margin in the Digits Image transfer task over state-of-the-art approaches. More remarkably, DMPN obtains a mean accuracy of 81.4% on VisDA 2017 dataset. The hyper-parameter sensitivity analysis shows that our approach is robust w.r.t hyper-parameter changes.""","""This paper addresses the problem of unsupervised domain adaptation and proposes explicit modeling of the source and target feature distributions to aid in cross-domain alignment. The reviewers all recommended rejection of this work. Though they all understood the papers position of explicit feature distribution modeling, there was a lack of understanding as to why this explicit modeling should be superior to the common implicit modeling done in related literature. As some reviewers raised concern that the empirical performance of the proposed approach was marginally better than competing methods, this experimental evidence alone was not sufficient justification of the explicit modeling. There was also a secondary concern about whether the two proposed loss functions were simultaneously necessary. Overall, after reading the reviewers and authors comments, the AC recommends this paper not be accepted. """
867,"""Beyond GANs: Transforming without a Target Distribution""","['GAN', 'domain transfer', 'computational biology', 'latent space manipulations']","""While generative neural networks can learn to transform a specific input dataset into a specific target dataset, they require having just such a paired set of input/output datasets. For instance, to fool the discriminator, a generative adversarial network (GAN) exclusively trained to transform images of black-haired *men* to blond-haired *men* would need to change gender-related characteristics as well as hair color when given images of black-haired *women* as input. This is problematic, as often it is possible to obtain *a* pair of (source, target) distributions but then have a second source distribution where the target distribution is unknown. The computational challenge is that generative models are good at generation within the manifold of the data that they are trained on. However, generating new samples outside of the manifold or extrapolating ""out-of-sample"" is a much harder problem that has been less well studied. To address this, we introduce a technique called *neuron editing* that learns how neurons encode an edit for a particular transformation in a latent space. We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons. By performing the transformation in a latent trained space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neuron's activations. Our technique is general and works on a wide variety of data domains and applications. We first demonstrate it on image transformations and then move to our two main biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.""","""This paper presents a new generative modeling approach to transform between data domains via a neuron editing technique. The authors address the scenario of source to target domain translation that can be applied to a new source domain. While the reviewers acknowledged that the idea of neuron editing is interesting, they have raised several concerns that were viewed by AC as critical issues: (1) given the progress that have been made in the field, an empirical comparison with SOTA GANs models is required to assess the benefits/competitiveness of the proposed approach -- see R1s comments, also [StarGAN by Choi et al, CVPR 2018], (2) the literature review is incomplete and requires a major revision -- see R1s and R3s suggestions, also [CYCADA by Hoffman et al, ICML 2018], (3) presentation clarity -- see R1s and R2s comments. AC suggests, in its current state the manuscript is not ready for a publication. We hope the detailed reviews are useful for improving and revising the paper. """
868,"""Detecting Noisy Training Data with Loss Curves""","['Deep learning', 'noisy data', 'robust training']","""This paper introduces a new method to discover mislabeled training samples and to mitigate their impact on the training process of deep networks. At the heart of our algorithm lies the Area Under the Loss (AUL) statistic, which can be easily computed for each sample in the training set. We show that the AUL can use training dynamics to differentiate between (clean) samples that benefit from generalization and (mislabeled) samples that need to be memorized. We demonstrate that the estimated AUL score conditioned on clean vs. noisy is approximately Gaussian distributed and can be well estimated with a simple Gaussian Mixture Model (GMM). The resulting GMM provides us with mixing coefficients that reveal the percentage of mislabeled samples in a data set as well as probability estimates that each individual training sample is mislabeled. We show that these probability estimates can be used to down-weight suspicious training samples and successfully alleviate the damaging impact of label noise. We demonstrate on the CIFAR10/100 datasets that our proposed approach is significantly more accurate and consistent across model architectures than all prior work.""","""The paper proposes a new, stable metric, called Area Under Loss curve (AUL) to recognize mislabeled samples in a dataset due to the different behavior of their loss function over time. The paper build on earlier observations (e.g. by Shen & Sanghavi) to propose this new metric as a concrete solution to the mislabeling problem. Although the reviewers remarked that this is an interesting approach for a relevant problems, they expressed several concerns regarding this paper. Two of them are whether the hardness of a sample would also result in high AUL scores, and another whether the results hold up under realistic mislabelings, rather than artificial label swapping / replacing. The authors did anecdotally suggest that neither of these effects has a major impact on the results. Still, I think a precise analysis of these effects would be critically important to have in the paper. Especially since there might be a complex interaction between the 'hardness' of samples and mislabelings (an MNIST 1 that looks like a 7 might be sooner mislabeled than a 1 that doesn't look like a 7). The authors show some examples of 'real' mislabeled sentences recognized by the model but it is still unclear whether downweighting these helped final test set performance in this case. Because of these issues, I cannot recommend acceptance of the paper in its current state. However, based on the identified relevance of the problem tackled and the identified potential for significant impact I do think this could be a great paper in a next iteration. """
869,"""BETANAS: Balanced Training and selective drop for Neural Architecture Search""","['neural architecture search', 'weight sharing', 'auto machine learning', 'deep learning', 'CNN']","""Automatic neural architecture search techniques are becoming increasingly important in machine learning area recently. Especially, weight sharing methods have shown remarkable potentials on searching good network architectures with few computational resources. However, existing weight sharing methods mainly suffer limitations on searching strategies: these methods either uniformly train all network paths to convergence which introduces conflicts between branches and wastes a large amount of computation on unpromising candidates, or selectively train branches with different frequency which leads to unfair evaluation and comparison among paths. To address these issues, we propose a novel neural architecture search method with balanced training strategy to ensure fair comparisons and a selective drop mechanism to reduces conflicts among candidate paths. The experimental results show that our proposed method can achieve a leading performance of 79.0% on ImageNet under mobile settings, which outperforms other state-of-the-art methods in both accuracy and efficiency.""","""This paper proposes a neural architecture search method that uses balanced sampling of architectures from the one-shot model and drops operators whose importance drops below a certain weight. The reviewers agreed that the paper's approach is intuitive, but main points of criticism were: - Lack of good baselines - Potentially unfair comparison, not using the same training pipeline - Lack of available code and thus of reproducibility. (The authors promised code in response, which is much appreciated. If the open-sourcing process has completed in time for the next version of the paper, I encourage the authors to include an anonymized version of the code in the submission to avoid this criticism.) The reviewers appreciated the authors' rebuttal, but it did not suffice for them to change their ratings. I agree with the reviewers that this work may be a solid contribution, but that additional evaluation is needed to demonstrate this. I therefore recommend rejection and encourage resubmission to a different venue after addressing the issues pointed out by the reviewers."""
870,"""Dynamic Model Pruning with Feedback""","['network pruning', 'dynamic reparameterization', 'model compression']","""Deep neural networks often have millions of parameters. This can hinder their deployment to low-end devices, not only due to high memory requirements but also because of increased latency at inference. We propose a novel model compression method that generates a sparse trained model without additional overhead: by allowing (i) dynamic allocation of the sparsity pattern and (ii) incorporating feedback signal to reactivate prematurely pruned weights we obtain a performant sparse model in one single training pass (retraining is not needed, but can further improve the performance). We evaluate the method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models and further that their performance surpasses all previously proposed pruning schemes (that come without feedback mechanisms).""","""The paper proposes a new, simple method for sparsifying deep neural networks. It use as temporary, pruned model to improve pruning masks via SGD, and eventually applying the SGD steps to the dense model. The paper is well written and shows SOTA results compared to prior work. The authors unanimously recommend to accept this work, based on simplicity of the proposed method and experimental results. I recommend to accept this paper, it seems to make a simple, yet effective contribution to compressing large-scale models. """
871,"""Differentially Private Meta-Learning""","['Differential Privacy', 'Meta-Learning', 'Federated Learning']","""Parameter-transfer is a well-known and versatile approach for meta-learning, with applications including few-shot learning, federated learning, with personalization, and reinforcement learning. However, parameter-transfer algorithms often require sharing models that have been trained on the samples from specific tasks, thus leaving the task-owners susceptible to breaches of privacy. We conduct the first formal study of privacy in this setting and formalize the notion of task-global differential privacy as a practical relaxation of more commonly studied threat models. We then propose a new differentially private algorithm for gradient-based parameter transfer that not only satisfies this privacy requirement but also retains provable transfer learning guarantees in convex settings. Empirically, we apply our analysis to the problems of federated learning with personalization and few-shot classification, showing that allowing the relaxation to task-global privacy from the more commonly studied notion of local privacy leads to dramatically increased performance in recurrent neural language modeling and image classification.""","""Thanks to the authors for the submission. This paper studies differentially private meta-learning, where the algorithm needs to use information across several learning tasks to protect the privacy of the data set from each task. The reviewers agree that this is a natural problem and the paper presents a solution that is essentially an adoption of differentially private SGD. There are several places the paper can improve. For the experimental evaluation, the authors should include a wider range of epsilon values in order to investigate the accuracy-privacy trade-off. The authors should also consider expanding the existing experiments with other datasets. """
872,"""Learn to Explain Efficiently via Neural Logic Inductive Learning""","['inductive logic programming', 'interpretability', 'attention']","""The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems. In this work, we study the learning to explain the problem in the scope of inductive logic programming (ILP). We propose Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data. In experiments, compared with the state-of-the-art models, we find NLIL is able to search for rules that are x10 times longer while remaining x3 times faster. We also show that NLIL can scale to large image datasets, i.e. Visual Genome, with 1M entities.""","""This paper proposes a differentiable inductive logic programming method in the vein of recent work on the topic, with efficiency-focussed improvements. Thanks the very detailed comments and discussion with the reviewers, my view is that the paper is acceptable to ICLR. I am mindful of the reasons for reluctance from reviewer #3 while these are not enough to reject the paper, I would strongly, *STRONGLY* advise the authors to consider adding a short section providing comparison to traditional ILP methods and NLM in their camera ready."""
873,"""Generating Robust Audio Adversarial Examples using Iterative Proportional Clipping""","['audio adversarial examples', 'attack', 'machine learning']","""Audio adversarial examples, imperceptible to humans, have been constructed to attack automatic speech recognition (ASR) systems. However, the adversarial examples generated by existing approaches usually involve notable noise, especially during the periods of silence and pauses, which may lead to the detection of such attacks. This paper proposes a new approach to generate adversarial audios using Iterative Proportional Clipping (IPC), which exploits temporal dependency in original audios to significantly limit human-perceptible noise. Specifically, in every iteration of optimization, we use a backpropagation model to learn the raw perturbation on the original audio to construct our clipping. We then impose a constraint on the perturbation at the positions with lower sound intensity across the time domain to eliminate the perceptible noise during the silent periods or pauses. IPC preserves the linear proportionality between the original audio and the perturbed one to maintain the temporal dependency. We show that the proposed approach can successfully attack the latest state-of-the-art ASR model Wav2letter+, and only requires a few minutes to generate an audio adversarial example. Experimental results also demonstrate that our approach succeeds in preserving temporal dependency and can bypass temporal dependency based defense mechanisms.""","""This paper proposes a method called iterative proportional clipping (IPC) for generating adversarial audio examples that are imperceptible to humans. The efficiency of the method is demonstrated by generating adversarial examples to attack the Wav2letter+ model. Overall, the reviewers found the work interesting, but somewhat incremental and analysis of the method and generated samples incomplete, and Im thus recommending rejection."""
874,"""Avoiding Negative Side-Effects and Promoting Safe Exploration with Imaginative Planning""","['Reinforcement Learning', 'AI-Safety', 'Model-Based Reinforcement Learning', 'Safe-Exploration']",""" With the recent proliferation of the usage of reinforcement learning (RL) agents for solving real-world tasks, safety emerges as a necessary ingredient for their successful application. In this paper, we focus on ensuring the safety of the agent while making sure that the agent does not cause any unnecessary disruptions to its environment. The current approaches to this problem, such as manually constraining the agent or adding a safety penalty to the reward function, can introduce bad incentives. In complex domains, these approaches are simply intractable, as they require knowing apriori all the possible unsafe scenarios an agent could encounter. We propose a model-based approach to safety that allows the agent to look into the future and be aware of the future consequences of its actions. We learn the transition dynamics of the environment and generate a directed graph called the imaginative module. This graph encapsulates all possible trajectories that can be followed by the agent, allowing the agent to efficiently traverse through the imagined environment without ever taking any action in reality. A baseline state, which can either represent a safe or an unsafe state (based on whichever is easier to define) is taken as a human input, and the imaginative module is used to predict whether the current actions of the agent can cause it to end up in dangerous states in the future. Our imaginative module can be seen as a ``plug-and-play'' approach to ensuring safety, as it is compatible with any existing RL algorithm and any task with discrete action space. Our method induces the agent to act safely while learning to solve the task. We experimentally validate our proposal on two gridworld environments and a self-driving car simulator, demonstrating that our approach to safety visits unsafe states significantly less frequently than a baseline.""","""This paper tackles the problem of safe exploration in RL. The proposed approach uses an imaginative module to construct a connectivity graph between all states using forward predictions. The idea then consists in using this graph to plan a trajectory which avoids states labelled as ""unsafe"". Several concerns were raised and the authors did not provide any rebuttal. A major point is that the assumption that the approach has access to what are unsafe states, which is either unreasonable in practice or makes the problem much simpler. Another major point is the uniform data collection about every state-action pairs. This can be really unsafe and defeats the purpose of safe exploration following this phase. These questions may be due to a miscomprehension, indicating that the paper should be clarified, as demanded by reviewers. Finally, the experiments would benefit from additional details in order to be correctly understood. All reviewers agree that this paper should be rejected. Hence, I recommend reject."""
875,"""Permutation Equivariant Models for Compositional Generalization in Language""","['Compositionality', 'Permutation Equivariance', 'Language Processing']","""Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for natural language modeling fail when such compositional generalization is required. The main contribution of this paper is to hypothesize that language compositionality is a form of group-equivariance. Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models. Throughout a variety of experiments on the SCAN tasks, we analyze the behavior of existing models under the lens of equivariance, and demonstrate that our equivariant architecture is able to achieve the type compositional generalization required in human language understanding.""","""This paper proposes an equivariant sequence-to-sequence model for dealing with compositionality of language. They show these models are better at SCAN tasks. Reviewers expressed two major concerns: 1) Limited clarity of section 4 which makes the paper difficult to understand. 2) Whether this could generalize to more complex types of compositionality. Authors responded by revising Section 4 and answering the question of generalization. While the reviewers are not 100% satisfied, they agree there is enough novel contribution in this paper. I thank the authors for submitting and look forward to seeing a clearer revision in the conference."""
876,"""A shallow feature extraction network with a large receptive field for stereo matching tasks""","['stereo matching', 'feature extraction network', 'convolution neural network', 'receptive field']","""Stereo matching is one of the important basic tasks in the computer vision field. In recent years, stereo matching algorithms based on deep learning have achieved excellent performance and become the mainstream research direction. Existing algorithms generally use deep convolutional neural networks (DCNNs) to extract more abstract semantic information, but we believe that the detailed information of the spatial structure is more important for stereo matching tasks. Based on this point of view, this paper proposes a shallow feature extraction network with a large receptive field. The network consists of three parts: a primary feature extraction module, an atrous spatial pyramid pooling (ASPP) module and a feature fusion module. The primary feature extraction network contains only three convolution layers. This network utilizes the basic feature extraction ability of the shallow network to extract and retain the detailed information of the spatial structure. In this paper, the dilated convolution and atrous spatial pyramid pooling (ASPP) module is introduced to increase the size of receptive field. In addition, a feature fusion module is designed, which integrates the feature maps with multiscale receptive fields and mutually complements the feature information of different scales. We replaced the feature extraction part of the existing stereo matching algorithms with our shallow feature extraction network, and achieved state-of-the-art performance on the KITTI 2015 dataset. Compared with the reference network, the number of parameters is reduced by 42%, and the matching accuracy is improved by 1.9%.""","""The paper proposed the use of a shallow layers with large receptive fields for feature extraction to be used in stereo matching tasks. It showed on the KITTI2015 dataset this method leads to large model size reducetion while maintaining a comparable performance. The main conern on this paper is the lack of technical contributions: * The task of stereo matching is very specialized one, simply presenting the model size reduction and performance is not interesting to general readers. Adding more analysis that help understanding why the proposed method helps in this particular task and for what kind of tasks a shallow feature instead a deeper one is perferred. In that way, the paper would be addressing much wider audiences. * The discussions on related work is not thorough enough, lacking of analysis of pros and cons between different methods."""
877,"""Training individually fair ML models with sensitive subspace robustness""","['fairness', 'adversarial robustness']","""We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training. We also demonstrate the effectiveness of the approach on two ML tasks that are susceptible to gender and racial biases. ""","""The paper addresses individual fairness scenario (treating similar users similarly) and proposes a new definition of algorithmic fairness that is based on the idea of robustness, i.e. by perturbing the inputs (while keeping them close with respect to the distance function), the loss of the model cannot be significantly increased. All reviewers and AC agree that this work is clearly of interest to ICLR, however the reviewers have noted the following potential weaknesses: (1) presentation clarity -- see R3s detailed suggestions e.g. comparison to Dwork et al, see R2s comments on how to improve, (2) empirical evaluations -- see R1s question about using more complex models, see R3s question on the usefulness of the word embeddings. Pleased to report that based on the author respond with extra experiments and explanations, R3 has raised the score to weak accept. All reviewers and AC agree that the most crucial concerns have been addressed in the rebuttal, and the paper could be accepted - congratulations to the authors! The authors are strongly urged to improve presentation clarity and to include the supporting empirical evidence when preparing the final revision."""
878,"""Sparse Transformer: Concentrated Attention Through Explicit Selection""","['Attention', 'Transformer', 'Machine Translation', 'Natural Language Processing', 'Sparse', 'Sequence to sequence learning']","""Self-attention-based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks. Self attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model called Sparse Transformer. Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments. Extensive experimental results on a series of natural language processing tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Sparse Transformer in model performance. Sparse Transformer reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation and IWSLT 2014 German-to-English translation. In addition, we conduct qualitative analysis to account for Sparse Transformer's superior performance. ""","""The paper proposes a variant of Sparse Transformer where only top K activations are kept in the softmax. The resulting transformer model is applied to NMT, image caption generation and language modeling, where it outperformed a vanilla Transformer. While the proposed idea is simple, easy to implement, and it does not add additional computational or memory cost, the reviewers raised several concerns in the discussion phase, including: several baselines missing from the tables; incomplete experimental details; incorrect/misleading selection of best performing model in tables of results (e.g. In Table 1, the authors boldface their results on En-De (29.4) and De-En (35.6) but in fact, the best performance on these is achieved by competing models, respectively 29.7 and 35.7. The caption claims their model ""achieves the state-of-the-art performances in En-Vi and De-En"" but this is not true for De-En (albeit by 0.1). In Table 3, they boldface their result of 1.05 but the best result is 1.02; the text says their model beats the Transf-XL ""with an advantage"" (of 0.01) but do not point out that the advantage of Adaptive-span over their model is 3 times as large (0.03)). This prevents me from recommending acceptance of this paper in its current form. I strongly encourage the authors to address these concerns in a future submission."""
879,"""Simultaneous Classification and Out-of-Distribution Detection Using Deep Neural Networks""","['Out-of-Distribution Detection', 'OOD detection', 'Outlier Exposure', 'Classification', 'Open-World Classification', 'Anomaly Detection', 'Novelty Detection', 'Calibration', 'Neural Networks']","""Deep neural networks have achieved great success in classication tasks during the last years. However, one major problem to the path towards articial intelligence is the inability of neural networks to accurately detect samples from novel class distributions and therefore, most of the existent classication algorithms assume that all classes are known prior to the training stage. In this work, we propose a methodology for training a neural network that allows it to efciently detect out-of-distribution (OOD) examples without compromising much of its classication accuracy on the test examples from known classes. Based on the Outlier Exposure (OE) technique, we propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with OE both on image and text classication tasks. Additionally, the way this method was constructed makes it suitable for training any classication algorithm that is based on Maximum Likelihood methods.""","""The paper proposes a method for out-of-distribution (OOD) detection for neural network classifiers. The reviewers raised several concerns about novelty, choice of baselines and the experimental evaluation. While the author rebuttal addressed some of these concerns, I think the paper is still not ready for acceptance as is. I encourage the authors to revise the paper and resubmit to a different venue."""
880,"""ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks""","['Compact Network Training', 'Linear Expansion', 'Over-parameterization', 'Knowledge Transfer']","""In this paper, we introduce a novel approach to training a given compact network. To this end, we build upon over-parameterization, which typically improves both optimization and generalization in neural network training, while being unnecessary at inference time. We propose to expand each linear layer of the compact network into multiple linear layers, without adding any nonlinearity. As such, the resulting expanded network can benefit from over-parameterization during training but can be compressed back to the compact one algebraically at inference. As evidenced by our experiments, this consistently outperforms training the compact network from scratch and knowledge distillation using a teacher. In this context, we introduce several expansion strategies, together with an initialization scheme, and demonstrate the benefits of our ExpandNets on several tasks, including image classification, object detection, and semantic segmentation. ""","""The paper develops linear over-parameterization methods to improve training of small neural network models. This is compared to training from scratch and other knowledge distillation methods. Reviewer 1 found the paper to be clear with good analysis, and raised concerns on generality and extensiveness of experimental work. Reviewer 2 raised concerns about the correctness of the approach and laid out several other possibilities. The authors conducted several other experiments and responded to all the feedback from the reviewers, although there was no final consensus on the scores. The review process has made this a better paper and it is of interest to the community. The paper demonstrates all the features of a good paper, but due to a large number of strong papers, was not accepted at this time."""
881,"""Unsupervised Learning of Automotive 3D Crash Simulations using LSTMs""","['LSTM', 'surface data', 'geometric deep learning', 'numerical simulation']","""Long short-term memory (LSTM) networks allow to exhibit temporal dynamic behavior with feedback connections and seem a natural choice for learning sequences of 3D meshes. We introduce an approach for dynamic mesh representations as used for numerical simulations of car crashes. To bypass the complication of using 3D meshes, we transform the surface mesh sequences into spectral descriptors that efficiently encode the shape. A two branch LSTM based network architecture is chosen to learn the representations and dynamics of the crash during the simulation. The architecture is based on unsupervised video prediction by an LSTM without any convolutional layer. It uses an encoder LSTM to map an input sequence into a fixed length vector representation. On this representation one decoder LSTM performs the reconstruction of the input sequence, while the other decoder LSTM predicts the future behavior by receiving initial steps of the sequence as seed. The spatio-temporal error behavior of the model is analysed to study how well the model can extrapolate the learned spectral descriptors into the future, that is, how well it has learned to represent the underlying dynamical structural mechanics. Considering that only a few training examples are available, which is the typical case for numerical simulations, the network performs very well.""",""" The paper proposes to train LSTMs to encode car crashes (a temporal sequence of 3D mesh representations). Decoder LSTMs can then be used to 1) reconstruct the input or 2) predict the future sequence of structural geometry. The authors propose to use a spectral feature representation based on prior work as input into the encoding LSTM. The main contribution of the paper (based on the author response) is the introduction of this spectral feature representation to the ML community. The authors used single 3D truck model to generate 205 simulations, of which 105 was used for training, and 100 for testing. The authors presented reconstruction errors and TSNE visualization of the LSTM's reconstruction weights. Discussion Summary: The paper got three weak rejects. The response provided by the authors failed to convince any of the reviewers to adjust their scores. The authors did not provide a revision based on the reviewer comments. Overall, the reviewers found the problem statement to be interesting. However, they had concerns about the following: 1. It's unclear what is the main technical contribution of the work. Several of the reviewers pointed out the lack of technical novelty. From the writing, it's unclear if the proposed spectral feature representation is taken directly from prior work or there was some additional innovation in this submission. Based on the author response, it seems the proposed feature representation is taken directly from prior work as the authors themselves acknowledge that the submission is taking two known ideas and combining them. This can be made more explicit in the paper itself. 2. Lack of comparison with existing work and experimental analysis There is no comparison against existing work on predicting 3D structure deformation over time. While the proposed representation is interesting, the is no comparison with other methods or other alternative representations. 
Without any comparisons it is difficult to judge how the reconstruction error corresponds to actual reconstruction quality. How much error is acceptable? The submission also fails to elucidate when the proposed representation should be used. Is it better than alternative representations (using the 3D mesh directly? using point clouds? using alternate basis functions?)? 3. What is being learned by the model? R3 pointed out that the authors mention that the model is trained in just half an hour, and questioned whether the dynamics function is trivial to learn and whether only two parts of the 3D structure are analyzed. The authors responded that the ""coarse"" dynamics are easier to learn than the ""fine"" scale dynamics. Is what is learned by the model sufficient? How well would a model that just modeled the car as a rigid object and predicted its position do? The lack of comparison against baselines and alternative methods/representations makes it difficult to judge the usefulness of the representation/approach that is presented. 4. The paper also has minor typos. Page 5: ""treat the for beams"" --> ""treat the four beams"" Page 7: ""marrked"" --> ""marked"" Overall the paper addresses an interesting problem domain, and introduces an interesting representation to the ML community, but fails to do a proper experimental analysis showing how the representation compares to alternatives. Since the paper does not claim the novelty of the representation as its contribution, it is essential that it perform a thorough investigation of the task and empirical studies comparing the proposed representation/method against baselines and alternatives."""
882,"""Hypermodels for Exploration""","['exploration', 'hypermodel', 'reinforcement learning']","""We study the use of hypermodels to represent epistemic uncertainty and guide exploration. This generalizes and extends the use of ensembles to approximate Thompson sampling. The computational cost of training an ensemble grows with its size, and as such, prior work has typically been limited to ensembles with tens of elements. We show that alternative hypermodels can enjoy dramatic efficiency gains, enabling behavior that would otherwise require hundreds or thousands of elements, and even succeed in situations where ensemble methods fail to learn regardless of size. This allows more accurate approximation of Thompson sampling as well as use of more sophisticated exploration schemes. In particular, we consider an approximate form of information-directed sampling and demonstrate performance gains relative to Thompson sampling. As alternatives to ensembles, we consider linear and neural network hypermodels, also known as hypernetworks. We prove that, with neural network base models, a linear hypermodel can represent essentially any distribution over functions, and as such, hypernetworks do not extend what can be represented.""","""This paper considers ensemble of deep learning models in order to quantify their epistemic uncertainty and use this for exploration in RL. The authors first show that limiting the ensemble to a small number of models, which is typically done for computational reasons, can severely limit the approximation of the posterior, which can translate into poor learning behaviours (e.g. over-exploitation). Instead, they propose a general approach based on hypermodels which can achieve the benefits of a large ensemble of models without the computational issues. They perform experiments in the bandit setting supporting their claim. They also provide a theoretical contribution, proving that an arbitrary distribution over functions can be represented by a linear hypermodel. The decision boundary for this paper is unclear given the confidence of reviewers and their scores. However, the tackled problem is important, and the proposed approach is sound and backed up by experiments. Most of reviewers concerns seemed to be addressed by the rebuttal, with the exception of few missing references which the authors should really consider adding. I would therefore recommend acceptance."""
883,"""Compressive Transformers for Long-Range Sequence Modelling""","['memory', 'language modeling', 'transformer', 'compression']","""We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning. We find the Compressive Transformer obtains state-of-the-art language modelling results in the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97bpc respectively. We also find it can model high-frequency speech effectively and can be used as a memory mechanism for RL, demonstrated on an object matching task. To promote the domain of long-range sequence learning, we propose a new open-vocabulary language modelling benchmark derived from books, PG-19.""","""The paper proposes a ""compressive transformer"", an extension of the transformer, that keeps a compressed long term memory in addition to the fixed sized memory. Both memories can be queried using attention weights. Unlike TransfomerXL that discards the oldest memories, the authors propose to ""compress"" those memories. The main contribution of this work is that that it introduces a model that can handle extremely long sequences. The authors also introduces a new language modeling dataset based on text from Project Gutenberg that has much longer sequences of words than existing datasets. They provide comprehensive experiments comparing against different compression strategies and compares against previous methods, showing that this method is able to result in lower word-level perplexity. In addition, the authors also present evaluations on speech, and image sequences for RL. Initially the paper received weak positive responses from the reviewers. The reviewers pointed out some clarity issues with details of the method and figures and some questions about design decisions. After rebuttal, all of the reviewers expressed that they were very satisfied with the authors responses and increased their scores (for a final of 2 accepts and 1 weak accept). The authors have provided a thorough and well-written paper, with comprehensive and convincing experiments. In addition, the ability to model long-range sequences and dependencies is an important problem and the AC agrees that this paper makes a solid contribution in tackling that problem. Thus, acceptance is recommended."""
884,"""Mint: Matrix-Interleaving for Multi-Task Learning""",['multi-task learning'],"""Deep learning enables training of large and flexible function approximators from scratch at the cost of large amounts of data. Applications of neural networks often consider learning in the context of a single task. However, in many scenarios what we hope to learn is not just a single task, but a model that can be used to solve multiple different tasks. Such multi-task learning settings have the potential to improve data efficiency and generalization by sharing data and representations across tasks. However, in some challenging multi-task learning settings, particularly in reinforcement learning, it is very difficult to learn a single model that can solve all the tasks while realizing data efficiency and performance benefits. Learning each of the tasks independently from scratch can actually perform better in such settings, but it does not benefit from the representation sharing that multi-task learning can potentially provide. In this work, we develop an approach that endows a single model with the ability to represent both extremes: joint training and independent training. To this end, we introduce matrix-interleaving (Mint), a modification to standard neural network models that projects the activations for each task into a different learned subspace, represented by a per-task and per-layer matrix. By learning these matrices jointly with the other model parameters, the optimizer itself can decide how much to share representations between tasks. On three challenging multi-task supervised learning and reinforcement learning problems with varying degrees of shared task structure, we find that this model consistently matches or outperforms joint training and independent training, combining the best elements of both.""","""Reviewers put this paper in the lower half and question the theoretical motivation and the experimental design. On the other hand, this seems like an alternative general framework for solving large-scale multi-task learning problems. In the future, I would encourage the authors to evaluate on multi-task benchmarks such as SuperGLUE, decaNLP and C4. Note: It seems there's more similarities with Ruder et al. (2019) [0] than the paper suggests. [0]pseudo-url"""
885,"""Vid2Game: Controllable Characters Extracted from Real-World Videos""",[],"""We extract a controllable model from a video of a person performing a certain activity. The model generates novel image sequences of that person, according to user-defined control signals, typically marking the displacement of the moving body. The generated video can have an arbitrary background, and effectively capture both the dynamics and appearance of the person. The method is based on two networks. The first maps a current pose, and a single-instance control signal to the next pose. The second maps the current pose, the new pose, and a given background, to an output frame. Both networks include multiple novelties that enable high-quality performance. This is demonstrated on multiple characters extracted from various videos of dancers and athletes.""","""This paper proposes to extract a character from a video, manually control the character, and render into the background in real time. The rendered video can have arbitrary background and capture both the dynamics and appearance of the person. All three reviewers praises the visual quality of the synthesized video and the paper is well written with extensive details. Some concerns are raised. For example, despite an excellent engineering effort, there is few things the reader would scientifically learn from this paper. Additional ablation study on each component would also help the better understanding of the approach. Given the level of efforts, the quality of the results and the reviewers comments, the ACs recommend acceptance as a poster."""
886,"""STABILITY AND CONVERGENCE THEORY FOR LEARNING RESNET: A FULL CHARACTERIZATION""","['ResNet', 'stability', 'convergence theory', 'over-parameterization']","""ResNet structure has achieved great success since its debut. In this paper, we study the stability of learning ResNet. Specifically, we consider the ResNet block = \phi(h_{l-1}+\tau\cdot g(h_{l-1})) where pseudo-formula is ReLU activation and pseudo-formula is a scalar. We show that for standard initialization used in practice, =1/\Omega(\sqrt{L}) is a sharp value in characterizing the stability of forward/backward process of ResNet, where pseudo-formula is the number of residual blocks. Specifically, stability is guaranteed for 1/\Omega(\sqrt{L}) while conversely forward process explodes when pseudo-formula for a positive constant pseudo-formula . Moreover, if ResNet is properly over-parameterized, we show for \le 1/\tilde{\Omega}(\sqrt{L}) gradient descent is guaranteed to find the global minima \footnote{We use pseudo-formula to hide logarithmic factor.}, which significantly enlarges the range of 1/\tilde{\Omega}(L) that admits global convergence in previous work. We also demonstrate that the over-parameterization requirement of ResNet only weakly depends on the depth, which corroborates the advantage of ResNet over vanilla feedforward network. Empirically, with pseudo-formula , deep ResNet can be easily trained even without normalization layer. Moreover, adding pseudo-formula can also improve the performance of ResNet with normalization layer.""","""The article studies the stability of ResNets in relation to initialisation and depth. The reviewers found that this is an interesting article with important theoretical and experimental results. However, they also pointed out that the results, while good, are based on adaptations of previous work and hence might not be particularly impactful. The reviewers found that the revision made important improvements, but not quite meeting the bar for acceptance, pointing out that the presentation and details in the proofs could still be improved. """
887,"""Style-based Encoder Pre-training for Multi-modal Image Synthesis""","['image-to_image translation', 'representation learning', 'multi-modal image synthesis', 'GANs']","""Image-to-image (I2I) translation aims to translate images from one domain to another. To tackle the multi-modal version of I2I translation, where input and output domains have a one-to-many relation, an extra latent input is provided to the generator to specify a particular output. Recent works propose involved training objectives to learn a latent embedding, jointly with the generator, that models the distribution of possible outputs. Alternatively, we study a simple, yet powerful pre-training strategy for multi-modal I2I translation. We first pre-train an encoder, using a proxy task, to encode the style of an image, such as color and texture, into a low-dimensional latent style vector. Then we train a generator to transform an input image along with a style-code to the output domain. Our generator achieves state-of-the-art results on several benchmarks with a training objective that includes just a GAN loss and a reconstruction loss, which simplifies and speeds up the training significantly compared to competing approaches. We further study the contribution of different loss terms to learning the task of multi-modal I2I translation, and finally we show that the learned style embedding is not dependent on the target domain and generalizes well to other domains.""","""The submission describes a new two-stage training scheme for multi-modal image-to-image translation. The new scheme is compared to a single-stage end-to-end baseline, and the advantage of the new scheme is demonstrated empirically. All three reviewers appreciate the proposed contribution and the quality improvement it brings over the baseline. At the same time, the reviewers see the contribution as incremental and not sufficient for an ICLR paper. The author response and paper adjustment have not changed the opinion of the reviewers, so the overall recommendation is to reject."""
888,"""PairNorm: Tackling Oversmoothing in GNNs""","['Graph Neural Network', 'oversmoothing', 'normalization']","""The performance of graph neural nets (GNNs) is known to gradually decrease with increasing number of layers. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing. Our main contribution is PairNorm, a novel normalization layer that is based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. What is more, PairNorm is fast, easy to implement without any change to network architecture nor any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PairNorm makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance for a new problem setting that benefits from deeper GNNs. Code is available at pseudo-url.""","""The paper proposes a way to tackle oversmoothing in Graph Neural Networks. The authors do a good job of motivating their approach, which is straightforward and works well. The paper is well written and the experiments are informative and well carried out. Therefore, I recommend acceptance. Please make suree thee final version reflects the discussion during the rebuttal."""
889,"""A GOODNESS OF FIT MEASURE FOR GENERATIVE NETWORKS""","['generative adversarial networks', 'goodness of fit', 'inception score', 'empirical approximation error', 'validation metric', 'frechet inception score']","""We define a goodness of fit measure for generative networks which captures how well the network can generate the training data, which is necessary to learn the true data distribution. We demonstrate how our measure can be leveraged to understand mode collapse in generative adversarial networks and provide practitioners with a novel way to perform model comparison and early stopping without having to access another trained model as with Frechet Inception Distance or Inception Score. This measure shows that several successful, popular generative models, such as DCGAN and WGAN, fall very short of learning the data distribution. We identify this issue in generative models and empirically show that overparameterization via subsampling data and using a mixture of models improves performance in terms of goodness of fit.""","""This paper proposes to measure the distance of the generator manifold to the training data. The proposed approach bears significant similarity to past studies that also sought to analyze the behavior of generative models that define a low-dimensional manifold (e.g. Webster 2019, and in particular, Xiang 2017). I recommend that the authors perform a broader literature search to better contextualize the claims and experiments put forth in the paper. The proposed method also suffers from some limitations that are not made clear in the paper. First, the measure depends only on the support of the generator, but not the density. For models that have support everywhere (exact likelihood models tend to have this property by construction), the measure is no longer meaningful. Even for VAEs, the measure is only easily applicable if the decoder is non-autoregressive so that the procedure can be applied only to the mean decoding. In this current state, I do not recommend the paper for submission. Xiang (2017). On the Effects of Batch and Weight Normalization in Generative Adversarial Networks Webster (2019). Detecting Overfitting of Deep Generative Networks via Latent Recovery """
890,"""FINBERT: FINANCIAL SENTIMENT ANALYSIS WITH PRE-TRAINED LANGUAGE MODELS""","['Financial sentiment analysis', 'financial text classification', 'transfer learning', 'pre-trained language models', 'BERT', 'NLP']","""While many sentiment classification solutions report high accuracy scores in product or movie review datasets, the performance of the methods in niche domains such as finance still largely falls behind. The reason of this gap is the domain-specific language, which decreases the applicability of existing models, and lack of quality labeled data to learn the new context of positive and negative in the specific domain. Transfer learning has been shown to be successful in adapting to new domains without large training data sets. In this paper, we explore the effectiveness of NLP transfer learning in financial sentiment classification. We introduce FinBERT, a language model based on BERT, which improved the state-of-the-art performance by 14 percentage points for a financial sentiment classification task in FinancialPhrasebank dataset.""","""This paper presents FinBERT, a BERT-based model that is further trained on a financial corpus and evaluated on Financial PhraseBank and Financial QA. The authors show that FinBERT slightly outperforms baseline methods on both tasks. The reviewers agree that the novelty is limited and this seems to be an application of BERT to financial dataset. There are many cases when it is okay to not present something entirely novel in terms of model as long as a paper still provides new insights on other things. Unfortunately, the new experiments in this paper are also not convincing. The improvements are very minor on small evaluation datasets, which makes the main contributions of the paper not enough for a venue such as ICLR. The authors did not respond to any of the reviewers' concerns. I recommend rejecting this paper."""
891,"""CRNet: Image Super-Resolution Using A Convolutional Sparse Coding Inspired Network""","['Convolutional sparse coding', 'LISTA', 'image super-resolution']","""Convolutional Sparse Coding (CSC) has been attracting more and more attention in recent years, for making full use of image global correlation to improve performance on various computer vision applications. However, very few studies focus on solving CSC based image Super-Resolution (SR) problem. As a consequence, there is no significant progress in this area over a period of time. In this paper, we exploit the natural connection between CSC and Convolutional Neural Networks (CNN) to address CSC based image SR. Specifically, Convolutional Iterative Soft Thresholding Algorithm (CISTA) is introduced to solve CSC problem and it can be implemented using CNN architectures. Then we develop a novel CSC based SR framework analogy to the traditional SC based SR methods. Two models inspired by this framework are proposed for pre-/post-upsampling SR, respectively. Compared with recent state-of-the-art SR methods, both of our proposed models show superior performance in terms of both quantitative and qualitative measurements.""","""All three reviewers agreed that the paper should not be accepted. No rebuttal was offered, thus the paper is rejected."""
892,"""Model Ensemble-Based Intrinsic Reward for Sparse Reward Reinforcement Learning""","['Reinforcement Learning', 'Intrinsic Reward', 'Dynamics Model', 'Ensemble']","""In this paper, a new intrinsic reward generation method for sparse-reward reinforcement learning is proposed based on an ensemble of dynamics models. In the proposed method, the mixture of multiple dynamics models is used to approximate the true unknown transition probability, and the intrinsic reward is designed as the minimum of the surprise seen from each dynamics model to the mixture of the dynamics models. In order to show the effectiveness of the proposed intrinsic reward generation method, a working algorithm is constructed by combining the proposed intrinsic reward generation method with the proximal policy optimization (PPO) algorithm. Numerical results show that for representative locomotion tasks, the proposed model-ensemble-based intrinsic reward generation method outperforms the previous methods based on a single dynamics model.""","""This paper considers the challenge of sparse reward reinforcement learning through intrinsic reward generation based on the deviation in predictions of an ensemble of dynamics models. This is combined with PPO and evaluated in some Mujoco domains. The main issue here was with the way the sparse rewards were provided in the experiments, which was artificial and could lead to a number of problems with the reward structure and partial observability. The work was also considered incremental in its novelty. These concerns were not adequately rebutted, and so as it stands this paper should be rejected."""
893,"""End to End Trainable Active Contours via Differentiable Rendering""",[],"""We present an image segmentation method that iteratively evolves a polygon. At each iteration, the vertices of the polygon are displaced based on the local value of a 2D shift map that is inferred from the input image via an encoder-decoder architecture. The main training loss that is used is the difference between the polygon shape and the ground truth segmentation mask. The network employs a neural renderer to create the polygon from its vertices, making the process fully differentiable. We demonstrate that our method outperforms the state of the art segmentation networks and deep active contour solutions in a variety of benchmarks, including medical imaging and aerial images.""","""The submission presents a differentiable take on classic active contour methods, which used to be popular in computer vision. The method is sensible and the results are strong. After the revision, all reviewers recommend accepting the paper."""
894,"""Adversarial AutoAugment""","['Automatic Data Augmentation', 'Adversarial Learning', 'Reinforcement Learning']","""Data augmentation (DA) has been widely utilized to improve generalization in training deep neural networks. Recently, human-designed data augmentation has been gradually replaced by automatically learned augmentation policy. Through finding the best policy in well-designed search space of data augmentation, AutoAugment (Cubuk et al., 2019) can significantly improve validation accuracy on image classification tasks. However, this approach is not computationally practical for large-scale problems. In this paper, we develop an adversarial method to arrive at a computationally-affordable solution called Adversarial AutoAugment, which can simultaneously optimize target related object and augmentation policy search loss. The augmentation policy network attempts to increase the training loss of a target network through generating adversarial augmentation policies, while the target network can learn more robust features from harder examples to improve the generalization. In contrast to prior work, we reuse the computation in target network training for policy evaluation, and dispense with the retraining of the target network. Compared to AutoAugment, this leads to about 12x reduction in computing cost and 11x shortening in time overhead on ImageNet. We show experimental results of our approach on CIFAR-10/CIFAR-100, ImageNet, and demonstrate significant performance improvements over state-of-the-art. On CIFAR-10, we achieve a top-1 test error of 1.36%, which is the currently best performing single model. On ImageNet, we achieve a leading performance of top-1 accuracy 79.40% on ResNet-50 and 80.00% on ResNet-50-D without extra data.""","""This paper proposes a method to learn data augmentation policies using an adversarial loss. In contrast to AutoAugment where an augmentation policy generator is trained by RL (computationally expensive), the authors propose to train a policy generator and the target classifier simultaneously. This is done in an adversarial fashion by computing augmentation policies which increase the loss of the classifier. The authors show that this approach leads to roughly an order of magnitude improvement in computational cost over AutoAugment, while improving the test performance. The reviewers agree that the presentation is clear and that the proposed method is sound, and that there is a significant practical benefit of using such a technique. As most of the concerns were addressed in the discussion phase, I will recommend acceptance of this paper. We ask the authors to update the manuscript to address the remaining (minor) concerns. """
895,"""Towards Simplicity in Deep Reinforcement Learning: Streamlined Off-Policy Learning""","['Deep Reinforcement Learning', 'Sample Efficiency', 'Off-Policy Algorithms']","""The field of Deep Reinforcement Learning (DRL) has recently seen a surge in the popularity of maximum entropy reinforcement learning algorithms. Their popularity stems from the intuitive interpretation of the maximum entropy objective and their superior sample efficiency on standard benchmarks. In this paper, we seek to understand the primary contribution of the entropy term to the performance of maximum entropy algorithms. For the Mujoco benchmark, we demonstrate that the entropy term in Soft Actor Critic (SAC) principally addresses the bounded nature of the action spaces. With this insight, we propose a simple normalization scheme which allows a streamlined algorithm without entropy maximization match the performance of SAC. Our experimental results demonstrate a need to revisit the benefits of entropy regularization in DRL. We also propose a simple non-uniform sampling method for selecting transitions from the replay buffer during training. We further show that the streamlined algorithm with the simple non-uniform sampling scheme outperforms SAC and achieves state-of-the-art performance on challenging continuous control tasks.""","""The paper studies the role of entropy in maximum entropy RL, particularly in soft actor-critic, and proposes an action normalization scheme that leads to a new algorithm, called Streamlined Off-Policy (SOP), that does not maximize entropy, but retains or exceeds the performance of SAC. Independently from SOP, the paper also introduces Emphasizing Recent Experience (ERE) that samples minibatches from the replay buffer by prioritizing the most recent samples. After rounds of discussion and a revised version with added experiments, the reviewers viewed ERE as the main contribution, while had doubts regarding the claimed benefits of SOP. However, the paper is currently structured around SOP, and the effectiveness of ERE, which can be applied to any off-policy algorithm, is not properly studied. Therefore, I recommend rejection, but encourage the authors to revisit the work with an emphasis on ERE."""
896,"""RTFM: Generalising to New Environment Dynamics via Reading""","['reinforcement learning', 'policy learning', 'reading comprehension', 'generalisation']","""Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2 generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2 produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps.""","""This paper proposes RTFM, a new model in the field of language-conditioned policy learning. This approach is promising and important in reinforcement learning because of the difficulty to learn policies in new environments. Reviewers appreciate the importance of the problem and the effective approach. After the author response which addressed some of the major concerns, reviewers feel more positive about the paper. They comment, though, that presentation could be clearer, and the limitations of using synthetic data should be discussed in depth. I thank the authors for submitting this paper."""
897,"""Disentangling Style and Content in Anime Illustrations""","['Adversarial Training', 'Generative Models', 'Style Transfer', 'Anime']","""Existing methods for AI-generated artworks still struggle with generating high-quality stylized content, where high-level semantics are preserved, or separating fine-grained styles from various artists. We propose a novel Generative Adversarial Disentanglement Network which can disentangle two complementary factors of variations when only one of them is labelled in general, and fully decompose complex anime illustrations into style and content in particular. Training such model is challenging, since given a style, various content data may exist but not the other way round. Our approach is divided into two stages, one that encodes an input image into a style independent content, and one based on a dual-conditional generator. We demonstrate the ability to generate high-fidelity anime portraits with a fixed content and a large variety of styles from over a thousand artists, and vice versa, using a single end-to-end network and with applications in style transfer. We show this unique capability as well as superior output to the current state-of-the-art.""","""This paper proposes a two-stage adversarial training approach for learning a disentangled representation of style and content of anime images. Unlike the previous style transfer work, here style is defined as the identity of a particular anime artist, rather than a set of uninterpretable style features. This allows the trained network to generate new anime images which have a particular content and are drawn in the style of a particular artist. While the approach works well, the reviewers voiced concerns about the method (overly complicated and somewhat incremental) and the quality of the experimental section (lack of good baselines and quantitative comparisons at least in terms of the disentanglement quality). It was also mentioned that releasing the code and the dataset would strengthen the appeal of the paper. While the authors have addressed some of the reviewers concerns, unfortunately it was not enough to persuade the reviewers to change their marks. Hence, I have to recommend a rejection."""
898,"""Feature Partitioning for Efficient Multi-Task Architectures""","['multi-task learning', 'neural architecture search', 'multi-task architecture search']","""Multi-task learning promises to use less data, parameters, and time than training separate single-task models. But realizing these benefits in practice is challenging. In particular, it is difficult to define a suitable architecture that has enough capacity to support many tasks while not requiring excessive compute for each individual task. There are difficult trade-offs when deciding how to allocate parameters and layers across a large set of tasks. To address this, we propose a method for automatically searching over multi-task architectures that accounts for resource constraints. We define a parameterization of feature sharing strategies for effective coverage and sampling of architectures. We also present a method for quick evaluation of such architectures with feature distillation. Together these contributions allow us to quickly optimize for parameter-efficient multi-task models. We benchmark on Visual Decathlon, demonstrating that we can automatically search for and identify architectures that effectively make trade-offs between task resource requirements while maintaining a high level of final performance.""","""This paper considers how to create efficient architectures for multi-task neural networks. R1 recommends Weak Reject, identifying concerns about the clarity of writing, unsupported claims, and missing or unclear technical details. R2 recommends Weak Accept but calls this a ""borderline"" case, and has concerns about experiments and comparisons to baselines. R3 also has concerns about experiments and baselines, and feels the approach is somewhat ad hoc. The authors submitted a response that addressed some of these issues, but the authors chose to maintain their decisions. The AC feels the paper has merit but given these slightly negative to borderline reviews, we cannot recommend acceptance at this time. We hope the reviewer comments help the authors to prepare a revision for another venue."""
899,"""A Fine-Grained Spectral Perspective on Neural Networks""","['Neural Tangent Kernel', 'Neural Network Gaussian Process', 'Spectral theory', 'Eigenvalues', 'Harmonic analysis']","""Are neural networks biased toward simple functions? Does depth always help learn more complex features? Is training the last layer of a network as good as training all layers? These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective. We will study the spectra of the *Conjugate Kernel, CK,* (also called the *Neural Network-Gaussian Process Kernel*), and the *Neural Tangent Kernel, NTK*. Roughly, the CK and the NTK tell us respectively ``""what a network looks like at initialization"" and ""``what a network looks like during and after training."" Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks. By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights by extensive experiments of neural networks. We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks. We have open-sourced the code for it and for generating the plots in this paper at pseudo-url.""","""The authors develop a spectral analysis on the boolean cube for the neural ""conjugate kernel"" (CK) and ""tangent kernel"" (NTK). The analysis sheds light into inductive biases of neural networks, such as whether they are biased to simple functions. This work contains rigorous analysis and theory which is useful for further discussions. However, the theory and insights do not feel complete. One important drawback is that the analysis is limited by the boolean cube setting; this also means that it is more difficult to link theory to practical scenarios. This has been discussed a lot during the rebuttal and among reviewers. Empirical validation has attempted to deal with these concerns, but it would be useful to have this validation coming from theory, or at least have further relevant theoretical insights. This could happen by further building on the theorem provided in the rebuttal for eigenvalue behavior when d is large."""
900,"""Fast Neural Network Adaptation via Parameter Remapping and Architecture Search""",[],"""Deep neural networks achieve remarkable performance in many computer vision tasks. Most state-of-the-art~(SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone, commonly pre-trained on ImageNet. However, performance gains can be achieved by designing network architectures specifically for detection and segmentation, as shown by recent neural architecture search (NAS) research for detection and segmentation. One major challenge though, is that ImageNet pre-training of the search space representation (a.k.a. super network) or the searched networks incurs huge computational cost. In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and parameters of a seed network (e.g. a high performing manually designed backbone) to become a network with different depth, width, or kernels via a Parameter Remapping technique, making it possible to utilize NAS for detection/segmentation tasks a lot more efficiently. In our experiments, we conduct FNA on MobileNetV2 to obtain new networks for both segmentation and detection that clearly out-perform existing networks designed both manually and by NAS. The total computation cost of FNA is significantly less than SOTA segmentation/detection NAS approaches: 1737 pseudo-formula less than DPC, 6.8 pseudo-formula less than Auto-DeepLab and 7.4 pseudo-formula less than DetNAS. The code is available at pseudo-url .""","""Main content: Paper proposes a fast network adaptation (FNA) method, which takes a pre-trained image classification network, and produces a network for the task of object detection/semantic segmentation Summary of discussion: reviewer1: interesting paper with good results, specifically without the need to do pre-training on Imagenet. Cons are better comparisons to existing methods and run on more datasets. reviewer2: interesting idea on adapting source network network via parameter re-mapping that offers good results in both performance and training time. reviewer3: novel method overall, though some concerns on the concrete parameter remapping scheme. Results are impressive Recommendation: Interesting idea and good results. Paper could be improved with better comparison to existing techniques. Overall recommend weak accept."""
901,"""Bootstrapping the Expressivity with Model-based Planning""","['reinforcement learning theory', 'model-based reinforcement learning', 'planning', 'expressivity', 'approximation theory', 'deep reinforcement learning theory']","""We compare the model-free reinforcement learning with the model-based approaches through the lens of the expressive power of neural networks for policies, pseudo-formula -functions, and dynamics. We show, theoretically and empirically, that even for one-dimensional continuous state space, there are many MDPs whose optimal pseudo-formula -functions and policies are much more complex than the dynamics. We hypothesize many real-world MDPs also have a similar property. For these MDPs, model-based planning is a favorable algorithm, because the resulting policies can approximate the optimal policy significantly better than a neural network parameterization can, and model-free or model-based policy optimization rely on policy parameterization. Motivated by the theory, we apply a simple multi-step model-based bootstrapping planner (BOOTS) to bootstrap a weak pseudo-formula -function into a stronger policy. Empirical results show that applying BOOTS on top of model-based or model-free policy optimization algorithms at the test time improves the performance on MuJoCo benchmark tasks. ""","""The paper provides some insight why model-based RL might be more efficient than model-free methods. It provides an example that even though the dynamics is simple, the value function is quite complicated (it is in a fractal). Even though the particular example might be novel and the construction interesting, this relation between dynamics and value function is not surprising, and perhaps part of the folklore. The paper also suggests a model-based RL methods and provides some empirical results. The reviewers find the paper interesting, but they expressed several concerns about the relevance of the particular example, the relation of the theory to empirical results, etc. The authors provided a rebuttal, but the reviewers were not convinced. Given that we have two Weak Rejects and the reviewer who is Weak Accept is not completely convinced, unfortunately I can only recommend rejection of this paper at this stage."""
902,"""Measuring Calibration in Deep Learning""","['Deep Learning', 'Multiclass Classification', 'Classification', 'Uncertainty Estimation', 'Calibration']","""Overconfidence and underconfidence in machine learning classifiers is measured by calibration: the degree to which the probabilities predicted for each class match the accuracy of the classifier on that prediction. We propose two new measures for calibration, the Static Calibration Error (SCE) and Adaptive Calibration Error (ACE). These measures take into account every prediction made by a model, in contrast to the popular Expected Calibration Error.""","""The authors propose two measures of calibration that don't simply rely on the top prediction. The reviewers gave a lot of useful feedback. Unfortunately, the authors didn't respond."""
903,"""Reinforcement Learning without Ground-Truth State""","['Self-supervised', 'goal-conditioned reinforcement learning']","""To perform robot manipulation tasks, a low-dimensional state of the environment typically needs to be estimated. However, designing a state estimator can sometimes be difficult, especially in environments with deformable objects. An alternative is to learn an end-to-end policy that maps directly from high-dimensional sensor inputs to actions. However, if this policy is trained with reinforcement learning, then without a state estimator, it is hard to specify a reward function based on high-dimensional observations. To meet this challenge, we propose a simple indicator reward function for goal-conditioned reinforcement learning: we only give a positive reward when the robot's observation exactly matches a target goal observation. We show that by relabeling the original goal with the achieved goal to obtain positive rewards (Andrychowicz et al., 2017), we can learn with the indicator reward function even in continuous state spaces. We propose two methods to further speed up convergence with indicator rewards: reward balancing and reward filtering. We show comparable performance between our method and an oracle which uses the ground-truth state for computing rewards. We show that our method can perform complex tasks in continuous state spaces such as rope manipulation from RGB-D images, without knowledge of the ground-truth state.""","""This paper considers the problem of reinforcement learning with goal-conditioned agents where the agents do not have access to the ground truth state. The paper builds on the ideas in hindsight experience replay (HER), a method that relabels past trajectories with a goal set in hindsight. This hindsight mechanism enables indicator reward functions to be useful even with image inputs. Two technical contributions are reward balancing (balancing positive and negative experience) and reward filtering (a heuristic for removing false negatives). The method is tested on multiple tasks including a novel RopePush task in simulation. The reviewers discussed strengths and limitations of the paper. One strength was that the writing was clear for the reviewers. One limitation was the paper's novelty, as most of these ideas are already present in HER with the exception of reward filtering. Another major concern was that the experiments were not sufficiently informative. The simulation tasks did not adequately distinguish the proposed method from the baseline (in two of the three tasks) and the third task (RopePush) was simplified substantially (using invisible robot arms). The real world task did not require the pixel observations. The analysis of the method was also found to be somewhat limited by the reviewers, though this was partially addressed by the authors. This paper is not yet ready for publication since the proposed method has insufficient supporting evidence. A more thorough experiment could provide stronger evidence by showing a regime where the proposed method performs better than alternatives."""
904,"""Self-labelling via simultaneous clustering and representation learning""","['self-supervision', 'feature representation learning', 'clustering']","""Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks. However, doing so naively leads to ill posed learning problems with degenerate solutions. In this paper, we propose a novel and principled learning formulation that addresses these issues. The method is obtained by maximizing the information between labels and input data indices. We show that this criterion extends standard cross-entropy minimization to an optimal transport problem, which we solve efficiently for millions of input images and thousands of labels using a fast variant of the Sinkhorn-Knopp algorithm. The resulting method is able to self-label visual data so as to train highly competitive image representations without manual labels. Our method achieves state of the art representation learning performance for AlexNet and ResNet-50 on SVHN, CIFAR-10, CIFAR-100 and ImageNet and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline. ""","""The paper focuses on supervised and self-supervised learning. The originality is to formulate the self-supervised criterion in terms of optimal transport, where the trained representation is required to induce pseudo-formula equidistributed clusters. The formulation is well founded; in practice, the approach proceeds by alternatively optimizing the cross-entropy loss (SGD) and the pseudo-loss, through a fast version of the Sinkhorn-Knopp algorithm, and scales up to million of samples and thousands of classes. Some concerns about the robustness w.r.t. imbalanced classes, the ability to deliver SOTA supervised performances, the computational complexity have been answered by the rebuttal and handled through new experiments. The convergence toward a local minimum is shown; however, increasing the number of pseudo-label optimization rounds might degrade the results. Overall, I recommend to accept the paper as an oral presentation. A more fancy title would do a better justice to this very nice paper (""Self-labelling learning via optimal transport"" ?). """
905,"""Ridge Regression: Structure, Cross-Validation, and Sketching""","['ridge regression', 'sketching', 'random matrix theory', 'cross-validation', 'high-dimensional asymptotics']","""We study the following three fundamental problems about ridge regression: (1) what is the structure of the estimator? (2) how to correctly use cross-validation to choose the regularization parameter? and (3) how to accelerate computation without losing too much accuracy? We consider the three problems in a unified large-data linear model. We give a precise representation of ridge regression as a covariance matrix-dependent linear combination of the true parameter and the noise. We study the bias of pseudo-formula -fold cross-validation for choosing the regularization parameter, and propose a simple bias-correction. We analyze the accuracy of primal and dual sketching for ridge regression, showing they are surprisingly accurate. Our results are illustrated by simulations and by analyzing empirical data.""","""The paper studies theoretical properties of ridge regression, and in particular how to correct for the bias of the estimator. The reviewers appreciated the contribution and the fact that you updated the manuscript to make it clearer. I however advise the authors to think about the best way to maximize impact for the ICLR audience, perhaps by providing relevant examples from the ML literature."""
906,"""SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum""","['distributed optimization', 'decentralized training methods', 'communication-efficient distributed training with momentum', 'large-scale parallel SGD']","""Distributed optimization is essential for training large models on large datasets. Multiple approaches have been proposed to reduce the communication overhead in distributed training, such as synchronizing only after performing multiple local SGD steps, and decentralized methods (e.g., using gossip algorithms) to decouple communications among workers. Although these methods run faster than AllReduce-based methods, which use blocking communication before every update, the resulting models may be less accurate after the same number of updates. Inspired by the BMUF method of Chen & Huo (2016), we propose a slow momentum (SlowMo) framework, where workers periodically synchronize and perform a momentum update, after multiple iterations of a base optimization algorithm. Experiments on image classification and machine translation tasks demonstrate that SlowMo consistently yields improvements in optimization and generalization performance relative to the base optimizer, even when the additional overhead is amortized over many updates so that the SlowMo runtime is on par with that of the base optimizer. We provide theoretical convergence guarantees showing that SlowMo converges to a stationary point of smooth non-convex losses. Since BMUF can be expressed through the SlowMo framework, our results also correspond to the first theoretical convergence guarantees for BMUF.""","""This paper presents a new approach, SlowMo, to improve communication-efficient distribution training with SGD. The main method is based on the BMUF approach and relies on workers to periodically synchronize and perform a momentum update. This works well in practice as shown in the empirical results. Reviewers had a couple of concerns regarding the significance of the contributions. After the rebuttal period some of their doubts were clarified. Even though they find that the solutions of the paper are an incremental extension of existing work, they believe this is a useful extension. For this reason, I recommend to accept this paper."""
907,"""MANIFOLD FORESTS: CLOSING THE GAP ON NEURAL NETWORKS""","['machine learning', 'structured learning', 'projections', 'structured data', 'images', 'classification']","""Decision forests (DF), in particular random forests and gradient boosting trees, have demonstrated state-of-the-art accuracy compared to other methods in many supervised learning scenarios. In particular, DFs dominate other methods in tabular data, that is, when the feature space is unstructured, so that the signal is invariant to permuting feature indices. However, in structured data lying on a manifold---such as images, text, and speech---neural nets (NN) tend to outperform DFs. We conjecture that at least part of the reason for this is that the input to NN is not simply the feature magnitudes, but also their indices (for example, the convolution operation uses ``feature locality). In contrast, naive DF implementations fail to explicitly consider feature indices. A recently proposed DF approach demonstrates that DFs, for each node, implicitly sample a random matrix from some specific distribution. Here, we build on that to show that one can choose distributions in a manifold aware fashion. For example, for image classification, rather than randomly selecting pixels, one can randomly select contiguous patches. We demonstrate the empirical performance of data living on three different manifolds: images, time-series, and a torus. In all three cases, our Manifold Forest (Mf) algorithm empirically dominates other state-of-the-art approaches that ignore feature space structure, achieving a lower classification error on all sample sizes. This dominance extends to the MNIST data set as well. Moreover, both training and test time is significantly faster for manifold forests as compared to deep nets. This approach, therefore, has promise to enable DFs and other machine learning methods to close the gap with deep nets on manifold-valued data. ""","""This work explores how to leverage structure of this input in decision trees, the way this is done for example in convolutional networks. All reviewers agree that the experimental validation of the method as presented is extremely weak. Authors have not provided a response to answer the many concerns raised by reviewers. Therefore, we recommend rejection."""
908,"""Learning a Behavioral Repertoire from Demonstrations""","['Behavioral Repertoires', 'Imitation Learning', 'Deep Learning', 'Adaptation', 'StarCraft 2']","""Imitation Learning (IL) is a machine learning approach to learn a policy from a set of demonstrations. IL can be useful to kick-start learning before applying reinforcement learning (RL) but it can also be useful on its own, e.g. to learn to imitate human players in video games. However, a major limitation of current IL approaches is that they learn only a single ``""average"" policy based on a dataset that possibly contains demonstrations of numerous different types of behaviors. In this paper, we present a new approach called Behavioral Repertoire Imitation Learning (BRIL) that instead learns a repertoire of behaviors from a set of demonstrations by augmenting the state-action pairs with behavioral descriptions. The outcome of this approach is a single neural network policy conditioned on a behavior description that can be precisely modulated. We apply this approach to train a policy on 7,777 human demonstrations for the build-order planning task in StarCraft II. Dimensionality reduction techniques are applied to construct a low-dimensional behavioral space from the high-dimensional army unit composition of each demonstration. The results demonstrate that the learned policy can be effectively manipulated to express distinct behaviors. Additionally, by applying the UCB1 algorithm, the policy can adapt its behavior -in-between games- to reach a performance beyond that of the traditional IL baseline approach.""","""This paper proposes a way to lean context-dependent policies from demonstrations, where the context represents behavior labels obtained by annotating demonstrations with differences in behavior across dimensions and the reduced in 2 dimensions. Results are conducted in the domain of StarCraft. The main concerns from the reviewers related to the papers novelty (as pointed by R2) and experiments (particularly the lack of comparison with other methods and the evaluation of only 4 out of the 62 behaviour clusters, as pointed by R3). As such, I cannot recommend acceptance, as current results do not provide strong empirical evidence about the superiority of the method against other alternatives."""
909,"""Learning to Generate Grounded Visual Captions without Localization Supervision""","['image captioning', 'video captioning', 'self-supervised learning', 'visual grounding']","""When automatically generating a sentence description for an image or video, it often remains unclear how well the generated caption is grounded, or if the model hallucinates based on priors in the dataset and/or the language model. The most common way of relating image regions with words in caption models is through an attention mechanism over the regions that are used as input to predict the next word. The model must therefore learn to predict the attentional weights without knowing the word it should localize. This is difficult to train without grounding supervision since recurrent models can propagate past information and there is no explicit signal to force the captioning model to properly ground the individual decoded words. In this work, we help the model to achieve this via a novel cyclical training regimen that forces the model to localize each word in the image after the sentence decoder generates it, and then reconstruct the sentence from the localized image region(s) to match the ground-truth. Our proposed framework only requires learning one extra fully-connected layer (the localizer), a layer that can be removed at test time. We show that our model significantly improves grounding accuracy without relying on grounding supervision or introducing extra computation during inference for both image and video captioning tasks.""","""This paper proposes a cyclical training scheme for grounded visual captioning, where a localization model is trained to identify the regions in the image referred to by caption words, and a reconstruction step is added conditioned on this information. This extends prior work which required grounding supervision. While the proposed approach is sensible and grounding of generated captions is an important requirement, some reviewers (me included) pointed out concerns about the relevance of this paper's contributions. I found the authors explanation that the objective is not to improve the captioning accuracy but to refine its grounding performance without any localization supervision a bit unconvincing -- I would expect that better grounding would be reflected in overall better captioning performance, which seems to have happened with the supervised model of Zhou et al. (2019). In fact, even the localization gains seem rather small: The attention accuracy for localizer is 20.4% and is higher than the 19.3% from the decoder at the end of training. Overall, the proposed model is an incremental change on the training of an image captioning system, by adding a localizer component, which is not used at test time. The authors' claim that The network is implicitly regularized to update its attention mechanism to match with the localized image regions is also unclear to me -- there is nothing in the loss function that penalizes the difference between these two attentions, as the gradient doesnt backprop from one component to another. Sharing the LSTM and Language LSTM doesnt imply this, as the localizer is just providing guidance to the decoder, but there is no reason this will help the attention of the original model. Other natural questions left unanswered by this paper are: - What happens if we use the localizer also in test time (calling the decoder twice)? Will the captions improve? This experiment would be needed to assess the potential of this method to help image captioning. 
- Can we keep refining this iteratively? - Can we add a loss term on the disagreement of the two attentions to actually achieve the said regularisation effect? Finally, the paper [1] (cited by the authors) seems to employ a similar strategy (encoder-decoder with reconstructor) with shown benefits in video captioning. [1] Bairui Wang, Lin Ma, Wei Zhang, and Wei Liu. Reconstruction network for video captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7622-7631, 2018. I suggest addressing some of these concerns in a revised version of the paper."""
910,"""Neural networks with motivation""","['neuroscience', 'brain', 'motivation', 'learning', 'reinforcement learning', 'recurrent neural network', 'deep learning']","""How can animals behave effectively in conditions involving different motivational contexts? Here, we propose how reinforcement learning neural networks can learn optimal behavior for dynamically changing motivational salience vectors. First, we show that Q-learning neural networks with motivation can navigate in environment with dynamic rewards. Second, we show that such networks can learn complex behaviors simultaneously directed towards several goals distributed in an environment. Finally, we show that in Pavlovian conditioning task, the responses of the neurons in our model resemble the firing patterns of neurons in the ventral pallidum (VP), a basal ganglia structure involved in motivated behaviors. We show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely-tuned classes of neurons, responding to positive and negative rewards. Our model generates predictions for the VP connectivity. We conclude that networks with motivation can rapidly adapt their behavior to varying conditions without changes in synaptic strength when expected reward is modulated by motivation. Such networks may also provide a mechanism for how hierarchical reinforcement learning is implemented in the brain.""","""This paper proposes a deep RL framework that incorporates motivation as input features, and is tested on 3 simplified domains, including one which is presented to rodents. While R2 found the paper well-written and interesting to read, a common theme among reviewer comments is that its not clear what the main contribution is, as it seems to simultaneously be claiming a ML contribution (motivation as a feature input helps with certain tasks) as well as a neuroscientific contribution (their agent exhibited representations that clustered similarly to those in animals). In trying to do both, its perhaps doing both a disservice. I think its commendable to try to bridge the fields of deep RL and neuroscience, and this is indeed an intriguing paper. However any such paper still needs to have a clear contribution. It seems that the ML contributions are too slight to be of general practical use, while the neuroscientific contributions are muddled somewhat. The authors several times mentioned the space constraints limiting their explanations. Perhaps this is an indication that they are trying to cover too much within one paper. I urge the authors to consider splitting it up into two separate works in order to give both the needed focus. I also have some concerns about the results themselves. R1 and R3 both mentioned that the comparison between the non-motivated agent and the motivated agent wasnt quite fair, since one is essentially only given partial information. Its therefore not clear how we should be interpreting the performance difference. Second, why was the non-motivated agent not analyzed in the same way as the motivated agent for the Pavlovian task? Isnt this a crucial comparison to make, if one wanted to argue that the motivational salience is key to reproducing the representational similarities of the animals? (The new experiment with the random fixed weights is interesting, I would have liked to see those results.) For these reasons and the ones laid out in the extensive comments of the reviewers, Im afraid I have to recommend reject. """
911,"""Episodic Reinforcement Learning with Associative Memory""","['Deep Reinforcement Learning', 'Episodic Control', 'Episodic Memory', 'Associative Memory', 'Non-Parametric Method', 'Sample Efficiency']","""Sample efficiency has been one of the major challenges for deep reinforcement learning. Non-parametric episodic control has been proposed to speed up parametric reinforcement learning by rapidly latching on previously successful policies. However, previous work on episodic reinforcement learning neglects the relationship between states and only stored the experiences as unrelated items. To improve sample efficiency of reinforcement learning, we propose a novel framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories to enable reasoning effective strategies. We build a graph on top of states in memory based on state transitions and develop a reverse-trajectory propagation strategy to allow rapid value propagation through the graph. We use the non-parametric associative memory as early guidance for a parametric reinforcement learning model. Results on navigation domain and Atari games show our framework achieves significantly higher sample efficiency than state-of-the-art episodic reinforcement learning models.""","""The submission tackles the problem of data efficiency in RL by building a graph on top of the replay memory and propagate values based on this representation of states and transitions. The method is evaluated on Atari games and is shown to outperform other episodic RL methods. The reviews were mixed initially but have been brought up by the revisions to the paper and the authors' rebuttal. In particular, there was a concern about theoretical support and the authors added a proof of convergence. They have also added additional experiments and explanations. Given the positive reviews and discussion, the recommendation is to accept this paper."""
912,"""Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation""","['binary softmax', 'discrete variables', 'policy gradient', 'pseudo actions', 'reinforcement learning', 'variance reduction']","""Sequence generation models are commonly refined with reinforcement learning over user-defined metrics. However, high gradient variance hinders the practical use of this method. To stabilize this method, we adapt to contextual generation of categorical sequences a policy gradient estimator, which evaluates a set of correlated Monte Carlo (MC) rollouts for variance control. Due to the correlation, the number of unique rollouts is random and adaptive to model uncertainty; those rollouts naturally become baselines for each other, and hence are combined to effectively reduce gradient variance. We also demonstrate the use of correlated MC rollouts for binary-tree softmax models, which reduce the high generation cost in large vocabulary scenarios by decomposing each categorical action into a sequence of binary actions. We evaluate our methods on both neural program synthesis and image captioning. The proposed methods yield lower gradient variance and consistent improvement over related baselines. ""","""The paper presents a novel reinforcement learning-based algorithm for contextual sequence generation. Specifically, the paper presents experimental results on the application of the gradient ARSM estimator of Yin et al. (2019) to challenging structured prediction problems (neural program synthesis and image captioning). The method consists in performing correlated Monte Carlo rollouts starting from each token in the generated sequence, and using the multiple rollouts to reduce gradient variance. Numerical experiments are presented with promising performance. Reviewers were in agreement that this is a non-trivial extension of previous work with broad potential application. Some concerns about better framing of contributions were mostly resolved during the author rebuttal phase. Therefore, the AC recommends publication. """
913,"""Lagrangian Fluid Simulation with Continuous Convolutions""","['particle-based physics', 'fluid mechanics', 'continuous convolutions', 'material estimation']","""We present an approach to Lagrangian fluid simulation with a new type of convolutional network. Our networks process sets of moving particles, which describe fluids in space and time. Unlike previous approaches, we do not build an explicit graph structure to connect the particles but use spatial convolutions as the main differentiable operation that relates particles to their neighbors. To this end we present a simple, novel, and effective extension of N-D convolutions to the continuous domain. We show that our network architecture can simulate different materials, generalizes to arbitrary collision geometries, and can be used for inverse problems. In addition, we demonstrate that our continuous convolutions outperform prior formulations in terms of accuracy and speed. ""","""The paper proposes an approach for N-D continuous convolution on unordered particle set and applies it to Lagrangian fluid simulation. All reviewers found the paper to be a novel and useful contribution towards the problem of N-D continuous convolution on unordered particles. I recommend acceptance. """
914,"""Multi-Agent Interactions Modeling with Correlated Policies""","['Multi-agent reinforcement learning', 'Imitation learning']","""In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents policies, which can recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at \url{pseudo-url}.""","""The paper proposes an extension to the popular Generative Adversarial Imitation Learning framework that considers multi-agent settings with ""correlated policies"", i.e., where agents' actions influence each other. The proposed approach learns opponent models to consider possible opponent actions during learning. Several questions were raised during the review phase, including clarifying questions about key components of the proposed approach and theoretical contributions, as well as concerns about related work. These were addressed by the authors and the reviewers are satisfied that the resulting paper provides a valuable contribution. I encourage the authors to continue to use the reviewers' feedback to improve the clarity of their manuscript in time for the camera ready submission."""
915,"""Leveraging inductive bias of neural networks for learning without explicit human annotations""","['dataset construction', 'deep learning', 'candidate examples']","""Classification problems today are typically solved by first collecting examples along with candidate labels, second obtaining clean labels from workers, and third training a large, overparameterized deep neural network on the clean examples. The second, labeling step is often the most expensive one as it requires manually going through all examples. In this paper we skip the labeling step entirely and propose to directly train the deep neural network on the noisy raw labels and early stop the training to avoid overfitting. With this procedure we exploit an intriguing property of large overparameterized neural networks: While they are capable of perfectly fitting the noisy data, gradient descent fits clean labels much faster than the noisy ones, thus early stopping resembles training on the clean labels. Our results show that early stopping the training of standard deep networks such as ResNet-18 on part of the Tiny Images dataset, which does not involve any human labeled data, and of which only about half of the labels are correct, gives a significantly higher test performance than when trained on the clean CIFAR-10 training dataset, which is a labeled version of the Tiny Images dataset, for the same classification problem. In addition, our results show that the noise generated through the label collection process is not nearly as adversarial for learning as the noise generated by randomly flipping labels, which is the noise most prevalent in works demonstrating noise robustness of neural networks.""","""The authors present an approach to learning from noisy labels. The reviews were mixed and several issues remain unresolved. I do not accept the following as a valid response: ""We fully agree that noisily collected labels are common for many problems other than image classification. However, the focus of our paper is image classification, and we thus concentrate on classification problems related to the widely popular CIFAR-10 and ImageNet classification problems."" ICLR is a conference on theoretical and applied ML, and the fact that a technique has not been used for image classification before, does not mean you bring something to the table by doing so. The NLP literature is abundant with interesting work on label noise and should obviously be considered related work. That said, there's also missing references directly related to the connection between early stopping/regularization and label bias correction, including: [0]pseudo-url [1]pseudo-url [2] pseudo-url See also this paper submitted to this conference: pseudo-url"""
916,"""AtomNAS: Fine-Grained End-to-End Neural Architecture Search""","['Neural Architecture Search', 'Image Classification']","""Search space design is very critical to neural architecture search (NAS) algorithms. We propose a fine-grained search space comprised of atomic blocks, a minimal search unit that is much smaller than the ones used in recent NAS algorithms. This search space allows a mix of operations by composing different types of atomic blocks, while the search space in previous methods only allows homogeneous operations. Based on this search space, we propose a resource-aware architecture search framework which automatically assigns the computational resources (e.g., output channel numbers) for each operation by jointly considering the performance and the computational cost. In addition, to accelerate the search process, we propose a dynamic network shrinkage technique which prunes the atomic blocks with negligible influence on outputs on the fly. Instead of a search-and-retrain two-stage paradigm, our method simultaneously searches and trains the target architecture. Our method achieves state-of-the-art performance under several FLOPs configurations on ImageNet with a small searching cost. We open our entire codebase at: pseudo-url.""","""Reviewer #1 noted that he wishes to change his review to weak accept post rebuttal, but did not change his score in the system. Presuming his score is weak accept, then all reviewers are unanimous for acceptance. I have reviewed the paper and find the results appear to be clear, but the magnitude of the improvement is modest. I concur with the weak accept recommendation. """
917,"""AMRL: Aggregated Memory For Reinforcement Learning""","['deep learning', 'reinforcement learning', 'rl', 'memory', 'noise', 'machine learning']","""In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy. We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration. Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs. Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order. We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time. Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters.""","""This paper introduces a way to augment memory in recurrent neural networks with order-independent aggregators. In noisy environments this results in an increase in training speed and stability. The reviewers considered this to be a strong paper with potential for impact, and were satisfied with the author response to their questions and concerns."""
918,"""Representation Quality Explain Adversarial Attacks""","['Representation Metrics', 'Adversarial Machine Learning', 'One-Pixel Attack', 'DeepFool', 'CapsNet']","""Neural networks have been shown vulnerable to adversarial samples. Slightly perturbed input images are able to change the classification of accurate models, showing that the representation learned is not as good as previously thought. To aid the development of better neural networks, it would be important to evaluate to what extent are current neural networks' representations capturing the existing features. Here we propose a way to evaluate the representation quality of neural networks using a novel type of zero-shot test, entitled Raw Zero-Shot. The main idea lies in the fact that some features are present on unknown classes and that unknown classes can be defined as a combination of previous learned features without representation bias (a bias towards representation that maps only current set of input-outputs and their boundary). To evaluate the soft-labels of unknown classes, two metrics are proposed. One is based on clustering validation techniques (Davies-Bouldin Index) and the other is based on soft-label distance of a given correct soft-label. Experiments show that such metrics are in accordance with the robustness to adversarial attacks and might serve as a guidance to build better models as well as be used in loss functions to create new types of neural networks. Interestingly, the results suggests that dynamic routing networks such as CapsNet have better representation while current deeper DNNs are trading off representation quality for accuracy.""","""The reviewers found the aim of the paper interesting (to connect representation quality with adversarial examples). However, the reviewers consistently pointed out writing issues, such as inaccurate or unsubstantiated claims, which are not appropriate for a scientific venue. The reviewers also found the experiments, which are on simple datasets, unconvincing."""
919,"""Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model""",[],""" Recent breakthroughs of pretrained language models have shown the effectiveness of self-supervised learning for a wide range of natural language processing (NLP) tasks. In addition to standard syntactic and semantic NLP tasks, pretrained models achieve strong improvements on tasks that involve real-world knowledge, suggesting that large-scale language modeling could be an implicit method to capture knowledge. In this work, we further investigate the extent to which pretrained models such as BERT capture knowledge using a zero-shot fact completion task. Moreover, we propose a simple yet effective weakly supervised pretraining objective, which explicitly forces the model to incorporate knowledge about real-world entities. Models trained with our new objective yield significant improvements on the fact completion task. When applied to downstream tasks, our model consistently outperforms BERT on four entity-related question answering datasets (i.e., WebQuestions, TriviaQA, SearchQA and Quasar-T) with an average 2.7 F1 improvements and a standard fine-grained entity typing dataset (i.e., FIGER) with 5.7 accuracy gains.""","""This submission proposes a secondary objective when learning language models like BERT that improves the ability of such models to learn entity-centric information. This additional objective involves predicting whether an entity has been replaced. Replacement entities are mined using wikidata. Strengths: -The proposed method is simple and shows significant performance improvements for various tasks including fact completion and question answering. Weaknesses: -The experimental settings and data splits were not always clear. This was sufficiently addressed in a revised version. -The paper could have probed performance on tasks involving less common entities. The reviewer consensus was to accept this submission. """
920,"""On Bonus Based Exploration Methods In The Arcade Learning Environment""","['exploration', 'arcade learning environment', 'bonus-based methods']","""Research on exploration in reinforcement learning, as applied to Atari 2600 game-playing, has emphasized tackling difficult exploration problems such as Montezuma's Revenge (Bellemare et al., 2016). Recently, bonus-based exploration methods, which explore by augmenting the environment reward, have reached above-human average performance on such domains. In this paper we reassess popular bonus-based exploration methods within a common evaluation framework. We combine Rainbow (Hessel et al., 2018) with different exploration bonuses and evaluate its performance on Montezuma's Revenge, Bellemare et al.'s set of hard of exploration games with sparse rewards, and the whole Atari 2600 suite. We find that while exploration bonuses lead to higher score on Montezuma's Revenge they do not provide meaningful gains over the simpler epsilon-greedy scheme. In fact, we find that methods that perform best on that game often underperform epsilon-greedy on easy exploration Atari 2600 games. We find that our conclusions remain valid even when hyperparameters are tuned for these easy-exploration games. Finally, we find that none of the methods surveyed benefit from additional training samples (1 billion frames, versus Rainbow's 200 million) on Bellemare et al.'s hard exploration games. Our results suggest that recent gains in Montezuma's Revenge may be better attributed to architecture change, rather than better exploration schemes; and that the real pace of progress in exploration research for Atari 2600 games may have been obfuscated by good results on a single domain.""","""This paper presents a detailed comparison of different bonus-based exploration methods on a common evaluation framework (Rainbow) when used with the ATARI game suite. They find that while these bonuses help on Montezuma's Revenge (MR), they underperform relative to epsilon-greedy on other games. This suggests that architectural changes may be a more important factor than bonus-based exploration in recent advances on MR. The reviewers commented that this paper makes no effort to present new techniques, and the insights discovered could be expanded on. Despite this, it is an interesting paper that is generally well argued and would be a useful contribution to the field. I recommend acceptance."""
921,"""Learning to Coordinate Manipulation Skills via Skill Behavior Diversification""","['reinforcement learning', 'hierarchical reinforcement learning', 'modular framework', 'skill coordination', 'bimanual manipulation']","""When mastering a complex manipulation task, humans often decompose the task into sub-skills of their body parts, practice the sub-skills independently, and then execute the sub-skills together. Similarly, a robot with multiple end-effectors can perform complex tasks by coordinating sub-skills of each end-effector. To realize temporal and behavioral coordination of skills, we propose a modular framework that first individually trains sub-skills of each end-effector with skill behavior diversification, and then learns to coordinate end-effectors using diverse behaviors of the skills. We demonstrate that our proposed framework is able to efficiently coordinate skills to solve challenging collaborative control tasks such as picking up a long bar, placing a block inside a container while pushing the container with two robot arms, and pushing a box with two ant agents. Videos and code are available at pseudo-url""","""This paper deals with multi-agent hierarchical reinforcement learning. A discrete set of pre-specified low-level skills are modulated by a conditioning vector and trained in a fashion reminiscent of Diversity Is All You Need, and then combined via a meta-policy which coordinates multiple agents in pursuit of a goal. The idea is that fine control over primitive skills is beneficial for achieving coordinated high-level behaviour. The paper improved considerably in its completeness and in the addition of baselines, notably DIAYN without discrete, mutually exclusive skills. Reviewers agreed that the problem is interesting and the method, despite involving a degree of hand-crafting, showed promise for informing future directions. On the basis that this work addresses an interesting problem setting with a compelling set of experiments, I recommend acceptance."""
922,"""Tranquil Clouds: Neural Networks for Learning Temporally Coherent Features in Point Clouds""","['point clouds', 'spatio-temporal representations', 'Lagrangian data', 'temporal coherence', 'super-resolution', 'denoising']","""Point clouds, as a form of Lagrangian representation, allow for powerful and flexible applications in a large number of computational disciplines. We propose a novel deep-learning method to learn stable and temporally coherent feature spaces for points clouds that change over time. We identify a set of inherent problems with these approaches: without knowledge of the time dimension, the inferred solutions can exhibit strong flickering, and easy solutions to suppress this flickering can result in undesirable local minima that manifest themselves as halo structures. We propose a novel temporal loss function that takes into account higher time derivatives of the point positions, and encourages mingling, i.e., to prevent the aforementioned halos. We combine these techniques in a super-resolution method with a truncation approach to flexibly adapt the size of the generated positions. We show that our method works for large, deforming point sets from different sources to demonstrate the flexibility of our approach.""","""This paper provides an improved method for deep learning on point clouds. Reviewers are unanimous that this paper is acceptable, and the AC concurs. """
923,"""On the Linguistic Capacity of Real-time Counter Automata""","['formal language theory', 'counter automata', 'natural language processing', 'deep learning']","""While counter machines have received little attention in theoretical computer science since the 1960s, they have recently achieved a newfound relevance to the field of natural language processing (NLP). Recent work has suggested that some strong-performing recurrent neural networks utilize their memory as counters. Thus, one potential way to understand the sucess of these networks is to revisit the theory of counter computation. Therefore, we choose to study the abilities of real-time counter machines as formal grammars. We first show that several variants of the counter machine converge to express the same class of formal languages. We also prove that counter languages are closed under complement, union, intersection, and many other common set operations. Next, we show that counter machines cannot evaluate boolean expressions, even though they can weakly validate their syntax. This has implications for the interpretability and evaluation of neural network systems: successfully matching syntactic patterns does not guarantee that a counter-like model accurately represents underlying semantic structures. Finally, we consider the question of whether counter languages are semilinear. This work makes general contributions to the theory of formal languages that are of particular interest for the interpretability of recurrent neural networks.""","""This paper presents an analysis of the languages that can be accepted by a counter machine, motivated by recent work that suggests that counter machines might be a good formal model from which to approach the analysis of LSTM representations. This is one of the trickiest papers in my batch. Reviewers agree that it represents an interesting and provocative direction, and I suspect that it could yield valuable discussion at the conference. However, reviewers were not convinced that the claims made (or implied) _about LSTMs_ are motivated, given the imperfect analogy between them and counter machines. The authors promise some empirical evidence that might mitigate these concerns to some extent, but the paper has not yet been updated, so I cannot take that into account. As a very secondary point, which is only relevant because this paper is borderline, LSTMs are no longer widely used for language tasks, so discussion about the capacity of LSTMs _for language_ seems like an imperfect fit for an machine learning conference with a fairly applied bent."""
924,"""Subjective Reinforcement Learning for Open Complex Environments""","['reinforcement learning theory', 'subjective learning']","""Solving tasks in open environments has been one of the long-time pursuits of reinforcement learning researches. We propose that data confusion is the core underlying problem. Although there exist methods that implicitly alleviate it from different perspectives, we argue that their solutions are based on task-specific prior knowledge that is constrained to certain kinds of tasks and lacks theoretical guarantees. In this paper, Subjective Reinforcement Learning Framework is proposed to state the problem from a broader and systematic view, and subjective policy is proposed to represent existing related algorithms in general. Theoretical analysis is given about the conditions for the superiority of a subjective policy, and the relationship between model complexity and the overall performance. Results are further applied as guidance for algorithm designing without task-specific prior knowledge about tasks. ""","""The authors propose a learning framework to reframe non-stationary MDPs as smaller stationary MDPs, thus hopefully addressing problems with contradictory or continually changing environments. A policy is learned for each sub-MDP, and the authors present theoretical guarantees that the reframing does not inhibit agent performance. The reviewers discussed the paper and the authors' rebuttal. They were mainly concerned that the submission offered no practical implementation or demonstration of feasibility, and secondarily concerned that the paper was unclearly written and motivated. The authors' rebuttal did not resolve these issues. My recommendation is to reject the submission and encourage the authors to develop an empirical validation of their method before resubmitting."""
925,"""On the Variance of the Adaptive Learning Rate and Beyond""","['warmup', 'adam', 'adaptive learning rate', 'variance']","""The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate -- its variance is problematically large in the early stage, and presume warmup works as a variance reduction technique. We provide both empirical and theoretical evidence to verify our hypothesis. We further propose Rectified Adam (RAdam), a novel variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the efficacy and robustness of RAdam. ""","""The paper considers an important topic of the warmup in deep learning, and investigates the problem of the adaptive learning rate. While the paper is somewhat borderline, the reviewers agree that it might be useful to present it to the ICLR community."""
926,"""Dynamic Time Lag Regression: Predicting What & When""","['Dynamic Time-Lag Regression', 'Time Delay', 'Regression', 'Time Series']","""This paper tackles a new regression problem, called Dynamic Time-Lag Regression (DTLR), where a cause signal drives an effect signal with an unknown time delay. The motivating application, pertaining to space weather modelling, aims to predict the near-Earth solar wind speed based on estimates of the Sun's coronal magnetic field. DTLR differs from mainstream regression and from sequence-to-sequence learning in two respects: firstly, no ground truth (e.g., pairs of associated sub-sequences) is available; secondly, the cause signal contains much information irrelevant to the effect signal (the solar magnetic field governs the solar wind propagation in the heliosphere, of which the Earth's magnetosphere is but a minuscule region). A Bayesian approach is presented to tackle the specifics of the DTLR problem, with theoretical justifications based on linear stability analysis. A proof of concept on synthetic problems is presented. Finally, the empirical results on the solar wind modelling task improve on the state of the art in solar wind forecasting.""","""The paper proposes a Bayesian approach for time-series regression when the explanatory time-series influences the response time-series with a time lag. The time lag is unknown and allowed to be non-stationary process. Reviewers have appreciated the significance of the problem and novelty of the proposed method, and also highlighted the importance of the application domain considered by the paper. """
927,"""The intriguing role of module criticality in the generalization of deep networks""","['Module Criticality Phenomenon', 'Complexity Measure', 'Deep Learning']","""We study the phenomenon that some modules of deep neural networks (DNNs) are more critical than others. Meaning that rewinding their parameter values back to initialization, while keeping other modules fixed at the trained parameters, results in a large drop in the network's performance. Our analysis reveals interesting properties of the loss landscape which leads us to propose a complexity measure, called module criticality, based on the shape of the valleys that connect the initial and final values of the module parameters. We formulate how generalization relates to the module criticality, and show that this measure is able to explain the superior generalization performance of some architectures over others, whereas, earlier measures fail to do so.""","""The paper analyses the importance of different DNN modules for generalization performance, explaining why certain architectures may be much better performing than others. All reviewers agree that this is an interesting paper with a novel and important contribution. """
928,"""Semi-Implicit Back Propagation""","['Optimization', 'Neural Network', 'Proximal mapping', 'Back propagation', 'Implicit']","""Neural network has attracted great attention for a long time and many researchers are devoted to improve the effectiveness of neural network training algorithms. Though stochastic gradient descent (SGD) and other explicit gradient-based methods are widely adopted, there are still many challenges such as gradient vanishing and small step sizes, which leads to slow convergence and instability of SGD algorithms. Motivated by error back propagation (BP) and proximal methods, we propose a semi-implicit back propagation method for neural network training. Similar to BP, the difference on the neurons are propagated in a backward fashion and the parameters are updated with proximal mapping. The implicit update for both hidden neurons and parameters allows to choose large step size in the training algorithm. Finally, we also show that any fixed point of convergent sequences produced by this algorithm is a stationary point of the objective loss function. The experiments on both MNIST and CIFAR-10 demonstrate that the proposed semi-implicit BP algorithm leads to better performance in terms of both loss decreasing and training/validation accuracy, compared to SGD and a similar algorithm ProxBP.""","""The reviewers equivocally reject the paper, which is mostly experimental and the results of which are limited. The authors do not react to the reviewers' comments."""
929,"""Learning to Remember from a Multi-Task Teacher""","['Meta-learning', 'sequential learning', 'catastrophic forgetting']","""Recent studies on catastrophic forgetting during sequential learning typically focus on fixing the accuracy of the predictions for a previously learned task. In this paper we argue that the outputs of neural networks are subject to rapid changes when learning a new data distribution, and networks that appear to ""forget"" everything still contain useful representation towards previous tasks. We thus propose to enforce the output accuracy to stay the same, we should aim to reduce the effect of catastrophic forgetting on the representation level, as the output layer can be quickly recovered later with a small number of examples. Towards this goal, we propose an experimental setup that measures the amount of representational forgetting, and develop a novel meta-learning algorithm to overcome this issue. The proposed meta-learner produces weight updates of a sequential learning network, mimicking a multi-task teacher network's representation. We show that our meta-learner can improve its learned representations on new tasks, while maintaining a good representation for old tasks.""","""The paper addresses the setting of continual learning. Instead of focusing on catastrophic forgetting measured in terms of the output performance of the previous tasks, the authors tackle forgetting that happens at the level of the feature representation via a meta-learning approach. As rightly acknowledged by R2, from a meta-learning perspective the work is quite interesting and demonstrates a number of promising results. However the reviewers have raised several important concerns that placed this work below the acceptance bar: (1) the current manuscript lacks convincing empirical evaluations that clearly show the benefits of the proposed approach over SOTA continual learning methods; specifically the generalization of the proposed strategy to more than two sequential tasks is essential; also see R1s detailed suggestions that would strengthen the contributions of this approach in light of continual learning; (2) training a meta-learner to predict the weight updates with supervision from a multi-task teacher network as an oracle, albeit nicely motivated, is unrealistic in the continual learning setting -- see R1s detailed comments on this issue. (3) R2 and R3 expressed concerns regarding i) stronger baselines that are tuned to take advantage of the meta-learning data and ii) transferability to the different new tasks, i.e. dissimilarity of the meta-train and meta-test settings. Pleased to report that the authors showed and discussed in their response some initial qualitative results regarding these issues. An analysis on the performance of the proposed method when the meta-training and testing datasets are made progressively dissimilar would strengthen the evaluation the proposed meta-learning approach. There is a reviewer disagreement on this paper. AC can confirm that all three reviewers have read the rebuttal and have contributed to a long discussion. Among the aforementioned concerns, (3) did not have a decisive impact on the decision, but would be helpful to address in a subsequent revision. However, (1) and (2) make it very difficult to assess the benefits of the proposed approach, and were viewed by AC as critical issues. 
AC suggests that, in its current state, the manuscript is not ready for publication and needs a major revision before being submitted for another round of reviews. We hope the reviews are useful for improving and revising the paper. """
930,"""Learning RNNs with Commutative State Transitions""",[],"""Many machine learning tasks involve analysis of set valued inputs, and thus the learned functions are expected to be permutation invariant. Recent works (e.g., Deep Sets) have sought to characterize the neural architectures which result in permutation invariance. These typically correspond to applying the same pointwise function to all set components, followed by sum aggregation. Here we take a different approach to such architectures and focus on recursive architectures such as RNNs, which are not permutation invariant in general, but can implement permutation invariant functions in a very compact manner. We first show that commutativity and associativity of the state transition function result in permutation invariance. Next, we derive a regularizer that minimizes the degree of non-commutativity in the transitions. Finally, we demonstrate that the resulting method outperforms other methods for learning permutation invariant models, due to its use of recursive computation.""","""This paper examines learning problems where the network outputs are intended to be invariant to permutations of the network inputs. Some past approaches for this problem setting have enforced permutation-invariance by construction. This paper takes a different approach, using a recurrent neural network that passes over the data. The paper proves the network will be permutation invariant when the internal state transition function is associative and commutative. The paper then focuses on the commutative property by describing a regularization objective that pushes the recurrent network towards becoming commutative. Experimental results with this regularizer show potentially better performance than DeepSet, another architecture that is designed for permutation invariance. The subsequent discussion of the paper raised several concerns with the current version of the paper. The theoretical contributions for full permutation-invariance follow quickly from the prior DeepSet results. The paper's focus on commutative regularization in the absence of associative regularization is not compelling if the objective is really for permutation invariance. The experimental results were limited in scope. These results lacked error bars and an examination of the relevance of associativity. The reviewers also identified several related lines of work which could provide additional context for the results that were missing from the paper. This paper is not ready for publication due to the multiple concerns raised by the reviewers. The paper would become stronger by addressing these concerns, particularly the associativity of the transition function, empirical results, and related work. """
931,"""Bandlimiting Neural Networks Against Adversarial Attacks""","['adversarial examples', 'adversarial attack defense', 'neural network', 'Fourier analysis']","""In this paper, we study the adversarial attack and defence problem in deep learning from the perspective of Fourier analysis. We first explicitly compute the Fourier transform of deep ReLU neural networks and show that there exist decaying but non-zero high frequency components in the Fourier spectrum of neural networks. We then demonstrate that the vulnerability of neural networks towards adversarial samples can be attributed to these insignificant but non-zero high frequency components. Based on this analysis, we propose to use a simple post-averaging technique to smooth out these high frequency components to improve the robustness of neural networks against adversarial attacks. Experimental results on the ImageNet and the CIFAR-10 datasets have shown that our proposed method is universally effective to defend many existing adversarial attacking methods proposed in the literature, including FGSM, PGD, DeepFool and C&W attacks. Our post-averaging method is simple since it does not require any re-training, and meanwhile it can successfully defend over 80-96% of the adversarial samples generated by these methods without introducing significant performance degradation (less than 2%) on the original clean images.""","""The reviewers recommend rejection due to various concerns about novelty and experimental validation. The authors have not provided a response."""
932,"""Evolutionary Population Curriculum for Scaling Multi-Agent Reinforcement Learning""","['multi-agent reinforcement learning', 'evolutionary learning', 'curriculum learning']","""In multi-agent games, the complexity of the environment can grow exponentially as the number of agents increases, so it is particularly challenging to learn good policies when the agent population is large. In this paper, we introduce Evolutionary Population Curriculum (EPC), a curriculum learning paradigm that scales up Multi-Agent Reinforcement Learning (MARL) by progressively increasing the population of training agents in a stage-wise manner. Furthermore, EPC uses an evolutionary approach to fix an objective misalignment issue throughout the curriculum: agents successfully trained in an early stage with a small population are not necessarily the best candidates for adapting to later stages with scaled populations. Concretely, EPC maintains multiple sets of agents in each stage, performs mix-and-match and fine-tuning over these sets and promotes the sets of agents with the best adaptability to the next stage. We implement EPC on a popular MARL algorithm, MADDPG, and empirically show that our approach consistently outperforms baselines by a large margin as the number of agents grows exponentially. The source code and videos can be found at pseudo-url.""","""The paper proposes a curriculum approach to increasing the number of agents (and hence complexity) in MARL. The reviewers mostly agreed that this is a simple and useful idea to the MARL community. There was some initial disagreement about relationships with other RL + evolution approaches, but it got resolved in the rebuttal. Another concern was the slight differences in the environments considered by the paper compared to the literature, but the authors added an experiment with the unmodified version. Given the positive assessment and the successful rebuttal, I recommend acceptance."""
933,"""On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning""","['learning theory', 'generalization', 'nonconvex learning', 'stochastic gradient descent', 'Langevin dynamics']","""Generalization error (also known as the out-of-sample error) measures how well the hypothesis learned from training data generalizes to previously unseen data. Proving tight generalization error bounds is a central question in statistical learning theory. In this paper, we obtain generalization error bounds for learning general non-convex objectives, which has attracted significant attention in recent years. We develop a new framework, termed Bayes-Stability, for proving algorithm-dependent generalization error bounds. The new framework combines ideas from both the PAC-Bayesian theory and the notion of algorithmic stability. Applying the Bayes-Stability method, we obtain new data-dependent generalization bounds for stochastic gradient Langevin dynamics (SGLD) and several other noisy gradient methods (e.g., with momentum, mini-batch and acceleration, Entropy-SGD). Our result recovers (and is typically tighter than) a recent result in Mou et al. (2018) and improves upon the results in Pensia et al. (2018). Our experiments demonstrate that our data-dependent bounds can distinguish randomly labelled data from normal data, which provides an explanation to the intriguing phenomena observed in Zhang et al. (2017a). We also study the setting where the total loss is the sum of a bounded loss and an additiona l`2 regularization term. We obtain new generalization bounds for the continuous Langevin dynamic in this setting by developing a new Log-Sobolev inequality for the parameter distribution at any time. Our new bounds are more desirable when the noise level of the processis not very small, and do not become vacuous even when T tends to infinity.""","""The authors provide bounds on the expected generalization error for noisy gradient methods (such as SGLD). They do so using the information theoretic framework initiated by Russo and Zou, where the expected generalization error is controlled by the mutual information between the weights and the training data. The work builds on the approach pioneered by Pensia, Jog, and Loh, who proposed to bound the mutual information for noisy gradient methods in a step wise fashion. The main innovation of this work is that they do not implicitly condition on the minibatch sequence when bounding the mutual information. Instead, this uncertainty manifests as a mixture of gaussians. Essentially they avoid the looseness implied by an application of Jensen's inequality that they have shown was unnecessary. I think this is an interesting contribution and worth publishing. It contributes to a rapidly progressing literature on generalization bounds for SGLD that are becoming increasingly tight. I have one strong request that I will make of the authors, and I'll be quite disappointed if it is not executed faithfully. 1. The stepsize constraint and its violation in the experimental work is currently buried in the appendix. This fact must be brought into the main paper and made transparent to readers, otherwise it will pervert empirical comparisons and mask progress. 2. In fact, I would like the authors to re-run their experiments in a way that guarantees that the bounds are applicable. 
One approach is outlined by the authors: the Lipschitz constant can be replaced by a max_i bound on the running squared gradient norms, and then gradient clipping can be used to guarantee that the step-size constraint is met. The authors might compare step sizes, allowing them to use less severe gradient clipping. The point of this exercise is to verify that the learning dynamics don't change when the bound conditions are met. If they change, it may upset the empirical phenomena they are trying to study. If this change does upset the empirical findings, then the authors should present both, and clearly explain that the bound is not strictly speaking known to be valid in one of the cases. It will be a good open problem. """
934,"""Dual Graph Representation Learning""",[],"""Graph representation learning embeds nodes in large graphs as low-dimensional vectors and benefit to many downstream applications. Most embedding frameworks, however, are inherently transductive and unable to generalize to unseen nodes or learn representations across different graphs. Inductive approaches, such as GraphSAGE, neglect different contexts of nodes and cannot learn node embeddings dually. In this paper, we present an unsupervised dual encoding framework, \textbf{CADE}, to generate context-aware representation of nodes by combining real-time neighborhood structure with neighbor-attentioned representation, and preserving extra memory of known nodes. Experimently, we exhibit that our approach is effective by comparing to state-of-the-art methods.""","""This work proposes context-aware representation of graph nodes leveraging attention over neighbors (as already done in previous work). Reviewers concerns about lack of novelty, lack of clarity of paper and lack of comparison to state of the art methods have not been addressed at all. We recommend rejection."""
935,"""Recurrent Hierarchical Topic-Guided Neural Language Models""","['Bayesian deep learning', 'recurrent gamma belief net', 'larger-context language model', 'variational inference', 'sentence generation', 'paragraph generation']","""To simultaneously capture syntax and semantics from a text corpus, we propose a new larger-context language model that extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation. Moving beyond a conventional language model that ignores long-range word dependencies and sentence order, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence topic dependences. For inference, we develop a hybrid of stochastic-gradient MCMC and recurrent autoencoding variational Bayes. Experimental results on a variety of real-world text corpora demonstrate that the proposed model not only outperforms state-of-the-art larger-context language models, but also learns interpretable recurrent multilayer topics and generates diverse sentences and paragraphs that are syntactically correct and semantically coherent.""","""This paper was a very difficult case. All three original reviewers of the paper had never published in the area, and all of them advocated for acceptance of the paper. I, on the other hand, am an expert in the area who has published many papers, and I thought that while the paper is well-written and experimental evaluation is not incorrect, the method was perhaps less relevant given current state-of-the-art models. In addition, the somewhat non-standard evaluation was perhaps causing this fact to be masked. I asked the original reviewers to consider my comments multiple times both during the rebuttal period and after, and unfortunately none of them replied. Because of this, I elicited two additional reviews from people I knew were experts in the field. The reviews are below. I sent the PDF to the reviewers directly, and asked them to not look at the existing reviews (or my comments) when doing their review in order to make sure that they were making a fair assessment. Long story short, Reviewer 4 essentially agreed with my concerns and pointed out a few additional clarity issues. Reviewer 5 pointed out a number of clarity issues and was also concerned with the fact that d_j has access to all other sentences (including those following the current sentence). I know that at the end of Section 2 it is noted that at test time d_j only refers to previous sentences, but if so there is also a training-testing disconnect in model training, and it seems that this would hurt the model results. Based on this, I have decided to favor the opinions of three experts (me and the two additional reviewers) over the opinions of the original three reviewers, and not recommend the paper for acceptance at this time. In order to improve the paper I would suggest the following (1) an acknowledgement of standard methods to incorporate context by processing sequences consisting of multiple sentences simultaneously, (2) a more thorough comparison with state-of-the-art models that consider cross-sentential context on standard datasets such as WikiText or PTB. I would encourage the authors to consider this as they revise their paper. Finally, I would like to apologize to the authors that they did not get a chance to reply to the second set of reviews. As I noted above, I did try to make my best effort to encourage discussion during the rebuttal period."""
936,"""Monte Carlo Deep Neural Network Arithmetic""","['deep learning', 'quantization', 'floating point', 'monte carlo methods']","""Quantization is a crucial technique for achieving low-power, low latency and high throughput hardware implementations of Deep Neural Networks. Quantized floating point representations have received recent interest due to their hardware efficiency benefits and ability to represent a higher dynamic range than fixed point representations, leading to improvements in accuracy. We present a novel technique, Monte Carlo Deep Neural Network Arithmetic (MCA), for determining the sensitivity of Deep Neural Networks to quantization in floating point arithmetic.We do this by applying Monte Carlo Arithmetic to the inference computation and analyzing the relative standard deviation of the neural network loss. The method makes no assumptions regarding the underlying parameter distributions. We evaluate our method on pre-trained image classification models on the CIFAR10 andImageNet datasets. For the same network topology and dataset, we demonstrate the ability to gain the equivalent of bits of precision by simply choosing weight parameter sets which demonstrate a lower loss of significance from the Monte Carlo trials. Additionally, we can apply MCA to compare the sensitivity of different network topologies to quantization effects.""","""The paper studies the impact of rounding errors on deep neural networks. The authors apply Monte Carlos arithmetics to standard DNN operations. Their results indeed show catastrophic cancellation in DNNs and that the resulting loss of significance in the number representation correlates with decrease in validation performance, indicating that DNN performances are sensitive to rounding errors. Although recognizing that the paper addresses an important problem (quantized / finite precision neural networks), the reviewers point out the contribution of the paper is somewhat incremental. During the rebuttal, the authors made an effort to improve the manuscript based on reviewer suggestions, however review scores were not increased. The paper is slightly below acceptance threshold, based on reviews and my own reading, as the method is mostly restricted to diagnostics and cannot yet be used to help training low-precision neural networks."""
937,"""Amata: An Annealing Mechanism for Adversarial Training Acceleration""",[],"""Despite of the empirical success in various domains, it has been revealed that deep neural networks are vulnerable to maliciously perturbed input data that much degrade their performance. This is known as adversarial attacks. To counter adversarial attacks, adversarial training formulated as a form of robust optimization has been demonstrated to be effective. However, conducting adversarial training brings much computational overhead compared with standard training. In order to reduce the computational cost, we propose a simple yet effective modification to the commonly used projected gradient descent (PGD) adversarial training by increasing the number of adversarial training steps and decreasing the adversarial training step size gradually as training proceeds. We analyze the optimality of this annealing mechanism through the lens of optimal control theory, and we also prove the convergence of our proposed algorithm. Numerical experiments on standard datasets, such as MNIST and CIFAR10, show that our method can achieve similar or even better robustness with around 1/3 to 1/2 computation time compared with PGD.""","""The paper proposes a modification for adversarial training in order to improve the robustness of the algorithm by developing an annealing mechanism for PGD adversarial training. This mechanism gradually reduces the step size and increases the number of iterations of PGD maximization. One reviewer found the paper to be clear and competitive with existing work, but raised concerns of novelty and significance. Another reviewer noted the significant improvements in training times but had concerns about small scale datasets. The final reviewer liked the optimal control formulation, and requested further details. The authors provided detailed answers and responses to the reviews, although some of these concerns remain. The paper has improved over the course of the review, but due to a large number of stronger papers, was not accepted at this time."""
938,"""The advantage of using Student's t-priors in variational autoencoders""","['Variational Autoencoders', 'DLVMs', 'Posterior Collapse']","""Is it optimal to use the standard Gaussian prior in variational autoencoders? With Gaussian distributions, which are not weakly informative priors, variational autoencoders struggle to reconstruct the actual data. We provide numerical evidence that encourages using Student's t-distributions as default priors in variational autoencoders, and we challenge the usual setup for the variational autoencoder structure by comparing Gaussian and Student's t-distribution priors with different forms of the covariance matrix.""","""The consensus among all reviewers was to reject this paper, and the authors did not provide a rebuttal."""
939,"""Leveraging Adversarial Examples to Obtain Robust Second-Order Representations""","['Second-order representation', 'adversarial examples', 'robustness', 'gradients']","""Deep neural networks represent data as projections on trained weights in a high dimensional manifold. This is a first-order based absolute representation that is widely used due to its interpretable nature and simple mathematical functionality. However, in the application of visual recognition, first-order representations trained on pristine images have shown a vulnerability to distortions. Visual distortions including imaging acquisition errors and challenging environmental conditions like blur, exposure, snow and frost cause incorrect classification in first-order neural nets. To eliminate vulnerabilities under such distortions, we propose representing data points by their relative positioning in a high dimensional manifold instead of their absolute positions. Such a positioning scheme is based on a data points second-order property. We obtain a data points second-order representation by creating adversarial examples to all possible decision boundaries and tracking the movement of corresponding boundaries. We compare our representation against first-order methods and show that there is an increase of more than 14% under severe distortions for ResNet-18. We test the generalizability of the proposed representation on larger networks and on 19 complex and real-world distortions from CIFAR-10-C. Furthermore, we show how our proposed representation can be used as a plug-in approach on top of any network. We also provide methodologies to scale our proposed representation to larger datasets.""","""The authors propose a method to train a neural network that is robust to visual distortions of the input image. The reviewers agree that the paper lacks justification of the proposed method and experimental evidence of its performance."""
940,"""LARGE SCALE REPRESENTATION LEARNING FROM TRIPLET COMPARISONS""","['representation learning', 'triplet comparison', 'contrastive learning', 'ordinal embedding']","""In this paper, we discuss the fundamental problem of representation learning from a new perspective. It has been observed in many supervised/unsupervised DNNs that the final layer of the network often provides an informative representation for many tasks, even though the network has been trained to perform a particular task. The common ingredient in all previous studies is a low-level feature representation for items, for example, RGB values of images in the image context. In the present work, we assume that no meaningful representation of the items is given. Instead, we are provided with the answers to some triplet comparisons of the following form: Is item A more similar to item B or item C? We provide a fast algorithm based on DNNs that constructs a Euclidean representation for the items, using solely the answers to the above-mentioned triplet comparisons. This problem has been studied in a sub-community of machine learning by the name ""Ordinal Embedding"". Previous approaches to the problem are painfully slow and cannot scale to larger datasets. We demonstrate that our proposed approach is significantly faster than available methods, and can scale to real-world large datasets. Thereby, we also draw attention to the less explored idea of using neural networks to directly, approximately solve non-convex, NP-hard optimization problems that arise naturally in unsupervised learning problems.""","""The authors demonstrate how neural networks can be used to learn vectorial representations of a set of items given only triplet comparisons among those items. The reviewers had some concerns regarding the scale of the experiments and strength of the conclusions: empirically, it seemed like there should be more truly large-scale experiments considering that this is a selling point; there should have been more analysis and/or discussion of why/how the neural networks help; and the claim that deep networks are approximately solving an NP-hard problem seemed unimportant as they are routinely used for this purpose in ML problems. With a combination of improved experiments and revised discussion/analysis, I believe a revised version of this paper could make a good submission to a future conference."""
941,"""Lattice Representation Learning""","['lattices', 'representation learning', 'coding theory', 'lossy source coding', 'information theory']","""We introduce the notion of \emph{lattice representation learning}, in which the representation for some object of interest (e.g. a sentence or an image) is a lattice point in an Euclidean space. Our main contribution is a result for replacing an objective function which employs lattice quantization with an objective function in which quantization is absent, thus allowing optimization techniques based on gradient descent to apply; we call the resulting algorithms \emph{dithered stochastic gradient descent} algorithms as they are designed explicitly to allow for an optimization procedure where only local information is employed. We also argue that a technique commonly used in Variational Auto-Encoders (Gaussian priors and Gaussian approximate posteriors) is tightly connected with the idea of lattice representations, as the quantization error in good high dimensional lattices can be modeled as a Gaussian distribution. We use a traditional encoder/decoder architecture to explore the idea of latticed valued representations, and provide experimental evidence of the potential of using lattice representations by modifying the \texttt{OpenNMT-py} generic \texttt{seq2seq} architecture so that it can implement not only Gaussian dithering of representations, but also the well known straight-through estimator and its application to vector quantization. ""","""This paper presents a new view of latent variable learning as learning lattice representations. Overall, the reviewers thought the underlying ideas were interesting, but both the description and the experimentation in the paper were not quite sufficient at this time. I'd encourage the authors to continue on this path and take into account the extensive review feedback in improving the paper!"""
942,"""LAVAE: Disentangling Location and Appearance""","['structured scene representations', 'compositional representations', 'generative models', 'unsupervised learning']","""We propose a probabilistic generative model for unsupervised learning of structured, interpretable, object-based representations of visual scenes. We use amortized variational inference to train the generative model end-to-end. The learned representations of object location and appearance are fully disentangled, and objects are represented independently of each other in the latent space. Unlike previous approaches that disentangle location and appearance, ours generalizes seamlessly to scenes with many more objects than encountered in the training regime. We evaluate the proposed model on multi-MNIST and multi-dSprites data sets.""","""This paper presents a VAE approach where the model learns representation while disentangling the location and appearance information. The reviewers found issues with the experimental evaluation of the paper, and have given many useful feedback. None of the reviewers were willing to change their score during the discussion period. with the current score, the paper does not make the cut for ICLR, and I recommend to reject this paper. """
943,"""Trajectory growth through random deep ReLU networks""","['Deep networks', 'expressivity', 'trajectory growth', 'sparse neural networks']","""This paper considers the growth in the length of one-dimensional trajectories as they are passed through deep ReLU neural networks, which, among other things, is one measure of the expressivity of deep networks. We generalise existing results, providing an alternative, simpler method for lower bounding expected trajectory growth through random networks, for a more general class of weights distributions, including sparsely connected networks. We illustrate this approach by deriving bounds for sparse-Gaussian, sparse-uniform, and sparse-discrete-valued random nets. We prove that trajectory growth can remain exponential in depth with these new distributions, including their sparse variants, with the sparsity parameter appearing in the base of the exponent.""","""This article studies the length of one-dimensional trajectories as they are mapped through the layers of a ReLU network, simplifying proof methods and generalising previous results on networks with random weights to cover different classes of weight distributions including sparse ones. It is observed that the behaviour is similar for different distributions, suggesting a type of universality. The reviewers found that the paper is well written and appreciated the clear description of the places where the proofs deviate from previous works. However, they found that the results, although adding interesting observations in the sparse setting, are qualitatively very close to previous works and possibly not substantial enough for publication in ICLR. The revision includes some experiments with trained networks and updates the title to better reflect the contribution. However, the reviewers did not find this convincing enough. The article would benefit from a deeper theory clarifying the observations that have been made so far, and more extensive experiments connecting to practice. """
944,"""Hyperparameter Tuning and Implicit Regularization in Minibatch SGD""","['SGD', 'momentum', 'batch size', 'learning rate', 'noise', 'temperature', 'implicit regularization', 'optimization', 'generalization']","""This paper makes two contributions towards understanding how the hyperparameters of stochastic gradient descent affect the final training loss and test accuracy of neural networks. First, we argue that stochastic gradient descent exhibits two regimes with different behaviours; a noise dominated regime which typically arises for small or moderate batch sizes, and a curvature dominated regime which typically arises when the batch size is large. In the noise dominated regime, the optimal learning rate increases as the batch size rises, and the training loss and test accuracy are independent of batch size under a constant epoch budget. In the curvature dominated regime, the optimal learning rate is independent of batch size, and the training loss and test accuracy degrade as the batch size rises. We support these claims with experiments on a range of architectures including ResNets, LSTMs and autoencoders. We always perform a grid search over learning rates at all batch sizes. Second, we demonstrate that small or moderately large batch sizes continue to outperform very large batches on the test set, even when both models are trained for the same number of steps and reach similar training losses. Furthermore, when training Wide-ResNets on CIFAR-10 with a constant batch size of 64, the optimal learning rate to maximize the test accuracy only decays by a factor of 2 when the epoch budget is increased by a factor of 128, while the optimal learning rate to minimize the training loss decays by a factor of 16. These results confirm that the noise in stochastic gradients can introduce beneficial implicit regularization.""","""Authors provide an empirical evaluation of batch size and learning rate selection and its effect on training and generalization performance. As the authors and reviewers note, this is an active area of research with many closely related results to the contributions of this paper already existing in the literature. In light of this work, reviewers felt that this paper did not clearly place itself in the appropriate context to make its contributions clear. Following the rebuttal, reviewers minds remained unchanged. """
945,"""Feature Map Transform Coding for Energy-Efficient CNN Inference""","['compression', 'efficient inference', 'quantization', 'memory bandwidth', 'entropy']",""" Convolutional neural networks (CNNs) achieve state-of-the-art accuracy in a variety of tasks in computer vision and beyond. One of the major obstacles hindering the ubiquitous use of CNNs for inference on low-power edge devices is their high computational complexity and memory bandwidth requirements. The latter often dominates the energy footprint on modern hardware. In this paper, we introduce a lossy transform coding approach, inspired by image and video compression, designed to reduce the memory bandwidth due to the storage of intermediate activation calculation results. Our method does not require fine-tuning the network weights and halves the data transfer volumes to the main memory by compressing feature maps, which are highly correlated, with variable length coding. Our method outperform previous approach in term of the number of bits per value with minor accuracy degradation on ResNet-34 and MobileNetV2. We analyze the performance of our approach on a variety of CNN architectures and demonstrate that FPGA implementation of ResNet-18 with our approach results in a reduction of around 40% in the memory energy footprint, compared to quantized network, with negligible impact on accuracy. When allowing accuracy degradation of up to 2%, the reduction of 60% is achieved. A reference implementation}accompanies the paper.""","""The paper proposed the use of a lossy transform coding approach to to reduce the memory bandwidth brought by the storage of intermediate activations. It has shown the proposed method can bring good memory usage while maintaining the the accuracy. The main concern on this paper is the limited novelty. The lossy transform coding is borrowed from other domains and only the use of it on CNN intermediate activation is new, which seems insufficient. """
946,"""In Search for a SAT-friendly Binarized Neural Network Architecture""","['verification', 'Boolean satisfiability', 'Binarized Neural Networks']","""Analyzing the behavior of neural networks is one of the most pressing challenges in deep learning. Binarized Neural Networks are an important class of networks that allow equivalent representation in Boolean logic and can be analyzed formally with logic-based reasoning tools like SAT solvers. Such tools can be used to answer existential and probabilistic queries about the network, perform explanation generation, etc. However, the main bottleneck for all methods is their ability to reason about large BNNs efficiently. In this work, we analyze architectural design choices of BNNs and discuss how they affect the performance of logic-based reasoners. We propose changes to the BNN architecture and the training procedure to get a simpler network for SAT solvers without sacrificing accuracy on the primary task. Our experimental results demonstrate that our approach scales to larger deep neural networks compared to existing work for existential and probabilistic queries, leading to significant speed ups on all tested datasets. ""","""This paper studies how the architecture and training procedure of binarized neural networks can be changed in order to make it easier for SAT solvers to verify certain properties of them. All of the reviewers were positive about the paper, and their questions were addressed to their satisfaction, so all reviewers are in favor of accepting the paper. I therefore recommend acceptance."""
947,"""Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning""","['Deep reinforcement learning', 'Generalization in visual domains']","""Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (yet semantically similar to trained agents), particularly when they are trained on high-dimensional state spaces, such as images. In this paper, we propose a simple technique to improve a generalization ability of deep RL agents by introducing a randomized (convolutional) neural network that randomly perturbs input observations. It enables trained agents to adapt to new domains by learning robust features invariant across varied and randomized environments. Furthermore, we consider an inference method based on the Monte Carlo approximation to reduce the variance induced by this randomization. We demonstrate the superiority of our method across 2D CoinRun, 3D DeepMind Lab exploration and 3D robotics control tasks: it significantly outperforms various regularization and data augmentation methods for the same purpose.""","""This submission proposes an RL method for learning policies that generalize better in novel visual environments. The authors propose to introduce some noise in the feature space rather than in the input space as is typically done for visual inputs. They also propose an alignment loss term to enforce invariance to the random perturbation. Reviewers agreed that the experimental results were extensive and that the proposed method is novel and works well. One reviewer felt that the experiments didnt sufficiently demonstrate invariance to additional potential domain shifts. AC believes that additional experiments to probe this would indeed be interesting but that the demonstrated improvements when compared to existing image perturbation methods and existing regularization methods is sufficient experimental justification of the usefulness of the approach. Two reviewers felt that the method should be more extensively compared to data augmentation methods for computer vision tasks. AC believes that the proposed method is not only a data augmentation method given that the added loss tries to enforce representation invariance to perturbations as well. As such comparisons to feature adaptation techniques to tackle domain shift would be appropriate but it is reasonable to consider this line of comparison beyond the scope of this particular work. Ac agrees with the majority opinion that the submission should be accepted."""
948,"""Translation Between Waves, wave2wave""","['sequence to sequence model', 'signal to signal', 'deep learning', 'RNN', 'encoder-decoder model']","""The understanding of sensor data has been greatly improved by advanced deep learning methods with big data. However, available sensor data in the real world are still limited, which is called the opportunistic sensor problem. This paper proposes a new variant of neural machine translation seq2seq to deal with continuous signal waves by introducing the window-based (inverse-) representation to adaptively represent partial shapes of waves and the iterative back-translation model for high-dimensional data. Experimental results are shown for two real-life data: earthquake and activity translation. The performance improvements of one-dimensional data was about 46 % in test loss and that of high-dimensional data was about 1625 % in perplexity with regard to the original seq2seq. ""","""The paper considers the task of sequence to sequence modelling with multivariate, real-valued time series. The authors propose an encoder-decoder based architecture that operates on fixed windows of the original signals. The reviewers unanimously criticise the lack of novelty in this paper and the lack of comparison to existing baselines. While Rev #1 positively highlights human evaluation contained in the experiments, they nevertheless do not think this paper is good enough for publication as is. The authors did not submit a rebuttal. I therefore recommend to reject the paper."""
949,"""Maximum Likelihood Constraint Inference for Inverse Reinforcement Learning""","['learning from demonstration', 'inverse reinforcement learning', 'constraint inference']","""While most approaches to the problem of Inverse Reinforcement Learning (IRL) focus on estimating a reward function that best explains an expert agents policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly represented by a simple reward combined with a set of hard constraints. In this setting, the agent is attempting to maximize cumulative rewards subject to these given constraints on their behavior. We reformulate the problem of IRL on Markov Decision Processes (MDPs) such that, given a nominal model of the environment and a nominal reward function, we seek to estimate state, action, and feature constraints in the environment that motivate an agents behavior. Our approach is based on the Maximum Entropy IRL framework, which allows us to reason about the likelihood of an expert agents demonstrations given our knowledge of an MDP. Using our method, we can infer which constraints can be added to the MDP to most increase the likelihood of observing these demonstrations. We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle.""","""The paper introduces a novel way of doing IRL based on learning constraints. The topic of IRL is an important one in RL and the approach introduced is interesting and forms a fundamental contribution that could lead to relevant follow-up work."""
950,"""Count-guided Weakly Supervised Localization Based on Density Map""","['Semi-supervised Learning', 'Weakly Supervised Localization', 'Variational Autoencoder', 'Density Map', 'Counting']","""Weakly supervised localization (WSL) aims at training a model to find the positions of objects by providing it with only abstract labels. For most of the existing WSL methods, the labels are the class of the main object in an image. In this paper, we generalize WSL to counting machines that apply convolutional neural networks (CNN) and density maps for counting. We show that given only ground-truth count numbers, the density map as a hidden layer can be trained for localizing objects and detecting features. Convolution and pooling are the two major building blocks of CNNs. This paper discusses their impacts on an end-to-end WSL network. The learned features in a density map present in the form of dots. In order to make these features interpretable for human beings, this paper proposes a Gini impurity penalty to regularize the density map. Furthermore, it will be shown that this regularization is similar to the variational term of the pseudo-formula -variational autoencoder. The details of this algorithm are demonstrated through a simple bubble counting task. Finally, the proposed methods are applied to the widely used crowd counting dataset the Mall to learn discriminative features of human figures.""","""This work proposes a new regularization method for weakly supervised localization based on counting. Reviewers agree that this is an interesting topic but the experimental validation is weak (qualitative, lack of baselines), and the contribution too incremental. Therefore, we recommend rejection."""
951,"""Selective sampling for accelerating training of deep neural networks""",[],"""We present a selective sampling method designed to accelerate the training of deep neural networks. To this end, we introduce a novel measurement, the {\it minimal margin score} (MMS), which measures the minimal amount of displacement an input should take until its predicted classification is switched. For multi-class linear classification, the MMS measure is a natural generalization of the margin-based selection criterion, which was thoroughly studied in the binary classification setting. In addition, the MMS measure provides an interesting insight into the progress of the training process and can be useful for designing and monitoring new training regimes. Empirically we demonstrate a substantial acceleration when training commonly used deep neural network architectures for popular image classification tasks. The efficiency of our method is compared against the standard training procedures, and against commonly used selective sampling alternatives: Hard negative mining selection, and Entropy-based selection. Finally, we demonstrate an additional speedup when we adopt a more aggressive learning-drop regime while using the MMS selective sampling method.""","""The paper proposes a method to speed up training of deep nets by re-weighting samples based on their distance to the decision boundary. However, they paper seems hastily written and the method is not backed by sufficient experimental evidence."""
952,"""Interpretable Complex-Valued Neural Networks for Privacy Protection""","['Deep Learning', 'Privacy Protection', 'Complex-Valued Neural Networks']","""Previous studies have found that an adversary attacker can often infer unintended input information from intermediate-layer features. We study the possibility of preventing such adversarial inference, yet without too much accuracy degradation. We propose a generic method to revise the neural network to boost the challenge of inferring input attributes from features, while maintaining highly accurate outputs. In particular, the method transforms real-valued features into complex-valued ones, in which the input is hidden in a randomized phase of the transformed features. The knowledge of the phase acts like a key, with which any party can easily recover the output from the processing result, but without which the party can neither recover the output nor distinguish the original input. Preliminary experiments on various datasets and network structures have shown that our method significantly diminishes the adversary's ability in inferring about the input while largely preserves the resulting accuracy.""","""The reviewers are unanimous in their opinion that this paper offers a novel approach to secure edge learning. I concur. Reviewers mention clarity, but I find the latest paper clear enough."""
953,"""Neural Text Generation With Unlikelihood Training""","['language modeling', 'machine learning']","""Neural text generation is a key tool in natural language applications, but it is well known there are major problems at its core. In particular, standard likelihood training and decoding leads to dull and repetitive outputs. While some post-hoc fixes have been proposed, in particular top-k and nucleus sampling, they do not address the fact that the token-level probabilities predicted by the model are poor. In this paper we show that the likelihood objective itself is at fault, resulting in a model that assigns too much probability to sequences containing repeats and frequent words, unlike those from the human training distribution. We propose a new objective, unlikelihood training, which forces unlikely generations to be assigned lower probability by the model. We show that both token and sequence level unlikelihood training give less repetitive, less dull text while maintaining perplexity, giving superior generations using standard greedy or beam search. According to human evaluations, our approach with standard beam search also outperforms the currently popular decoding methods of nucleus sampling or beam blocking, thus providing a strong alternative to existing techniques.""","""This paper introduces a new objective for text generation with neural nets. The main insight is that the standard likelihood objective assigns excessive probability to sequences containing repeated and frequent words. The paper proposes an objective that penalizes these patterns. This technique yields better text generation than alternative methods according to human evaluations. The reviewers found the paper to be written clearly. They found the problem to be relevant and found the proposed solution method to be both novel and simple. The experiments were carefully designed and the results were convincing. The reviewers raised several concerns on particular details of the method. These concerns were largely addressed by the authors in their response. Overall, the reviewers did not find the weaknesses of the paper to be serious flaws. This paper should be published. The paper provides a clearly presented solution for a relevant problem, along with careful experiments. """
954,"""Amharic Text Normalization with Sequence-to-Sequence Models""","['Text Normalization', 'Sequence-to-Sequence Model', 'Encoder-Decoder']","""All areas of language and speech technology, directly or indirectly, require handling of real text. In addition to ordinary words and names, the real text contains non-standard words (NSWs), including numbers, abbreviations, dates, currency, amounts, and acronyms. Typically, one cannot find NSWs in a dictionary, nor can one find their pronunciation by an application of ordinary letter-to-sound rules. It is desirable to normalize text by replacing such non-standard words with a consistently formatted and contextually appropriate variant in several NLP applications. To address this challenge, in this paper, we model the problem as character-level sequence-to-sequence learning where we map a sequence of input characters to a sequence of output words. It consists of two neural networks, the encoder network, and the decoder network. The encoder maps the input characters to a fixed dimensional vector and the decoder generates the output words. We have achieved an accuracy of 94.8 % which is promising given the resource we use.""","""The paper proposes a text normalisation model for Amharic text. The model uses word classification, followed by a character-based GRU attentive encoder-decoder model. The paper is very short and does not present reproducible experiments. It also does not conform to the style guidelines of the conference. There has been no discussion of this paper beyond the initial reviews, all of which reject it with a score of 1. It is not ready to publish and the authors should consider a more NLP focussed venue for future research of this kind. """
955,"""SemanticAdv: Generating Adversarial Examples via Attribute-Conditional Image Editing""","['adversarial examples', 'semantic attack']","""Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples which are manipulated instances targeting to mislead DNNs to make incorrect predictions. Currently, most such adversarial examples try to guarantee subtle perturbation"" by limiting the Lp norm of the perturbation. In this paper, we aim to explore the impact of semantic manipulation on DNNs predictions by manipulating the semantic attributes of images and generate unrestricted adversarial examples"". Such semantic based perturbation is more practical compared with the Lp bounded perturbation. In particular, we propose an algorithm SemanticAdv which leverages disentangled semantic factors to generate adversarial perturbation by altering controlled semantic attributes to fool the learner towards various adversarial"" targets. We conduct extensive experiments to show that the semantic based adversarial examples can not only fool different learning tasks such as face verification and landmark detection, but also achieve high targeted attack success rate against real-world black-box services such as Azure face verification service based on transferability. To further demonstrate the applicability of SemanticAdv beyond face recognition domain, we also generate semantic perturbations on street-view images. Such adversarial examples with controlled semantic manipulation can shed light on further understanding about vulnerabilities of DNNs as well as potential defensive approaches.""","""I had a little bit of difficulty with my recommendation here, but in the end I don't feel confident in recommending this paper for acceptance, with my concerns largely boiling down to the lack of clear description of the overall motivation. Standard adversarial attacks are meant to be *imperceptible* changes that do not change the underlying semantics of the input to the human eye. In other words, the goal of the current work, generating ""semantically meaningful"" perturbations goes against the standard definition of adversarial attacks. This left me with two questions: 1. Under the definition of semantic adversarial attacks, what is to prevent someone from swapping out the current image with an entirely different image? From what I saw in the evaluation measures utilized in the paper, such a method would be judged as having performed a successful attack, and given no constraints there is nothing stopping this. 2. In what situation would such an attack method would be practically useful? Even the reviewers who reviewed the paper favorably were not able to provide answers to these questions, and I was not able to resolve this from my reading of the paper as well. I do understand that there is a challenge on this by Google. In my opinion, even this contest is somewhat ill-defined, but it also features extensive human evaluation to evaluate the validity of the perturbations, which is not featured in the experimental evaluation here. While I think this work is potentially interesting, it seems that there are too many open questions that are not resolved yet to recommend acceptance at this time, but I would encourage the authors to tighten up the argumentation/evaluation in this regard and revise the paper to be better accordingly!"""
956,"""Subgraph Attention for Node Classification and Hierarchical Graph Pooling""","['Graph Neural Network', 'Graph Attention', 'Graph Pooling', 'Node Classification', 'Graph Classification', 'Network Representation Learning']","""Graph neural networks have gained significant interest from the research community for both node classification within a graph and graph classification within a set of graphs. Attention mechanism applied on the neighborhood of a node improves the performance of graph neural networks. Typically, it helps to identify a neighbor node which plays more important role to determine the label of the node under consideration. But in real world scenarios, a particular subset of nodes together, but not the individual nodes in the subset, may be important to determine the label of a node. To address this problem, we introduce the concept of subgraph attention for graphs. To show the efficiency of this, we use subgraph attention with graph convolution for node classification. We further use subgraph attention for the entire graph classification by proposing a novel hierarchical neural graph pooling architecture. Along with attention over the subgraphs, our pooling architecture also uses attention to determine the important nodes within a level graph and attention to determine the important levels in the whole hierarchy. Competitive performance over the state-of-the-arts for both node and graph classification shows the efficiency of the algorithms proposed in this paper.""","""Initially, two reviewers gave high scores to this paper while they both admitted that they know little about this field. The other review raised significant concerns on novelty while claiming high confidence. During discussions, one of the high-scoring reviewers lowered his/her score. Thus a reject is recommended."""
957,"""Adapt-to-Learn: Policy Transfer in Reinforcement Learning""","['Transfer Learning', 'Reinforcement Learning', 'Adaptation']","""Efficient and robust policy transfer remains a key challenge in reinforcement learning. Policy transfer through warm initialization, imitation, or interacting over a large set of agents with randomized instances, have been commonly applied to solve a variety of Reinforcement Learning (RL) tasks. However, this is far from how behavior transfer happens in the biological world: Humans and animals are able to quickly adapt the learned behaviors between similar tasks and learn new skills when presented with new situations. Here we seek to answer the question: Will learning to combine adaptation reward with environmental reward lead to a more efficient transfer of policies between domains? We introduce a principled mechanism that can \textbf{``Adapt-to-Learn""}, that is adapt the source policy to learn to solve a target task with significant transition differences and uncertainties. We show through theory and experiments that our method leads to a significantly reduced sample complexity of transferring the policies between the tasks.""","""This paper considers inter-domain policy transfer in reinforcement learning. The proposed approach involves adapting existing policies from a source task to a target task by adding a cost related to the difference between the dynamics and trajectory likelihoods of the two tasks. There are three major problems with this paper as it stands, as pointed out by the reviewers. Firstly, the ""KL divergence"" is not a real KL divergence and seems to be only empirically motivated. Then, there are issues with the derivative of the policy gradient. Finally, the theory is not well connected to the proposed algorithm. The rebuttals not only failed to convince the reviewer that raised these issues, but another reviewer lowered their score as a result of these raised points. This is a really interesting idea with compelling experiments, but must be rejected atthis point for the aforementioned reasons."""
958,"""Match prediction from group comparison data using neural networks""","['Neural networks', 'Group comparison', 'Match prediction', 'Rank aggregation']","""We explore the match prediction problem where one seeks to estimate the likelihood of a group of M items preferred over another, based on partial group comparison data. Challenges arise in practice. As existing state-of-the-art algorithms are tailored to certain statistical models, we have different best algorithms across distinct scenarios. Worse yet, we have no prior knowledge on the underlying model for a given scenario. These call for a unified approach that can be universally applied to a wide range of scenarios and achieve consistently high performances. To this end, we incorporate deep learning architectures so as to reflect the key structural features that most state-of-the-art algorithms, some of which are optimal in certain settings, share in common. This enables us to infer hidden models underlying a given dataset, which govern in-group interactions and statistical patterns of comparisons, and hence to devise the best algorithm tailored to the dataset at hand. Through extensive experiments on synthetic and real-world datasets, we evaluate our framework in comparison to state-of-the-art algorithms. It turns out that our framework consistently leads to the best performance across all datasets in terms of cross entropy loss and prediction accuracy, while the state-of-the-art algorithms suffer from inconsistent performances across different datasets. Furthermore, we show that it can be easily extended to attain satisfactory performances in rank aggregation tasks, suggesting that it can be adaptable for other tasks as well.""","""This paper investigates neural networks for group comparison -- i.e., deciding if one group of objects would be preferred over another. The paper received 4 reviews (we requested an emergency review because of a late review that eventually did arrive). R1 recommends Weak Reject, based primarily on unclear presentation, missing details, and concerns about experiments. R2 recommends Reject, also based on concerns about writing, unclear notation, weak baselines, and unclear technical details. In a short review, R3 recommends Weak Accept and suggests some additional experiments, but also indicates that their familiarity with this area is not strong. R4 also recommends Weak Accept and suggests some clarifications in the writing (e.g. additional motivation future work). The authors submitted a response and revision that addresses many of these concerns. Given the split decision, the AC also read the paper; while we see that it has significant merit, we agree with R1 and R2's concerns, and feel the paper needs another round of peer review to address the remaining concerns."""
959,"""Novelty Detection Via Blurring""","['novelty', 'anomaly', 'uncertainty']",""" Conventional out-of-distribution (OOD) detection schemes based on variational autoencoder or Random Network Distillation (RND) are known to assign lower uncertainty to the OOD data than the target distribution. In this work, we discover that such conventional novelty detection schemes are also vulnerable to the blurred images. Based on the observation, we construct a novel RND-based OOD detector, SVD-RND, that utilizes blurred images during training. Our detector is simple, efficient in test time, and outperforms baseline OOD detectors in various domains. Further results show that SVD-RND learns a better target distribution representation than the baselines. Finally, SVD-RND combined with geometric transform achieves near-perfect detection accuracy in CelebA domain.""","""The paper proposes a new method for out-of-distribution detection by combining random network distillation (RND) and blurring (via SVD). The proposed idea is very simple but achieves strong empirical performance, outperforming baseline methods in several OOD detection benchmarks. There were many detailed questions raised by the reviewers but they got mostly resolved, and all reviewers recommend acceptance, and this AC agrees that it is an interesting and effective method worth presenting at ICLR. """
960,"""Harnessing Structures for Value-Based Planning and Reinforcement Learning""","['Deep reinforcement learning', 'value-based reinforcement learning']","""Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning (RL). In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., Q function, for both planning and deep RL. In particular, if the underlying system dynamics lead to some global structures of the Q function, one should be capable of inferring the function better by leveraging such structures. Specifically, we investigate the low-rank structure, which widely exists for big data matrices. We verify empirically the existence of low-rank Q functions in the context of control and deep RL tasks. As our key contribution, by leveraging Matrix Estimation (ME) techniques, we propose a general framework to exploit the underlying low-rank structure in Q functions. This leads to a more efficient planning procedure for classical control, and additionally, a simple scheme that can be applied to value-based RL techniques to consistently achieve better performance on ""low-rank"" tasks. Extensive experiments on control tasks and Atari games confirm the efficacy of our approach.""","""The paper shows empirical evidence that the the optimal action-value function Q* often has a low-rank structure. It uses ideas from the matrix estimation/completion literature to provide a modification of value iteration that benefits from such a low-rank structure. The reviewers are all positive about this paper. They find the idea novel and the writing clear. There have been some questions about the relation of this concept of rank to other definitions and usage of rank in the RL literature. The authors rebuttal seem to be satisfactory to the reviewers. Given these, I recommend acceptance of this paper."""
961,"""Learning Similarity Metrics for Numerical Simulations""","['metric learning', 'CNNs', 'PDEs', 'numerical simulation', 'perceptual evaluation', 'physics simulation']","""We propose a novel approach to compute a stable and generalizing metric (LNSM) with convolutional neural networks (CNN) to compare field data from a variety of numerical simulation sources. Our method employs a Siamese network architecture that is motivated by the mathematical properties of a metric and is known to work well for finding similarities of other data modalities. We leverage a controllable data generation setup with partial differential equation (PDE) solvers to create increasingly different outputs from a reference simulation. In addition, the data generation allows for adjusting the difficulty of the resulting learning task. A central component of our learned metric is a specialized loss function, that introduces knowledge about the correlation between single data samples into the training process. To demonstrate that the proposed approach outperforms existing simple metrics for vector spaces and other learned, image based metrics we evaluate the different methods on a large range of test data. Additionally, we analyze generalization benefits of using the proposed correlation loss and the impact of an adjustable training data difficulty.""","""The authors present a Siamese neural net architecture for learning similarities among field data generated by numerical simulations of partial differential equations. The goal would be to find which two field data are more similar to each. One use case mentioned is the debugging of new numerical simulators, by comparing them with existing ones. The reviewers had mixed opinions on the paper. I agree with a negative comment of all three reviewers that the paper lacks a bit on the originality of the technique and the justification of the new loss proposed, as well as the fact that no strong explicit real world use case was given. I find this problematic especially given that similarities of solutions to PDEs is not a mainstream topic of the conference. Hence a good real world example use of the method would be more convincing."""
962,"""A Graph Neural Network Assisted Monte Carlo Tree Search Approach to Traveling Salesman Problem""","['Traveling Salesman Problem', 'Graph Neural Network', 'Monte Carlo Tree Search']","""We present a graph neural network assisted Monte Carlo Tree Search approach for the classical traveling salesman problem (TSP). We adopt a greedy algorithm framework to construct the optimal solution to TSP by adding the nodes successively. A graph neural network (GNN) is trained to capture the local and global graph structure and give the prior probability of selecting each vertex every step. The prior probability provides a heuristics for MCTS, and the MCTS output is an improved probability for selecting the successive vertex, as it is the feedback information by fusing the prior with the scouting procedure. Experimental results on TSP up to 100 nodes demonstrate that the proposed method obtains shorter tours than other learning-based methods.""","""The paper is a contribution to the recently emerging literature on learning based approaches to combinatorial optimization. The authors propose to pre-train a policy network to imitate SOTA solvers for TSPs. At test time, this policy is then improved, in an alpha-go like manner, with MCTS, using beam-search rollouts to estimate bootstrap values. The main concerns raised by the reviewers is lack of novelty (the proposed algorithm is a straight forward application of graph NNs to MCTS) as well a the experimental results. Although comparing well to other learning based methods, the algorithm is far away from the performance of SOTA solvers. Although well written, the paper is below acceptance threshold. The methodological novelty is low. The reported results are an order of magnitude away from SOTA solvers, while previous work has already reported the general feasibility of learned solvers to TPSs. Furthermore, the overall contribution is somewhat unclear as the policy relies on pre-training with solutions form existing solvers. """
963,"""Asynchronous Stochastic Subgradient Methods for General Nonsmooth Nonconvex Optimization""","['optimziation', 'stochastic optimization', 'asynchronous parallel architecture', 'deep neural networks']","""Asynchronous distributed methods are a popular way to reduce the communication and synchronization costs of large-scale optimization. Yet, for all their success, little is known about their convergence guarantees in the challenging case of general non-smooth, non-convex objectives, beyond cases where closed-form proximal operator solutions are available. This is all the more surprising since these objectives are the ones appearing in the training of deep neural networks. In this paper, we introduce the first convergence analysis covering asynchronous methods in the case of general non-smooth, non-convex objectives. Our analysis applies to stochastic sub-gradient descent methods both with and without block variable partitioning, and both with and without momentum. It is phrased in the context of a general probabilistic model of asynchronous scheduling accurately adapted to modern hardware properties. We validate our analysis experimentally in the context of training deep neural network architectures. We show their overall successful asymptotic convergence as well as exploring how momentum, synchronization, and partitioning all affect performance.""","""This paper considers an interesting theoretical question. However, it would add to the strength of the paper if it was able to meaningfully connect the considered model as well as derived methodology to the challenges and performance that arise in practice. """
964,"""Is Deep Reinforcement Learning Really Superhuman on Atari? Leveling the playing field""","['Reinforcement Learning', 'Deep Learning', 'Atari benchmark', 'Reproducibility']","""Consistent and reproducible evaluation of Deep Reinforcement Learning (DRL) is not straightforward. In the Arcade Learning Environment (ALE), small changes in environment parameters such as stochasticity or the maximum allowed play time can lead to very different performance. In this work, we discuss the difficulties of comparing different agents trained on ALE. In order to take a step further towards reproducible and comparable DRL, we introduce SABER, a Standardized Atari BEnchmark for general Reinforcement learning algorithms. Our methodology extends previous recommendations and contains a complete set of environment parameters as well as train and test procedures. We then use SABER to evaluate the current state of the art, Rainbow. Furthermore, we introduce a human world records baseline, and argue that previous claims of expert or superhuman performance of DRL might not be accurate. Finally, we propose Rainbow-IQN by extending Rainbow with Implicit Quantile Networks (IQN) leading to new state-of-the-art performance. Source code is available for reproducibility.""","""This paper proposes a new benchmark that compares performance of deep reinforcement learning algorithms on the Atari Learning Environment to the best human players. The paper identifies limitations of past evaluations of deep RL agents on Atari. The human baseline scores commonly used in deep RL are not the highest known human scores. To enable learning agents to reach these high scores, the paper recommends allowing the learning agents to play without a time limit. The time limit in Atari is not always consistent across papers, and removing the time limit requires additional software fixes due to some bugs in the game software. These ideas form the core of the paper's proposed new benchmark (SABER). The paper also proposes a new deep RL algorithm that combines earlier ideas. The reviews and the discussion with the authors brought out several strengths and weaknesses of the proposal. One strength was identifying the best known human performance in these Atari games. However, the reviewers were not convinced that this new benchmark is useful. The reviewers raised concerns about using clipped rewards, using games that received substantially different amounts of human effort, comparing learning algorithms to human baselines instead of other learning algorithms, and also the continued use of the Atari environment. Given all these many concerns about a new benchmark, the newly proposed algorithm was not viewed as a distraction. This paper is not ready for publication. The new benchmark proposed for deep reinforcement learning on Atari was not convincing to the reviewers. The paper requires further refinement of the benchmark or further justification for the new benchmark."""
965,"""Enhancing Adversarial Defense by k-Winners-Take-All""","['adversarial defense', 'activation function', 'winner takes all']","""We propose a simple change to existing neural network structures for better defending against gradient-based adversarial attacks. Instead of using popular activation functions (such as ReLU), we advocate the use of k-Winners-Take-All (k-WTA) activation, a C0 discontinuous function that purposely invalidates the neural network models gradient at densely distributed input data points. The proposed k-WTA activation can be readily used in nearly all existing networks and training methods with no significant overhead. Our proposal is theoretically rationalized. We analyze why the discontinuities in k-WTA networks can largely prevent gradient-based search of adversarial examples and why they at the same time remain innocuous to the network training. This understanding is also empirically backed. We test k-WTA activation on various network structures optimized by a training method, be it adversarial training or not. In all cases, the robustness of k-WTA networks outperforms that of traditional networks under white-box attacks.""","""This paper presents new non-linearity function which specially affects regions of the model which are densely valued. The non-linearity is simple: it retains only top-k highest units from the input, while truncating the rest to zero. This also makes the models more robust to adversarial defense which depend on the gradients. The non-linearity function is shown to have better adversarial robustness on CIFAR-10 and SVHN datasets. The paper also presents theoretical analysis for why the non-linearity is a good function. The authors have already incorporated major suggestions by the reviewers and the paper can make significant impact on the community. Thus, I recommend its acceptance."""
966,"""Domain-Agnostic Few-Shot Classification by Learning Disparate Modulators""","['Meta-learning', 'few-shot learning', 'multi-domain']","""Although few-shot learning research has advanced rapidly with the help of meta-learning, its practical usefulness is still limited because most of the researches assumed that all meta-training and meta-testing examples came from a single domain. We propose a simple but effective way for few-shot classification in which a task distribution spans multiple domains including previously unseen ones during meta-training. The key idea is to build a pool of embedding models which have their own metric spaces and to learn to select the best one for a particular task through multi-domain meta-learning. This simplifies task-specific adaptation over a complex task distribution as a simple selection problem rather than modifying the model with a number of parameters at meta-testing time. Inspired by common multi-task learning techniques, we let all models in the pool share a base network and add a separate modulator to each model to refine the base network in its own way. This architecture allows the pool to maintain representational diversity and each model to have domain-invariant representation as well. Experiments show that our selection scheme outperforms other few-shot classification algorithms when target tasks could come from many different domains. They also reveal that aggregating outputs from all constituent models is effective for tasks from unseen domains showing the effectiveness of our framework.""","""This paper addresses the problem of few-shot classification across multiple domains. The main algorithmic contribution consists of a selection criteria to choose the best source domain embedding for a given task using a multi-domain modulator. All reviewers were in agreement that this paper is not ready for publication. Some key concerns were the lack of scalability (though the authors argue that this may not be a concern as all models are only stored during meta-training, still if you want to incorporate many training settings it may become challenging) and low algorithmic novelty. The issue with novelty is that there is inconclusive experimental evidence to justify the selection criteria over simple methods like averaging, especially when considering novel test time domains. The authors argue that since their approach chooses the single best training domain it may not be best suited to generalize to a novel test time domain. Based on the reviews and discussions the AC does not recommend acceptance. The authors should consider revisions for clarity and to further polish their claims providing any additional experiments to justify where appropriate. """
967,"""Constant Time Graph Neural Networks""","['graph neural networks', 'constant time algorithm']","""The recent advancements in graph neural networks (GNNs) have led to state-of-the-art performances in various applications, including chemo-informatics, question-answering systems, and recommender systems. However, scaling up these methods to huge graphs such as social network graphs and web graphs still remains a challenge. In particular, the existing methods for accelerating GNNs are either not theoretically guaranteed in terms of approximation error, or they require at least a linear time computation cost. In this study, we analyze the neighbor sampling technique to obtain a constant time approximation algorithm for GraphSAGE, the graph attention networks (GAT), and the graph convolutional networks (GCN). The proposed approximation algorithm can theoretically guarantee the precision of approximation. The key advantage of the proposed approximation algorithm is that the complexity is completely independent of the numbers of the nodes, edges, and neighbors of the input and depends only on the error tolerance and confidence probability. To the best of our knowledge, this is the first constant time approximation algorithm for GNNs with a theoretical guarantee. Through experiments using synthetic and real-world datasets, we demonstrate the speed and precision of the proposed approximation algorithm and validate our theoretical results.""","""There was some interest in the ideas presented, but this paper was on the borderline and ultimately not able to be accepted for publication at ICLR. The primary reviewer concern was about the level of novelty and significance of the contribution. This was not sufficiently demonstrated."""
968,"""Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering""","['Multi-hop Open-domain Question Answering', 'Graph-based Retrieval', 'Multi-step Retrieval']","""Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question. This paper introduces a new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions. Our retriever model trains a recurrent neural network that learns to sequentially retrieve evidence paragraphs in the reasoning path by conditioning on the previously retrieved documents. Our reader model ranks the reasoning paths and extracts the answer span included in the best reasoning path. Experimental results show state-of-the-art results in three open-domain QA datasets, showcasing the effectiveness and robustness of our method. Notably, our method achieves significant improvement in HotpotQA, outperforming the previous best model by more than 14 points.""","""The paper proposed a multi-hop machine reading method for hotpotqa and squad-open datasets. The reviewers agreed that it is very interesting to learn to retrieve, and the paper presents an interesting solution. Some additional experiments as suggested by the reviewers will help improve the paper further. """
969,"""Word embedding re-examined: is the symmetrical factorization optimal?""","['word embedding', 'matrix factorization', 'linear transformation', 'neighborhood structure']","""As observed in previous works, many word embedding methods exhibit two interesting properties: (1) words having similar semantic meanings are embedded closely; (2) analogy structure exists in the embedding space, such that ''emph{Paris} is to \emph{France} as \emph{Berlin} is to \emph{Germany}''. We theoretically analyze the inner mechanism leading to these nice properties. Specifically, the embedding can be viewed as a linear transformation from the word-context co-occurrence space to the embedding space. We reveal how the relative distances between nodes change during this transforming process. Such linear transformation will result in these good properties. Based on the analysis, we also provide the answer to a question whether the symmetrical factorization (e.g., \texttt{word2vec}) is better than traditional SVD method. We propose a method to improve the embedding further. The experiments on real datasets verify our analysis.""","""The paper studies word embeddings using the matrix factorization framework introduced by Levy et al 2015. The authors provide a theoretical explanation for how the hyperparameter alpha controls the distance between words in the embedding and a method to estimate the optimal alpha. The authors also provide experiments showing the alpha found using their method is close to the alpha that gives the highest performance on the word-similarity task on several datasets. The paper received 2 weak rejects and 1 weak accept. The reviews were unchanged after the rebuttal, with even the review for weak accept (R2) indicating that they felt the submission to be of low quality. Initially, reviewers commented that while the work seemed solid and provided insights into the problem of learning word embeddings, the paper needed to improve their positioning with respect to prior work on word embeddings and add missing citations. In the revision, the authors improved the related work, but removed the conclusion. The current version of the paper is still low quality and has the following issues 1. The paper exposition still needs improvement and it would benefit from another review pass Following R3's suggestions, the authors have made various improvements to the paper, including modifying the terminology and contextualizing the work. However, as R3 suggests, the paper still needs more rewriting to clearly articulate the contribution and how it relates to prior work throughout the paper. In addition, the conclusion was removed and the paper still needs an editing pass as there are still many language/grammar issues. Page 5: ""inherites"" -> ""inherits"" Page 5: ""top knn"" -> ""top k"" 2. More experimental evaluation is needed. For instance, R1 suggested that the authors perform additional experiments on other tasks (e.g. NER, POS Tagging). The authors indicated that this was not a focus of their work as other works have already looked at the impact of alpha on other task. While prior works has looked at the correlation of alpha vs performance on the task, they have not looked at whether alpha estimated the method proposed by the author will give good performance on these tasks as well. Including such analysis will make this a stronger paper. Overall, there are some promising elements in the paper but the quality of the paper needs to be improved. 
The authors are encouraged to improve the paper by adding more experimental evaluation on other tasks, improving the writing, and incorporating other reviewer comments, and to resubmit to an appropriate venue. """
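For context on the alpha hyperparameter discussed in this meta-review: in the Levy et al. factorization view, word vectors are built from a truncated SVD of a PPMI co-occurrence matrix with the singular values raised to a power alpha. A compact sketch of that construction follows; the authors' procedure for estimating the optimal alpha is not reproduced here.

```python
import numpy as np

def svd_embedding(ppmi, dim, alpha=0.5):
    """Factorize a PPMI co-occurrence matrix into word vectors; alpha
    interpolates between a symmetric factorization (alpha=0.5) and a plain
    truncated SVD (alpha=1)."""
    u, s, _ = np.linalg.svd(ppmi)
    return u[:, :dim] * (s[:dim] ** alpha)     # word vectors W = U_d * Sigma_d^alpha
```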
970,"""Analysis and Interpretation of Deep CNN Representations as Perceptual Quality Features""","['interpretation', 'perceptual quality', 'perceptual loss', 'image-restoration.']","""Pre-trained Deep Convolutional Neural Network (CNN) features have popularly been used as full-reference perceptual quality features for CNN based image quality assessment, super-resolution, image restoration and a variety of image-to-image translation problems. In this paper, to get more insight, we link basic human visual perception to characteristics of learned deep CNN representations as a novel and first attempt to interpret them. We characterize the frequency and orientation tuning of channels in trained object detection deep CNNs (e.g., VGG-16) by applying grating stimuli of different spatial frequencies and orientations as input. We observe that the behavior of CNN channels as spatial frequency and orientation selective filters can be used to link basic human visual perception models to their characteristics. Doing so, we develop a theory to get more insight into deep CNN representations as perceptual quality features. We conclude that sensitivity to spatial frequencies that have lower contrast masking thresholds in human visual perception and a definite and strong orientation selectivity are important attributes of deep CNN channels that deliver better perceptual quality features. ""","""This paper aims to analyze CNN representations in terms of how well they measure the perceptual severity of image distortions. In particularly, (a) sensitivity to changes in visual frequency and (b) orientation selectivity was used. Although the reviewers agree that this paper presents some interesting initial findings with a promising direction, the majority of the reviewers (three out of four) find that the paper is incomplete, raising concerns in terms of experimental settings and results. Multiple reviewers explicitly asked for additional experiments to confirm whether the presented empirical results can be used to improve results of an image generation. Responding to the reviews, the authors added a super-resolution experiment in the appendix, which the reviewers believe is the right direction but is still preliminary. Overall, we believe the paper reports interesting findings but it will require a series of additional work to make it ready for the publication."""
971,"""Pretraining boosts out-of-domain robustness for pose estimation""","['pose estimation', 'robustness', 'out-of-domain', 'transfer learning']","""Deep neural networks are highly effective tools for human and animal pose estimation. However, robustness to out-of-domain data remains a challenge. Here, we probe the transfer and generalization ability for pose estimation with two architecture classes (MobileNetV2s and ResNets) pretrained on ImageNet. We generated a novel dataset of 30 horses that allowed for both within-domain and out-of-domain (unseen horse) testing. We find that pretraining on ImageNet strongly improves out-of-domain performance. Moreover, we show that for both pretrained and networks trained from scratch, better ImageNet-performing architectures perform better for pose estimation, with a substantial improvement on out-of-domain data when pretrained. Collectively, our results demonstrate that transfer learning is particularly beneficial for out-of-domain robustness.""","""The paper presents a new dataset, containing around 8k pictures of 30 horses in different poses. This is used to study the benefits of pretraining for in- and out-of-domain images. The paper is somewhat lacking in novelty. Others have studied the same type of pre-training in the past using other datasets, which makes the dataset the main novelty. But reviewers raised many questions about the dataset, in particular about how many of the frames of the same horse might be similar, and of how few horses there are; few enough to potentially not make the results statistically meaningful. The authors replied to these questions more by appealing to standards in other fields than by explaining why this is a good choice. Apart from these crucial weaknesses, however, the research appears good. This is a pretty clear reject based on lack of novelty and oddities with the dataset."""
972,"""Topological Autoencoders""","['Topology', 'Deep Learning', 'Autoencoders', 'Persistent Homology', 'Representation Learning', 'Dimensionality Reduction', 'Topological Machine Learning', 'Topological Data Analysis']","""We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we can construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors.""","""This paper introduces a new variant of autoencoders with an topological loss term. The reviewers appreciated part of the paper and it is borderline. However, there are enough reservations to argue for it will be better for the paper to updated and submitted to next conference. Rejection is recommended. """
973,"""Learn Interpretable Word Embeddings Efficiently with von Mises-Fisher Distribution""","['word embedding', 'natural language processing']","""Word embedding plays a key role in various tasks of natural language processing. However, the dominant word embedding models don't explain what information is carried with the resulting embeddings. To generate interpretable word embeddings we intend to replace the word vector with a probability density distribution. The insight here is that if we regularize the mixture distribution of all words to be uniform, then we can prove that the inner product between word embeddings represent the point-wise mutual information between words. Moreover, our model can also handle polysemy. Each word's probability density distribution will generate different vectors for its various meanings. We have evaluated our model in several word similarity tasks. Results show that our model can outperform the dominant models consistently in these tasks.""","""The paper presents an approach to learning interpretable word embeddings. The reviewers put this in the lower half of the submissions. One reason seems to be the size of the training corpora used in the experiments, as well as the limited number of experiments; another that the claim of interpretability seems over-stated. There's also a lack of comparison to related work. I also think it would be interesting to move beyond the standard benchmarks - and either use word embeddings downstream or learn word embeddings for multiple languages [you should do this, regardless] and use Procrustes analysis or the like to learn a mapping: A good embedding algorithm should induce more linearly alignable embedding spaces. NB: While the authors cite other work by these authors, [0] seems relevant, too. Other related work: [1-4]. [0] pseudo-url [1] pseudo-url [2] pseudo-url [3] pseudo-url [4]pseudo-url"""
974,"""Learning to Learn Kernels with Variational Random Features""","['Meta-learning', 'few-shot learning', 'Random Fourier Feature', 'Kernel learning']","""Meta-learning for few-shot learning involves a meta-learner that acquires shared knowledge from a set of prior tasks to improve the performance of a base-learner on new tasks with a small amount of data. Kernels are commonly used in machine learning due to their strong nonlinear learning capacity, which have not yet been fully investigated in the meta-learning scenario for few-shot learning. In this work, we explore kernel approximation with random Fourier features in the meta-learning framework for few-shot learning. We propose learning adaptive kernels by meta variational random features (MetaVRF), which is formulated as a variational inference problem. To explore shared knowledge across diverse tasks, our MetaVRF deploys an LSTM inference network to generate informative features, which can establish kernels of highly representational power with low spectral sampling rates, while also being able to quickly adapt to specific tasks for improved performance. We evaluate MetaVRF on a variety of few-shot learning tasks for both regression and classification. Experimental results demonstrate that our MetaVRF can deliver much better or competitive performance than recent meta-learning algorithms.""","""The paper looks at meta learning using random Fourier features for kernel approximations. The idea is to learn adaptive kernels by inferring Fourier bases from related tasks that can be used for the new task. A key insight of the paper is to use an LSTM to share knowledge across tasks. The paper tackles an interesting problem, and the idea to use a meta learning setting for transfer learning within a kernel setting is quite interesting. It may be worthwhile relating this work to this paper by Titsias et al. (pseudo-url), which looks at a slightly different setting (continual learning with Gaussian processes, where information is shared through inducing variables). Having read the paper, I have some comments/questions: 1. log-likelihood should be called log-marginal likelihood (wherever the ELBO shows up) 2. The derivation of the ELBO confuses me (section 3.1). First, I don't know whether this ELBO is at training time or at test time. If it was at training time, then I agree with Reviewer #1 in the sense that pseudo-formula should not depend on either pseudo-formula or {S} If it is at test time, the log-likelihood term should not depend on pseudo-formula (which is the training set), because S is taken care of by S) However, critically, S) should not depend on pseudo-formula . I agree with Reviewer #1 that this part is confusing, and the authors' response has not helped me to diffuse this confusion (e.g., priors should not be conditioned on any data). 3. The tasks are indirectly represented by a set of basis functions, which are represented by pseudo-formula for task pseudo-formula . In the paper, these tasks are then inferred using variational inference and an LSTM. It may be worthwhile relating this to the latent-variable approach by Saemundsson et al. (pseudo-url) for meta learning. 4. The expression ""meta ELBO"" is inappropriate. This is a simple ELBO, nothing meta about it. If we think of the tasks as latent variables (which the paper also states), this ELBO in equation (9) is a vanilla ELBO that is used in variational inference. 5. For the LSTM, does it make a difference how the tasks are ordered? 6. 
Experiments: Figure 3 clearly needs error bars, and MSEs need to be reported with error bars as well; 6a) Figures 4 and 5 need error bars. 6b) Error bars should also be based on different random initializations of the learning procedure to evaluate the robustness of the methods (use at least 20 random seeds). I don't think any of the results is based on more than one random seed (at least I could not find any statement regarding this). 7. Tables 1 and 2: The highlighting in bold is unclear. If it is supposed to highlight the best methods, then the highlighting is dishonest in the sense that methods that perform similarly are not highlighted. For example, in Table 1, VERSA or MetaVRF (w/o LSTM) could be highlighted for all tasks because the error bars are so huge (similarly in Table 2). 8. One of the things I'm missing completely is a discussion of computational demand: How efficiently can we train the model, and how long does it take to make predictions? It would be great to have some discussion about this in the paper and to relate this to other approaches. 9. The paper also evaluates the effect of having an LSTM that correlates tasks in the posterior. The analysis shows that there are some marginal gains, but none of them is statistically significant. I would have liked to see much more analysis of the effect/benefit of the LSTM. Summary: The paper addresses an interesting problem. However, I have reservations regarding some theoretical bits and regarding the quality of the evaluation. Given that this paper also exceeds the 8-page (default) limit, we are supposed to ask for higher acceptance standards than for an 8-page paper. Hence, putting everything together, I recommend rejecting this paper."""
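For readers unfamiliar with the random Fourier features at the core of MetaVRF: a shift-invariant kernel can be approximated by an explicit cosine feature map whose bases omega are drawn from the kernel's spectral density. MetaVRF's contribution is to infer these bases per task with an LSTM-coupled variational posterior; the sketch below shows only the standard RFF map with Gaussian bases, with illustrative sizes.

```python
import numpy as np

def rff(x, omega, b):
    """Random Fourier feature map z(x) approximating a shift-invariant kernel:
    k(x, y) ~= z(x) . z(y), with omega drawn from the kernel's spectral density."""
    d = omega.shape[1]
    return np.sqrt(2.0 / d) * np.cos(x @ omega + b)

# For an RBF kernel, omega ~ N(0, I / sigma^2) and b ~ Uniform(0, 2*pi).
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 10))
omega = rng.normal(size=(10, 64))
b = rng.uniform(0, 2 * np.pi, size=64)
features = rff(x, omega, b)            # (5, 64) kernel features
```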
975,"""Efficacy of Pixel-Level OOD Detection for Semantic Segmentation""","['Out-of-Distribution Detection', 'Semantic Segmentation', 'Deep Learning']","""The detection of out of distribution samples for image classification has been widely researched. Safety critical applications, such as autonomous driving, would benefit from the ability to localise the unusual objects causing the image to be out of distribution. This paper adapts state-of-the-art methods for detecting out of distribution images for image classification to the new task of detecting out of distribution pixels, which can localise the unusual objects. It further experimentally compares the adapted methods on two new datasets derived from existing semantic segmentation datasets using PSPNet and DeeplabV3+ architectures, as well as proposing a new metric for the task. The evaluation shows that the performance ranking of the compared methods does not transfer to the new task and every method performs significantly worse than their image-level counterparts.""","""This paper studies the problem of out-of-distribution (OOD) detection for semantic segmentation. Reviewers and AC agree that the problem might be important and interesting, but the paper is not ready to publish in various aspects, e.g., incremental contribution and less-motivated/convincing experimental setups/results. Hence, I recommend rejection."""
976,"""Finding Deep Local Optima Using Network Pruning""","['network pruning', 'non-convex optimization']","""Artificial neural networks (ANNs) are very popular nowadays and offer reliable solutions to many classification problems. However, training deep neural networks (DNN) is time-consuming due to the large number of parameters. Recent research indicates that these DNNs might be over-parameterized and different solutions have been proposed to reduce the complexity both in the number of parameters and in the training time of the neural networks. Furthermore, some researchers argue that after reducing the neural network complexity via connection pruning, the remaining weights are irrelevant and retraining the sub-network would obtain a comparable accuracy with the original one. This may hold true in most vision problems where we always enjoy a large number of training samples and research indicates that most local optima of the convolutional neural networks may be equivalent. However, in non-vision sparse datasets, especially with many irrelevant features where a standard neural network would overfit, this might not be the case and there might be many non-equivalent local optima. This paper presents empirical evidence for these statements and an empirical study of the learnability of neural networks (NNs) on some challenging non-linear real and simulated data with irrelevant variables. Our simulation experiments indicate that the cross-entropy loss function on XOR-like data has many local optima, and the number of local optima grows exponentially with the number of irrelevant variables. We also introduce a connection pruning method to improve the capability of NNs to find a deep local minimum even when there are irrelevant variables. Furthermore, the performance of the discovered sparse sub-network degrades considerably either by retraining from scratch or the corresponding original initialization, due to the existence of many bad optima around. Finally, we will show that the performance of neural networks for real-world experiments on sparse datasets can be recovered or even improved by discovering a good sub-network architecture via connection pruning.""","""This paper provides empirical evidence on synthetic examples with a focus on understanding the relationship between the number of good local minima and number of irrelevant features. The reviewers find the problem discussed to be important. One of the reviewers has pointed out that the paper does not present deep insights and is more suitable for workshops. The authors did not provide a rebuttal, and it appears that the reviewers opinion has not changed. The current score is clearly not sufficient to accept this paper in its current form. Due to this reason, I recommend to reject this paper. """
977,"""Regularly varying representation for sentence embedding""","['extreme value theory', 'classification', 'supvervised learning', 'data augmentation', 'representation learning']","""The dominant approaches to sentence representation in natural language rely on learning embeddings on massive corpuses. The obtained embeddings have desirable properties such as compositionality and distance preservation (sentences with similar meanings have similar representations). In this paper, we develop a novel method for learning an embedding enjoying a dilation invariance property. We propose two algorithms: Orthrus, a classification algorithm, constrains the distribution of the embedded variable to be regularly varying, i.e. multivariate heavy-tail. and uses Extreme Value Theory (EVT) to tackle the classification task on two separate regions: the tail and the bulk. Hydra, a text generation algorithm for dataset augmentation, leverages the invariance property of the embedding learnt by Orthrus to generate coherent sentences with controllable attribute, e.g. positive or negative sentiment. Numerical experiments on synthetic and real text data demonstrate the relevance of the proposed framework. ""","""Three reviewers recommend rejection. After a good rebuttal, the first reviewer is more positive about the paper yet still feels the paper is not ready for publication. The authors are encouraged to strengthen their work and resubmit to a future venue."""
978,"""BREAKING CERTIFIED DEFENSES: SEMANTIC ADVERSARIAL EXAMPLES WITH SPOOFED ROBUSTNESS CERTIFICATES""",[],"""Defenses against adversarial attacks can be classified into certified and non-certified. Certifiable defenses make networks robust within a certain pseudo-formula -bounded radius, so that it is impossible for the adversary to make adversarial examples in the certificate bound. We present an attack that maintains the imperceptibility property of adversarial examples while being outside of the certified radius. Furthermore, the proposed ""Shadow Attack"" can fool certifiably robust networks by producing an imperceptible adversarial example that gets misclassified and produces a strong ``spoofed'' certificate.""","""This work presents a ""shadow attack"" that fools certifiably robust networks by producing imperceptible adversarial examples by search outside of the certified radius. The reviewers are generally positive on the novelty and contribution of the work. """
979,"""At Your Fingertips: Automatic Piano Fingering Detection""","['piano', 'fingering', 'dataset']","""Automatic Piano Fingering is a hard task which computers can learn using data. As data collection is hard and expensive, we propose to automate this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision techniques. Running this process on 90 videos results in the largest dataset for piano fingering with more than 150K notes. We show that when running a previously proposed model for automatic piano fingering on our dataset and then fine-tuning it on manually labeled piano fingering data, we achieve state-of-the-art results. In addition to the fingering extraction method, we also introduce a novel method for transferring deep-learning computer-vision models to work on out-of-domain data, by fine-tuning it on out-of-domain augmentation proposed by a Generative Adversarial Network (GAN). For demonstration, we anonymously release a visualization of the output of our process for a single video on pseudo-url""","""The paper shows an automatic piano fingering algorithm. The idea is good. But the reviewers find that the novelty is limited and it is an incremental work. All the reivewers agree to reject."""
980,"""Context Based Machine Translation With Recurrent Neural Network For English-Amharic Translation ""","['Context based machine translation', 'machine translation', 'Neural network machine translation', 'English to Amharic machine translation']","""The current approaches for machine translation usually require large set of parallel corpus in order to achieve fluency like in the case of neural machine translation (NMT), statistical machine translation (SMT) and example-based machine translation (EBMT). The context awareness of phrase-based machine translation (PBMT) approaches is also questionable. This research develops a system that translates English text to Amharic text using a combination of context based machine translation (CBMT) and a recurrent neural network machine translation (RNNMT). We built a bilingual dictionary for the CBMT system to use along with a large target corpus. The RNNMT model has then been provided with the output of the CBMT and a parallel corpus for training. Our combinational approach on English-Amharic language pair yields a performance improvement over the simple neural machine translation (NMT).""","""The authors propose a model which combines a neural machine translation system and a context-based machine translation model, which combines some aspects of rule and example based MT. This paper presents work based on obsolete techniques, has relatively low novelty, has problematic experimental design and lacks compelling performance improvements. The authors rebutted some of the reviewers claims, but did not convince them to change their scores. """
981,"""From English to Foreign Languages: Transferring Pre-trained Language Models""","['pretrained language model', 'zero-shot transfer', 'parsing', 'natural language inference']","""Pre-trained models have demonstrated their effectiveness in many downstream natural language processing (NLP) tasks. The availability of multilingual pre-trained models enables zero-shot transfer of NLP tasks from high resource languages to low resource ones. However, recent research in improving pre-trained models focuses heavily on English. While it is possible to train the latest neural architectures for other languages from scratch, it is undesirable due to the required amount of compute. In this work, we tackle the problem of transferring an existing pre-trained model from English to other languages under a limited computational budget. With a single GPU, our approach can obtain a foreign BERT-base model within a day and a foreign BERT-large within two days. Furthermore, evaluating our models on six languages, we demonstrate that our models are better than multilingual BERT on two zero-shot tasks: natural language inference and dependency parsing.""","""This paper proposes a method to transfer a pretrained language model in one language (English) to a new language. The method first learns word embeddings for the new language while keeping the the body of the English model fixed, and further refines it in a fine-tuning procedure as a bilingual model. Experiments on XNLI and dependency parsing demonstrate the benefit of the proposed approach. R3 pointed out that the paper is missing an important baseline, which is a bilingual BERT model. The authors acknowledged this in their rebuttal and ran a preliminary experiment to obtain a first set of results. However, since the main claim of the paper depends on this new experiment, which was not finished by the end of the rebuttal period, it is difficult to accept the paper in its current state. In an internal discussion, R1 also agreed that this baseline is critical to support the paper. As a result, I recommend to reject this paper for ICLR. I encourage the authors to update their paper with the new experiment for submission to future conferences (given consistent results)."""
982,"""N-BEATS: Neural basis expansion analysis for interpretable time series forecasting""","['time series forecasting', 'deep learning']","""We focus on solving the univariate times series point forecasting problem using deep learning. We propose a deep neural architecture based on backward and forward residual links and a very deep stack of fully-connected layers. The architecture has a number of desirable properties, being interpretable, applicable without modification to a wide array of target domains, and fast to train. We test the proposed architecture on several well-known datasets, including M3, M4 and TOURISM competition datasets containing time series from diverse domains. We demonstrate state-of-the-art performance for two configurations of N-BEATS for all the datasets, improving forecast accuracy by 11% over a statistical benchmark and by 3% over last year's winner of the M4 competition, a domain-adjusted hand-crafted hybrid between neural network and statistical time series models. The first configuration of our model does not employ any time-series-specific components and its performance on heterogeneous datasets strongly suggests that, contrarily to received wisdom, deep learning primitives such as residual blocks are by themselves sufficient to solve a wide range of forecasting problems. Finally, we demonstrate how the proposed architecture can be augmented to provide outputs that are interpretable without considerable loss in accuracy.""","""The paper received positive recommendation from all reviewers. Accept."""
983,"""Support-guided Adversarial Imitation Learning""","['Adversarial Imitation Learning', 'Reinforcement Learning', 'Learning from Demonstrations']","""We propose Support-guided Adversarial Imitation Learning (SAIL), a generic imitation learning framework that unifies support estimation of the expert policy with the family of Adversarial Imitation Learning (AIL) algorithms. SAIL addresses two important challenges of AIL, including the implicit reward bias and potential training instability. We also show that SAIL is at least as efficient as standard AIL. In an extensive evaluation, we demonstrate that the proposed method effectively handles the reward bias and achieves better performance and training stability than other baseline methods on a wide range of benchmark control tasks.""","""The submission proposes a method for adversarial imitation learning that combines two previous approaches - GAIL and RED - by simply multiplying their reward functions. The claim is that this adaptation allows for better learning - both handling reward bias and improving training stability. The reviewers were divided in their assessment of the paper, criticizing the empirical results and the claims made by the authors. In particular, the primary claims of handling reward bias and reducing variance seem to be not well justified, including results which show that training stability only substantially improves when SAIL-b, which uses reward clipping, is used. Although the paper is promising, the recommendation is for a reject at this time. The authors are encouraged to clarify their claims and supporting experiments and to validate their method on more challenging domains."""
984,"""Understanding and Robustifying Differentiable Architecture Search""","['Neural Architecture Search', 'AutoML', 'AutoDL', 'Deep Learning', 'Computer Vision']","""Differentiable Architecture Search (DARTS) has attracted a lot of attention due to its simplicity and small search costs achieved by a continuous relaxation and an approximation of the resulting bi-level optimization problem. However, DARTS does not work robustly for new problems: we identify a wide range of search spaces for which DARTS yields degenerate architectures with very poor test performance. We study this failure mode and show that, while DARTS successfully minimizes validation loss, the found solutions generalize poorly when they coincide with high validation loss curvature in the architecture space. We show that by adding one of various types of regularization we can robustify DARTS to find solutions with less curvature and better generalization properties. Based on these observations, we propose several simple variations of DARTS that perform substantially more robustly in practice. Our observations are robust across five search spaces on three image classification tasks and also hold for the very different domains of disparity estimation (a dense regression task) and language modelling.""","""This paper studies the properties of Differentiable Architecture Search, and in particular when it fails, and then proposes modifications that improve its performance for several tasks. The reviews were all very supportive with three Accept opinions, and authors have addressed their comments and suggestions. Given the unanimous reviews, this appears to be a clear Accept. """
985,"""Universal Adversarial Attack Using Very Few Test Examples""","['universal', 'adversarial', 'SVD']","""Adversarial attacks such as Gradient-based attacks, Fast Gradient Sign Method (FGSM) by Goodfellow et al.(2015) and DeepFool by Moosavi-Dezfooli et al. (2016) are input-dependent, small pixel-wise perturbations of images which fool state of the art neural networks into misclassifying images but are unlikely to fool any human. On the other hand a universal adversarial attack is an input-agnostic perturbation. The same perturbation is applied to all inputs and yet the neural network is fooled on a large fraction of the inputs. In this paper, we show that multiple known input-dependent pixel-wise perturbations share a common spectral property. Using this spectral property, we show that the top singular vector of input-dependent adversarial attack directions can be used as a very simple universal adversarial attack on neural networks. We evaluate the error rates and fooling rates of three universal attacks, SVD-Gradient, SVD-DeepFool and SVD-FGSM, on state of the art neural networks. We show that these universal attack vectors can be computed using a small sample of test inputs. We establish our results both theoretically and empirically. On VGG19 and VGG16, the fooling rate of SVD-DeepFool and SVD-Gradient perturbations constructed from observing less than 0.2% of the validation set of ImageNet is as good as the universal attack of Moosavi-Dezfooli et al. (2017a). To prove our theoretical results, we use matrix concentration inequalities and spectral perturbation bounds. For completeness, we also discuss another recent approach to universal adversarial perturbations based on (p, q)-singular vectors, proposed independently by Khrulkov & Oseledets (2018), and point out the simplicity and efficiency of our universal attack as the key difference.""","""The paper proposes to get universal adversarial examples using few test samples. The approach is very close to the Khrulkov & Oseledets, and the abstract for some reason claims that it was proposed independently, which looks like a very strange claim. Overall, all reviewers recommend rejection, and I agree with them."""
986,"""Learning from Rules Generalizing Labeled Exemplars""","['Learning from Rules', 'Learning from limited labeled data', 'Weakly Supervised Learning']","""In many applications labeled data is not readily available, and needs to be collected via pain-staking human supervision. We propose a rule-exemplar method for collecting human supervision to combine the efficiency of rules with the quality of instance labels. The supervision is coupled such that it is both natural for humans and synergistic for learning. We propose a training algorithm that jointly denoises rules via latent coverage variables, and trains the model through a soft implication loss over the coverage and label variables. The denoised rules and trained model are used jointly for inference. Empirical evaluation on five different tasks shows that (1) our algorithm is more accurate than several existing methods of learning from a mix of clean and noisy supervision, and (2) the coupled rule-exemplar supervision is effective in denoising rules.""","""The paper addresses the problem of costly human supervision for training supervised learning methods. The authors propose a joint approach for more effectively collecting supervision data from humans, by extracting rules and their exemplars, and a model for training on this data. They demonstrate the effectiveness of their approach on multiple datasets by comparing to a range of baselines. Based on the reviews and my own reading I recommend to accept this paper. The approach makes intuitively a lot of sense and is well explained. The experimental results are convincing. """
987,"""Geom-GCN: Geometric Graph Convolutional Networks""","['Deep Learning', 'Graph Convolutional Network', 'Network Geometry']","""Message-passing neural networks (MPNNs) have been successfully applied in a wide variety of applications in the real world. However, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data: losing the structural information of nodes in neighborhoods and lacking the ability to capture long-range dependencies in disassortative graphs. Few studies have noticed the weaknesses from different perspectives. From the observations on classical neural network and network geometry, we propose a novel geometric aggregation scheme for graph neural networks to overcome the two weaknesses. The behind basic idea is the aggregation on a graph can benefit from a continuous space underlying the graph. The proposed aggregation scheme is permutation-invariant and consists of three modules, node embedding, structural neighborhood, and bi-level aggregation. We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs. Experimental results show the proposed Geom-GCN achieved state-of-the-art performance on a wide range of open datasets of graphs.""","""This paper is consistently supported by all three reviewers and thus an accept is recommended."""
988,"""Out-of-Distribution Image Detection Using the Normalized Compression Distance""","['Out-of-Distribution Detection', 'Normalized Compression Distance', 'Convolutional Neural Networks']","""On detection of the out-of-distribution images, whose underlying distribution is different from that of the training dataset, we tackle to apply out-of-distribution detection methods to already deployed convolutional neural networks. Most recent approaches have to utilize out-of-distribution samples for validation or retrain the model, which makes it less practical for real-world applications. We propose a novel out-of-distribution detection method MALCOM, which neither uses any out-of-distribution samples nor retrain the model. Inspired by the method using the global average pooling on the feature maps of the convolutional neural networks, the goal of our method is to extract informative sequential patterns from the feature maps. To this end, we introduce a similarity metric which focuses on the shared patterns between two sequences. In short, MALCOM uses both the global average and spatial pattern of the feature maps to accurately identify out-of-distribution samples. ""","""This paper proposes an out-of-distribution detection (OOD) method without assuming OOD in validation. As reviewers mentioned, I think the idea is interesting and the proposed method has potential. However, I think the paper can be much improved and is not ready to publish due to the followings given reviewers' comments: (a) The prior work also has some experiments without OOD in validation, i.e., use adversarial examples (AE) instead in validation. Hence, the main motivation of this paper becomes weak unless the authors justify enough why AE is dangerous to use in validation. (b) The performance of their replication of the prior method is far lower than reported. I understand that sometimes it is not easy to reproduce the prior results. In this case, one can put the numbers in the original paper. Or, one can provide detailed analysis why the prior method should fail in some cases. (c) The authors follow exactly same experimental settings in the prior works. But, the reported score of the prior method is already very high in the settings, and the gain can be marginal. Namely, the considered settings are more or less ""easy problems"". Hence, additional harder interesting OOD settings, e.g., motivated by autonomous driving, would strength the paper. Hence, I recommend rejection."""
989,"""Simple and Effective Stochastic Neural Networks""","['stochastic neural networks', 'pruning', 'adversarial defence', 'label noise']","""Stochastic neural networks (SNNs) are currently topical, with several paradigms being actively investigated including dropout, Bayesian neural networks, variational information bottleneck (VIB) and noise regularized learning. These neural network variants impact several major considerations, including generalization, network compression, and robustness against adversarial attack and label noise. However, many existing networks are complicated and expensive to train, and/or only address one or two of these practical considerations. In this paper we propose a simple and effective stochastic neural network (SE-SNN) architecture for discriminative learning by directly modeling activation uncertainty and encouraging high activation variability. Compared to existing SNNs, our SE-SNN is simpler to implement and faster to train, and produces state of the art results on network compression by pruning, adversarial defense and learning with label noise.""","""This paper proposes to use stacked layers of Gaussian latent variables with a maxent objective function as a regulariser. I agree with the reviewers that there is very little novelty and the experiments are not very convincing."""
990,"""Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies""","['Meta reinforcement learning', 'subtask graph']","""We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent. The agent needs to quickly adapt to the task over few episodes during adaptation phase to maximize the return in the test phase. Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference (MSGI), which infers the latent parameter of the task by interacting with the environment and maximizes the return given the latent parameter. To facilitate learning, we adopt an intrinsic reward inspired by upper confidence bound (UCB) that encourages efficient exploration. Our experiment results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter, and to adapt more efficiently than existing meta RL and hierarchical RL methods.""","""This work formulates and tackles a few-shot RL problem called subtask graph inference, where hierarchical tasks are characterized by a graph describing all subtasks and their dependencies. In other words, each task consists of multiple subtasks and completing a subtask provides a reward. The authors propose a meta-RL approach to meta-train a policy that infers the subtask graph from any new task data in a few shots. Empirical experiments are performed on different domains, including Startcraft II, highlighting the efficiency and scalability of the proposed approach. Most concerns of reviewers were addressed in the rebuttal. The main remaining concerns about this work are that it is mainly an extension of Sohn et al. (2018), making the contribution somewhat incremental, and that its applicability is limited to problems where subtasks are provided. However, all reviewers being positive about this paper, I would still recommend acceptance. """
991,"""Adversarial Robustness as a Prior for Learned Representations""","['adversarial robustness', 'adversarial examples', 'robust optimization', 'representation learning', 'feature visualization']","""An important goal in deep learning is to learn versatile, high-level feature representations of input data. However, standard networks' representations seem to possess shortcomings that, as we illustrate, prevent them from fully realizing this goal. In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks. It turns out that representations learned by robust models address the aforementioned shortcomings and make significant progress towards learning a high-level encoding of inputs. In particular, these representations are approximately invertible, while allowing for direct visualization and manipulation of salient input features. More broadly, our results indicate adversarial robustness as a promising avenue for improving learned representations.""","""The paper proposes recasting robust optimization as regularizer for learning representations by neural networks, resulting e.g. in more semantically meaningful representations. The reviewers found that the claimed contributions were well supported by the experimental evidence. The reviewers noted a few minor points regarding clarity that seem to have been addressed. The problems addressed are very relevant to the ICLR community (representation learning and adversarial robustness). However, the reviewers were not convinced by the novelty of the paper. A big part of the discussion focused on prior work by the authors that is to be published at NeurIPS. This paper was not referenced in the manuscript but does reduce the novelty of the present submission. In contrast to the current submission, that paper focuses on manipulating the learned manipulations to solve image generation tasks, whereas the current paper focuses on the underlying properties of the representation. Since the underlying phenomenon had been described in the earlier paper and the current submission does not introduce a new approach / algorithm, the paper was deemed to lack the novelty for acceptance to ICLR. """
992,"""iWGAN: an Autoencoder WGAN for Inference""","['Generative model', 'Autoencoder', 'Inference']","""Generative Adversarial Networks (GANs) have been impactful on many problems and applications but suffer from unstable training. Wasserstein GAN (WGAN) leverages the Wasserstein distance to avoid the caveats in the minmax two-player training of GANs but has other defects such as mode collapse and lack of metric to detect the convergence. We introduce a novel inference WGAN (iWGAN) model, which is a principled framework to fuse auto-encoders and WGANs. The iWGAN jointly learns an encoder network and a generative network using an iterative primal dual optimization process. We establish the generalization error bound of iWGANs. We further provide a rigorous probabilistic interpretation of our model under the framework of maximum likelihood estimation. The iWGAN, with a clear stopping criteria, has many advantages over other autoencoder GANs. The empirical experiments show that our model greatly mitigates the symptom of mode collapse, speeds up the convergence, and is able to provide a measurement of quality check for each individual sample. We illustrate the ability of iWGANs by obtaining a competitive and stable performance with state-of-the-art for benchmark datasets.""","""This paper proposes a new way to stabilise GAN training. The reviews were very mixed but taken together below acceptance threshold. Rejection is recommended with strong motivation to work on the paper for next conference. This is potentially an important contribution. """
993,"""Visual Imitation with Reinforcement Learning using Recurrent Siamese Networks""","['imitation learning', 'reinforcement learning', 'imitation from video']","""It would be desirable for a reinforcement learning (RL) based agent to learn behaviour by merely watching a demonstration. However, defining rewards that facilitate this goal within the RL paradigm remains a challenge. Here we address this problem with Siamese networks, trained to compute distances between observed behaviours and the agents behaviours. Given a desired motion such Siamese networks can be used to provide a reward signal to an RL agent via the distance between the desired motion and the agents motion. We experiment with an RNN-based comparator model that can compute distances in space and time between motion clips while training an RL policy to minimize this distance. Through experimentation, we have had also found that the inclusion of multi-task data and an additional image encoding loss helps enforce the temporal consistency. These two components appear to balance reward for matching a specific instance of a behaviour versus that behaviour in general. Furthermore, we focus here on a particularly challenging form of this problem where only a single demonstration is provided for a given task the one-shot learning setting. We demonstrate our approach on humanoid agents in both 2D with 10 degrees of freedom (DoF) and 3D with 38 DoF.""","""The main concern raised by reviewers is limited novelty, poor presentation, and limited experiments. All the reviewers appreciate the difficulty and importance of the problem. The rebuttal helped clarify novelty, but the other concerns remain."""
994,"""Enforcing Physical Constraints in Neural Neural Networks through Differentiable PDE Layer""","['PDE', 'Hard Constraints', 'Turbulence', 'Super-Resolution', 'Spectral Methods']","""Recent studies at the intersection of physics and deep learning have illustrated successes in the application of deep neural networks to partially or fully replace costly physics simulations. Enforcing physical constraints to solutions generated by neural networks remains a challenge, yet it is essential to the accuracy and trustworthiness of such model predictions. Many systems in the physical sciences are governed by Partial Differential Equations (PDEs). Enforcing these as hard constraints, we show, are inefficient in conventional frameworks due to the high dimensionality of the generated fields. To this end, we propose the use of a novel differentiable spectral projection layer for neural networks that efficiently enforces spatial PDE constraints using spectral methods, yet is fully differentiable, allowing for its use as a layer in neural networks that supports end-to-end training. We show that its computational cost is cheaper than a regular convolution layer. We apply it to an important class of physical systems incompressible turbulent flows, where the divergence-free PDE constraint is required. We train a 3D Conditional Generative Adversarial Network (CGAN) for turbulent flow super-resolution efficiently, whilst guaranteeing the spatial PDE constraint of zero divergence. Furthermore, our empirical results show that the model produces realistic flow fields with more accurate flow statistics when trained with hard constraints imposed via the proposed novel differentiable spectral projection layer, as compared to soft constrained and unconstrained counterparts.""","""This paper introduces an FFT-based loss function to enforce physical constraints in a CNN-based PDE solver. The proposed idea seems sensible, but the reviewers agreed that not enough attention was paid to baseline alternatives, and that a single example problem was not enough to understand the pros and cons of this method."""
995,"""Music Source Separation in the Waveform Domain""","['source separation', 'audio synthesis', 'deep learning']","""Source separation for music is the task of isolating contributions, or stems, from different instruments recorded individually and arranged together to form a song.Such components include voice, bass, drums and any other accompaniments. While end-to-end models that directly generate the waveform are state-of-the-art in many audio synthesis problems, the best multi-instrument source separation models generate masks on the magnitude spectrum and achieve performances far above current end-to-end, waveform-to-waveform models. We present an in-depth analysis of a new architecture, which we will refer to as Demucs, based on a (transposed) convolutional autoencoder, with a bidirectional LSTM at the bottleneck layer and skip-connections as in U-Networks (Ronneberger et al., 2015). Compared to the state-of-the-art waveform-to-waveform model, Wave-U-Net (Stoller et al., 2018), the main features of our approach in addition of the bi-LSTM are the use of trans-posed convolution layers instead of upsampling-convolution blocks, the use of gated linear units, exponentially growing the number of channels with depth and a new careful initialization of the weights. Results on the MusDB dataset show that our architecture achieves a signal-to-distortion ratio (SDR) nearly 2.2 points higher than the best waveform-to-waveform competitor (from 3.2 to 5.4 SDR). This makes our model match the state-of-the-art performances on this dataset, bridging the performance gap between models that operate on the spectrogram and end-to-end approaches.""","""The paper proposed a waveform-to-waveform music source separation system. Experimental justification shows the proposed model achieved the best SDR among all the existing waveform-to-waveform models, and obtained similar performance to spectrogram based ones. The paper is clearly written and the experimental evaluation and ablation study are thorough. But the main concern is the limited novelty, it is an improvement over the existing Wave-U-Net, it added some changes to the existing model architecture for better modeling the waveform data and compared masking vs. synthesis for music source separation. """
996,"""Convolutional Bipartite Attractor Networks""","['attractor network', 'recurrent network', 'energy function', 'convolutional network', 'image completion', 'super-resolution']","""In human perception and cognition, a fundamental operation that brains perform is interpretation: constructing coherent neural states from noisy, incomplete, and intrinsically ambiguous evidence. The problem of interpretation is well matched to an early and often overlooked architecture, the attractor network---a recurrent neural net that performs constraint satisfaction, imputation of missing features, and clean up of noisy data via energy minimization dynamics. We revisit attractor nets in light of modern deep learning methods and propose a convolutional bipartite architecture with a novel training loss, activation function, and connectivity constraints. We tackle larger problems than have been previously explored with attractor nets and demonstrate their potential for image completion and super-resolution. We argue that this architecture is better motivated than ever-deeper feedforward models and is a viable alternative to more costly sampling-based generative methods on a range of supervised and unsupervised tasks.""","""This paper proposes to reintroduce bipartite attractor networks and update them using ideas from modern deep net architectures. After some discussions, all three reviewers felt that the paper did not meet the ICLR bar, in part because of an insufficiency of quantitative results, and in part because the extension was considered pretty straightforward and the results unsurprising, and hence it did not meet the novelty bar. I therefore recommend rejection. """
997,"""Understanding the functional and structural differences across excitatory and inhibitory neurons""",['Neuroscience'],"""One of the most fundamental organizational principles of the brain is the separation of excitatory (E) and inhibitory (I) neurons. In addition to their opposing effects on post-synaptic neurons, E and I cells tend to differ in their selectivity and connectivity. Although many such differences have been characterized experimentally, it is not clear why they exist in the first place. We studied this question in deep networks equipped with E and I cells. We found that salient distinctions between E and I neurons emerge across various deep convolutional recurrent networks trained to perform standard object classification tasks. We explored the necessary conditions for the networks to develop distinct selectivity and connectivity across cell types. We found that neurons that project to higher-order areas will have greater stimulus selectivity, regardless of whether they are excitatory or not. Sparser connectivity is required for higher selectivity, but only when the recurrent connections are excitatory. These findings demonstrate that the functional and structural differences observed across E and I neurons are not independent, and can be explained using a smaller number of factors.""","""This paper explores the role of excitatory and inhibitory neurons, and how their properties might differ based on simulations. A few issues were raised during the review period, and I commend the authors for stepping up to address these comments and run additional experiments. It seems, though, that the reviewer's worries were born out in the results of the additional experiments: ""1. The object classification task is not really relevant to elicit the observed behavior and 2. Inhibitory neurons are not essential (at least when training with batch norm)."" I hope the authors can make improvements in light of these observations, and discuss their implications in a future version of this paper. """
998,"""Behaviour Suite for Reinforcement Learning""","['reinforcement learning', 'benchmark', 'core issues', 'scalability', 'reproducibility']","""This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short. bsuite is a collection of carefully-designed experiments that investigate core capabilities of reinforcement learning (RL) agents with two objectives. First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms. Second, to study agent behaviour through their performance on these shared benchmarks. To complement this effort, we open source this http URL, which automates evaluation and analysis of any agent on bsuite. This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms. Our code is Python, and easy to use within existing projects. We include examples with OpenAI Baselines, Dopamine as well as new reference implementations. Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite from a committee of prominent researchers.""","""This paper proposes a platform for benchmarking and evaluating reinforcement learning algorithms. While reviewers had some concerns about whether such a tool was necessary given existing tools, reviewers who interacted with the tool found it easy to use and useful. Making such tools is often an engineering task and rarely aligned with typical research value systems, despite potentially acting as a public good. The success or failure of similar tools rely on community acceptance and it is my belief that this tool surpasses the bar to be promoted to the community at a top tier venue. """
999,"""Regularizing Trajectories to Mitigate Catastrophic Forgetting""","['Continual Learning', 'Regularization', 'Adaptation', 'Natural Gradient']","""Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective. However in most practical optimization scenarios with noisy data and/or gradients, it is possible that stochastic gradient descent can inadvertently change critical parameters. In this paper, we argue for the importance of regularizing optimization trajectories directly. We derive a new co-natural gradient update rule for continual learning whereby the new task gradients are preconditioned with the empirical Fisher information of previously learnt tasks. We show that using the co-natural gradient systematically reduces forgetting in continual learning. Moreover, it helps combat overfitting when learning a new task in a low resource scenario.""","""The submission proposes a 'co-natural' gradient update rule to precondition the optimization trajectory using a Fisher information estimate acquired from previous experience. This results in reduced sensitivity and forgetting when new tasks are learned. The reviews were mixed on this paper, and unfortunately not all reviewers had enough expertise in the field. After reading the paper carefully, I believe that the paper has significance and relevance to the field of continual learning, however it will benefit from more careful positioning with respect to other work as well as more empirical support. The application to the low-data-regime is interesting and could be expanded and refined in a future submission. The recommendation is for rejection."""
1000,"""Implicit -Jeffreys Autoencoders: Taking the Best of Both Worlds""","['Variational Inference', 'Generative Adversarial Networks']","""We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN). It is known that GAN can produce very realistic samples while VAE does not suffer from mode collapsing problem. Our model optimizes -Jeffreys divergence between the model distribution and the true data distribution. We show that it takes the best properties of VAE and GAN objectives. It consists of two parts. One of these parts can be optimized by using the standard adversarial training, and the second one is the very objective of the VAE model. However, the straightforward way of substituting the VAE loss does not work well if we use an explicit likelihood such as Gaussian or Laplace which have limited flexibility in high dimensions and are unnatural for modelling images in the space of pixels. To tackle this problem we propose a novel approach to train the VAE model with an implicit likelihood by an adversarially trained discriminator. In an extensive set of experiments on CIFAR-10 and TinyImagent datasets, we show that our model achieves the state-of-the-art generation and reconstruction quality and demonstrate how we can balance between mode-seeking and mode-covering behaviour of our model by adjusting the weight in our objective. ""","""The paper received Weak Reject scores from all three reviewers. The AC has read the reviews and lengthy discussions and examined the paper. AC feels that there is a consensus that the paper does not quite meet the acceptance threshold and thus cannot be accepted. Hopefully the authors can use the feedback to improve their paper and resubmit to another venue."""
1001,"""iSparse: Output Informed Sparsification of Neural Networks""","['dropout', 'dropconnect', 'sparsification', 'deep learning', 'neural network']","""Deep neural networks have demonstrated unprecedented success in various knowledge management applications. However, the networks created are often very complex, with large numbers of trainable edges which require extensive computational resources. We note that many successful networks nevertheless often contain large numbers of redundant edges. Moreover, many of these edges may have negligible contributions towards the overall network performance. In this paper, we propose a novel iSparse framework and experimentally show, that we can sparsify the network, by 30-50%, without impacting the network performance. iSparse leverages a novel edge significance score, E, to determine the importance of an edge with respect to the final network output. Furthermore, iSparse can be applied both while training a model or on top of a pre-trained model, making it a retraining-free approach - leading to a minimal computational overhead. Comparisons of iSparse against PFEC, NISP, DropConnect, and Retraining-Free on benchmark datasets show that iSparse leads to effective network sparsifications.""","""Thank you very much for your feedback to the reviewers, which helped us a lot to better understand your paper. However, the paper is still premature to be accepted to ICLR2020. We hope that the detailed reviewers' comments help you improve your paper for potential future submission. """
1002,"""Proactive Sequence Generator via Knowledge Acquisition""","['neural machine translation', 'knowledge distillation', 'exposure bias', 'reinforcement learning']","""Sequence-to-sequence models such as transformers, which are now being used in a wide variety of NLP tasks, typically need to have very high capacity in order to perform well. Unfortunately, in production, memory size and inference speed are all strictly constrained. To address this problem, Knowledge Distillation (KD), a technique to train small models to mimic larger pre-trained models, has drawn lots of attention. The KD approach basically attempts to maximize recall, i.e., ranking Top-ktokens in teacher models as higher as possible, however, whereas precision is more important for sequence generation because of exposure bias. Motivated by this, we develop Knowledge Acquisition (KA) where student models receive log q(y_t|y_{ 2$ we obtain a quadratic saving in memory. We empirically validate our results, demonstrating also our improvements in practice. ""","""This paper theoretically analyzes the use of an oracle to predict various quantities in data stream models. Building upon Hsu et al., (2019), the overriding goal is to examine the degree to which such an oracle is can provide memory and time improvements across broad streaming regimes. In doing so, optimal bounds are derived in conjunction with a heavy hitter oracle. Although the rebuttal and discussion period did not lead to a consensus in the scoring of this paper, two reviewers were highly supportive. However, the primary criticism from the lone dissenting reviewer was based on the high-level presentation and motivation, and in particular, the impression that the paper read more like a STOC theory paper. In this regard though, my belief is that the authors can easily tailor a revision to increase the accessibility to a wider ICLR audience."""
1443,"""Automated Relational Meta-learning""","['meta-learning', 'task heterogeneity', 'meta-knowledge graph']","""In order to efficiently learn with small amount of data on new tasks, meta-learning transfers knowledge learned from previous tasks to the new ones. However, a critical challenge in meta-learning is the task heterogeneity which cannot be well handled by traditional globally shared meta-learning methods. In addition, current task-specific meta-learning methods may either suffer from hand-crafted structure design or lack the capability to capture complex relations between tasks. In this paper, motivated by the way of knowledge organization in knowledge bases, we propose an automated relational meta-learning (ARML) framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph. When a new task arrives, it can quickly find the most relevant structure and tailor the learned structure knowledge to the meta-learner. As a result, the proposed framework not only addresses the challenge of task heterogeneity by a learned meta-knowledge graph, but also increases the model interpretability. We conduct extensive experiments on 2D toy regression and few-shot image classification and the results demonstrate the superiority of ARML over state-of-the-art baselines.""","""This paper proposes to deal with task heterogeneity in meta-learning by extracting cross-task relations and constructing a meta-knowledge graph, which can then quickly adapt to new tasks. The authors present a comprehensive set of experiments, which show consistent performance gains over baseline methods, on a 2D regression task and a series of few-shot classification tasks. They further conducted some ablation studies and additional analyses/visualization to aid interpretation. Two of the reviewers were very positive, indicating that they found the paper well-written, motivated, novel, and thorough, assessments that I also share. The authors were very responsive to reviewer comments and implemented all actionable revisions, as far as I can see. The paper looks to be in great shape. Im therefore recommending acceptance. """
1444,"""Infinite-horizon Off-Policy Policy Evaluation with Multiple Behavior Policies""","['off-policy policy evaluation', 'multiple importance sampling', 'kernel method', 'variance reduction']","""We consider off-policy policy evaluation when the trajectory data are generated by multiple behavior policies. Recent work has shown the key role played by the state or state-action stationary distribution corrections in the infinite horizon context for off-policy policy evaluation. We propose estimated mixture policy (EMP), a novel class of partially policy-agnostic methods to accurately estimate those quantities. With careful analysis, we show that EMP gives rise to estimates with reduced variance for estimating the state stationary distribution correction while it also offers a useful induction bias for estimating the state-action stationary distribution correction. In extensive experiments with both continuous and discrete environments, we demonstrate that our algorithm offers significantly improved accuracy compared to the state-of-the-art methods.""","""The authors present a method to address off-policy policy evaluation in the infinite horizon case, when the available data comes from multiple unknown behavior policies. Their solution -- the estimated mixture policy -- combines recent ideas from both infinite horizon OPE and regression importance sampling, a recent importance sampling based method. At first, the reviewers were concerned about writing clarity, feasibility in the continuous case, and comparisons to contemporary methods like DualDICE. After the rebuttal period, the reviewers agreed that all the major issues had been addressed through clarifications, rewriting, code release, and additional empirical comparisons. Thus, I recommend to accept this paper."""
1445,"""Generating valid Euclidean distance matrices""","['euclidean distance matrices', 'wgan', 'point clouds', 'molecular structures']","""Generating point clouds, e.g., molecular structures, in arbitrary rotations, translations, and enumerations remains a challenging task. Meanwhile, neural networks utilizing symmetry invariant layers have been shown to be able to optimize their training objective in a data-efficient way. In this spirit, we present an architecture which allows to produce valid Euclidean distance matrices, which by construction are already invariant under rotation and translation of the described object. Motivated by the goal to generate molecular structures in Cartesian space, we use this architecture to construct a Wasserstein GAN utilizing a permutation invariant critic network. This makes it possible to generate molecular structures in a one-shot fashion by producing Euclidean distance matrices which have a three- dimensional embedding.""","""This paper proposes a parametrisation of Euclidean distance matrices amenable to be used within a differentiable generative model. The resulting model is used in a WGAN architecture and demonstrated empirically in the generation of molecular structures. Reviewers were positive about the motivation from a specific application area (generation of molecular structures). However, they raised some concerns about the actual significance of the approach. The AC shares these concerns; the methodology essentially amounts to constraining the output of a neural network to be symmetric and positive semidefinite, which is in turn equivalent to producing a non-negative diagonal matrix (corresponding to the eigenvalues). As a result, the AC recommends rejection, and encourages the authors to include simple baselines in the next iteration. """
1446,"""On unsupervised-supervised risk and one-class neural networks""","['unsupervised training', 'one-class models']","""Most unsupervised neural networks training methods concern generative models, deep clustering, pretraining or some form of representation learning. We rather deal in this work with unsupervised training of the final classification stage of a standard deep learning stack, with a focus on two types of methods: unsupervised-supervised risk approximations and one-class models. We derive a new analytical solution for the former and identify and analyze its similarity with the latter. We apply and validate the proposed approach on multiple experimental conditions, in particular on four challenging recent Natural Language Processing tasks as well as on an anomaly detection task, where it improves over state-of-the-art models.""","""This paper makes a connection between one-class neural networks and the unsupervised approximation of the binary classifier risk under the hinge loss. An important contribution of the paper is the algorithm to train a binary classifier without supervision by using class prior and the hypothesis that class conditional classifier scores have normal distribution. The technical contribution of the paper is novel and brings an increased understanding into one-class neural networks. The equations and the modeling present in the paper are sound and the paper is well-written. However, in its current form, as pointed out by the reviewers, the experimental section is rather weak and can be substantially improved by adding extra experiments as suggested by reviewers #1, #2. Since its submission the paper has not yet been updated to incorporate these comments. Thus, for now, I recommend rejection of this paper, however on improvements I'm sure it can be a good contribution in other conferences."""
1447,"""Unaligned Image-to-Sequence Transformation with Loop Consistency""",[],"""We tackle the problem of modeling sequential visual phenomena. Given examples of a phenomena that can be divided into discrete time steps, we aim to take an input from any such time and realize this input at all other time steps in the sequence. Furthermore, we aim to do this \textit{without} ground-truth aligned sequences --- avoiding the difficulties needed for gathering aligned data. This generalizes the unpaired image-to-image problem from generating pairs to generating sequences. We extend cycle consistency to \textit{loop consistency} and alleviate difficulties associated with learning in the resulting long chains of computation. We show competitive results compared to existing image-to-image techniques when modeling several different data sets including the Earth's seasons and aging of human faces.""","""The main concern raised by the reviewers is limited experimental work, and there is no rebuttal."""
1448,"""Risk Averse Value Expansion for Sample Efficient and Robust Policy Learning""","['reinforcement learning', 'model-based RL', 'risk-sensitive', 'sample efficiency']","""Model-based Reinforcement Learning(RL) has shown great advantage in sample-efficiency, but suffers from poor asymptotic performance and high inference cost. A promising direction is to combine model-based reinforcement learning with model-free reinforcement learning, such as model-based value expansion(MVE). However, the previous methods do not take into account the stochastic character of the environment, thus still suffers from higher function approximation errors. As a result, they tend to fall behind the best model-free algorithms in some challenging scenarios. We propose a novel Hybrid-RL method, which is developed from MVE, namely the Risk Averse Value Expansion(RAVE). In the proposed method, we use an ensemble of probabilistic models for environment modeling to generate imaginative rollouts, based on which we further introduce the aversion of risks by seeking the lower confidence bound of the estimation. Experiments on different environments including MuJoCo and robo-school show that RAVE yields state-of-the-art performance. Also we found that it greatly prevented some catastrophic consequences such as falling down and thus reduced the variance of the rewards.""","""The authors propose to extend model-based/model-free hybrid methods (e.g., MVE, STEVE) to stochastic environments. They use an ensemble of probabilistic models to model the environment and use a lower confidence bound of the estimate to avoid risk. They found that their proposed method yields state-of-the-art performance over previous methods. The valid concerns by Reviewers 1 & 4 were not addressed by the authors and although the authors responded to Reviewer 3, they did not revise the paper to address their concerns. The ideas and results in this paper are interesting, but without addressing the valid concerns raised by reviewers, I cannot recommend acceptance."""
1449,"""Scoring-Aggregating-Planning: Learning task-agnostic priors from interactions and sparse rewards for zero-shot generalization""","['learning priors from exploration data', 'policy zero-shot generalization', 'reward shaping', 'model-based']","""Humans can learn task-agnostic priors from interactive experience and utilize the priors for novel tasks without any finetuning. In this paper, we propose Scoring-Aggregating-Planning (SAP), a framework that can learn task-agnostic semantics and dynamics priors from arbitrary quality interactions as well as the corresponding sparse rewards and then plan on unseen tasks in zero-shot condition. The framework finds a neural score function for local regional state and action pairs that can be aggregated to approximate the quality of a full trajectory; moreover, a dynamics model that is learned with self-supervision can be incorporated for planning. Many of previous works that leverage interactive data for policy learning either need massive on-policy environmental interactions or assume access to expert data while we can achieve a similar goal with pure off-policy imperfect data. Instantiating our framework results in a generalizable policy to unseen tasks. Experiments demonstrate that the proposed method can outperform baseline methods on a wide range of applications including gridworld, robotics tasks and video games.""","""The paper proposes an algorithm for zero-shot generalization in RL via learning a scoring a function from. The reviewers had mixed feelings, and many were not from the area. A shared theme was doubts about the significance of the experimental setting, and also the generality of the approach. As this is my field, I read the paper, and recommend rejection at this time. The proposed method is quite laborious and requires quite a bit of assumptions on the environments to work, as well as fine tuning parameters for each considered task (number of regions, etc). I also agree that the evaluation is not convincing -- stronger baselines need to be considered and the experiments to better address the zero-shot transfer aspect that the paper is motivated by. I encourage the authors to take the review feedback into account and submit a future version to another venue."""
1450,"""Measuring and Improving the Use of Graph Information in Graph Neural Networks""",[],"""Graph neural networks (GNNs) have been widely used for representation learning on graph data. However, there is limited understanding on how much performance GNNs actually gain from graph data. This paper introduces a context-surrounding GNN framework and proposes two smoothness metrics to measure the quantity and quality of information obtained from graph data. A new, improved GNN model, called CS-GNN, is then devised to improve the use of graph information based on the smoothness values of a graph. CS-GNN is shown to achieve better performance than existing methods in different types of real graphs. ""","""Two reviewers are positive about this paper while the other reviewer is negative. The low-scoring reviewer did not respond to discussions. I also read the paper and found it interesting. Thus an accept is recommended."""
1451,"""Blending Diverse Physical Priors with Neural Networks""","['Physics-based learning', 'Physics-aware learning']","""Rethinking physics in the era of deep learning is an increasingly important topic. This topic is special because, in addition to data, one can leverage a vast library of physical prior models (e.g. kinematics, fluid flow, etc) to perform more robust inference. The nascent sub-field of physics-based learning (PBL) studies this problem of blending neural networks with physical priors. While previous PBL algorithms have been applied successfully to specific tasks, it is hard to generalize existing PBL methods to a wide range of physics-based problems. Such generalization would require an architecture that can adapt to variations in the correctness of the physics, or in the quality of training data. No such architecture exists. In this paper, we aim to generalize PBL, by making a first attempt to bring neural architecture search (NAS) to the realm of PBL. We introduce a new method known as physics-based neural architecture search (PhysicsNAS) that is a top-performer across a diverse range of quality in the physical model and the dataset. ""","""This paper constitutes interesting progress on an important topic; the reviewers identify certain improvements and directions for future work, and I urge the authors to continue to develop refinements and extensions."""
1452,"""On the Need for Topology-Aware Generative Models for Manifold-Based Defenses""","['Manifold-based Defense', 'Robust Learning', 'Adversarial Attacks']","""ML algorithms or models, especially deep neural networks (DNNs), have shown significant promise in several areas. However, recently researchers have demonstrated that ML algorithms, especially DNNs, are vulnerable to adversarial examples (slightly perturbed samples that cause mis-classification). Existence of adversarial examples has hindered deployment of ML algorithms in safety-critical sectors, such as security. Several defenses for adversarial examples exist in the literature. One of the important classes of defenses are manifold-based defenses, where a sample is ""pulled back"" into the data manifold before classifying. These defenses rely on the manifold assumption (data lie in a manifold of lower dimension than the input space). These defenses use a generative model to approximate the input distribution. This paper asks the following question: do the generative models used in manifold-based defenses need to be topology-aware? Our paper suggests the answer is yes. We provide theoretical and empirical evidence to support our claim.""","""This paper studies the role of topology in designing adversarial defenses. Specifically , the authors study defense strategies that rely on the assumption that data lies on a low-dimensional manifold, and show theoretical and empirical evidence that such defenses need to build a topological understanding of the data. Reviewers were initially positive, but had some concerns pertaining to clarity and limited experimental setup. After a productive rebuttal phase, now reviewers are mostly in favor of acceptance, thanks to the improved readibility and clarity. Despite the small-scale experimental validation, ultimately both reviewers and AC conclude this paper is worthy of publication. """
1453,"""OvA-INN: Continual Learning with Invertible Neural Networks""","['Deep Learning', 'Continual Learning', 'Invertible Neural Networks']","""In the field of Continual Learning, the objective is to learn several tasks one after the other without access to the data from previous tasks. Several solutions have been proposed to tackle this problem but they usually assume that the user knows which of the tasks to perform at test time on a particular sample, or rely on small samples from previous data and most of them suffer of a substantial drop in accuracy when updated with batches of only one class at a time. In this article, we propose a new method, OvA-INN, which is able to learn one class at a time and without storing any of the previous data. To achieve this, for each class, we train a specific Invertible Neural Network to output the zero vector for its class. At test time, we can predict the class of a sample by identifying which network outputs the vector with the smallest norm. With this method, we show that we can take advantage of pretrained models by stacking an invertible network on top of a features extractor. This way, we are able to outperform state-of-the-art approaches that rely on features learning for the Continual Learning of MNIST and CIFAR-100 datasets. In our experiments, we are reaching 72% accuracy on CIFAR-100 after training our model one class at a time.""","""This paper is board-line but in the end below the standards for ICLR. Firstly this paper could use significant polishing. The text has significant grammar and style issues: incorrect words, phrases and tenses; incomplete sentences; entire sections of the paper containing only lists, etc. The paper is in need of significant editing. This of course is not enough to merit rejection, but there are concerns about the contribution of the new method, experiment details, and the topic of study. The results are reported from either a single run or unknown number of runs of the learning system, which is not acceptable even if the we suspect the variance is low. The proposed approach relies on pre-training a feature extractor which in many ways side-steps the forgetting/interference problem rather than what we really need: new algorithms that processes the training data in ways the mitigate interference by learning representations. In general the reviewers found it very difficult to access the fairness of the comparisons dues do differences between how different methods make use of stored data and pre-training. The reviewers highlighted the similarity between the propose approach and recent work in angle of generative modeling / out of distribution (OOD) detection which suggests that the proposed approach has limited utility (as detailed by R1) and that OOD baselines were missing. Finally, the CL problem formulation explored here, where task identifiers are available during training and data is i.i.d, is of limited utility. Its hard to imagine how approaches that learn individual networks for each task could scale to more realistic problem formulations. All reviewers agreed the paper's experiments were borderline and the paper has substantial issues. There are too many revisions to be done."""
1454,"""Neural Outlier Rejection for Self-Supervised Keypoint Learning""","['Self-Supervised Learning', 'Keypoint Detection', 'Outlier Rejection', 'Deep Learning']","""Identifying salient points in images is a crucial component for visual odometry, Structure-from-Motion or SLAM algorithms. Recently, several learned keypoint methods have demonstrated compelling performance on challenging benchmarks. However, generating consistent and accurate training data for interest-point detection in natural images still remains challenging, especially for human annotators. We introduce IO-Net (i.e. InlierOutlierNet), a novel proxy task for the self-supervision of keypoint detection, description and matching. By making the sampling of inlier-outlier sets from point-pair correspondences fully differentiable within the keypoint learning framework, we show that are able to simultaneously self-supervise keypoint description and improve keypoint matching. Second, we introduce KeyPointNet, a keypoint-network architecture that is especially amenable to robust keypoint detection and description. We design the network to allow local keypoint aggregation to avoid artifacts due to spatial discretizations commonly used for this task, and we improve fine-grained keypoint descriptor performance by taking advantage of efficient sub-pixel convolutions to upsample the descriptor feature-maps to a higher operating resolution. Through extensive experiments and ablative analysis, we show that the proposed self-supervised keypoint learning method greatly improves the quality of feature matching and homography estimation on challenging benchmarks over the state-of-the-art.""","""This paper proposes a solid (if somewhat incremental) improvement on an interesting and well-studied problem. I suggest accepting it."""
1455,"""Scalable Generative Models for Graphs with Graph Attention Mechanism""","['Graph Generative Model', 'Attention Mechanism']","""Graphs are ubiquitous real-world data structures, and generative models that approximate distributions over graphs and derive new samples from them have significant importance. Among the known challenges in graph generation tasks, scalability handling of large graphs and datasets is one of the most important for practical applications. Recently, an increasing number of graph generative models have been proposed and have demonstrated impressive results. However, scalability is still an unresolved problem due to the complex generation process or difficulty in training parallelization. In this paper, we first define scalability from three different perspectives: number of nodes, data, and node/edge labels. Then, we propose GRAM, a generative model for graphs that is scalable in all three contexts, especially in training. We aim to achieve scalability by employing a novel graph attention mechanism, formulating the likelihood of graphs in a simple and general manner. Also, we apply two techniques to reduce computational complexity. Furthermore, we construct a unified and non-domain-specific evaluation metric in node/edge-labeled graph generation tasks by combining a graph kernel and Maximum Mean Discrepancy. Our experiments on synthetic and real-world graphs demonstrated the scalability of our models and their superior performance compared with baseline methods.""","""The paper proposed an efficient way of generating graphs. Although the paper claims to propose simplified mechanism, the reviewers find that the generation task to be relatively very complex, and the use of certain module seems ad-hoc. Furthermore, the results on the new metric is at times inconsistent with other prior metrics. The paper can be improved by addressing those concerns concerns. """
1456,"""The fairness-accuracy landscape of neural classifiers""",[],"""That machine learning algorithms can demonstrate bias is well-documented by now. This work confronts the challenge of bias mitigation in feedforward fully-connected neural nets from the lens of causal inference and multiobjective optimisation. Regarding the former, a new causal notion of fairness is introduced that is particularly suited to giving a nuanced treatment of datasets collected under unfair practices. In particular, special attention is paid to subjects whose covariates could appear with substantial probability in either value of the sensitive attribute. Next, recognising that fairness and accuracy are competing objectives, the proposed methodology uses techniques from multiobjective optimisation to ascertain the fairness-accuracy landscape of a neural net classifier. Experimental results suggest that the proposed method produces neural net classifiers that distribute evenly across the Pareto front of the fairness-accuracy space and is more efficient at finding non-dominated points than an adversarial approach.""","""This manuscript investigates and characterizes the tradeoff between fairness and accuracy in neural network models. The primary empirical contribution is to investigate this tradeoff for a variety of datasets. The reviewers and AC agree that the problem studied is timely and interesting. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty of the results. IN particular, it is not clear that the idea of a fairness/performance tradeoff is a new one. In reviews and discussion, the reviewers also noted issues with clarity of the presentation. In the opinion of the AC, the manuscript is not appropriate for publication in its current state. """
1457,"""LocalGAN: Modeling Local Distributions for Adversarial Response Generation""","['neural response generation', 'adversarial learning', 'local distribution', 'energy-based distribution modeling']","""This paper presents a new methodology for modeling the local semantic distribution of responses to a given query in the human-conversation corpus, and on this basis, explores a specified adversarial learning mechanism for training Neural Response Generation (NRG) models to build conversational agents. The proposed mechanism aims to address the training instability problem and improve the quality of generated results of Generative Adversarial Nets (GAN) in their utilizations in the response generation scenario. Our investigation begins with the thorough discussions upon the objective function brought by general GAN architectures to NRG models, and the training instability problem is proved to be ascribed to the special local distributions of conversational corpora. Consequently, an energy function is employed to estimate the status of a local area restricted by the query and its responses in the semantic space, and the mathematical approximation of this energy-based distribution is finally found. Building on this foundation, a local distribution oriented objective is proposed and combined with the original objective, working as a hybrid loss for the adversarial training of response generation models, named as LocalGAN. Our experimental results demonstrate that the reasonable local distribution modeling of the query-response corpus is of great importance to adversarial NRG, and our proposed LocalGAN is promising for improving both the training stability and the quality of generated results. ""","""This paper tackles neural response generation with Generative Adversarial Nets (GANs), and to address the training instability problem with GANs, it proposes a local distribution oriented objective. The new objective is combined with the original objective, and used as a hybrid loss for the adversarial training of response generation models, named as LocalGAN. Authors responded with concerns about reviewer 3's comments, and I agree with the authors explanation, so I am disregarding review 3, and am relying on my read through of the latest version of the paper. The other reviewers think the paper has good contributions, however they are not convinced about the clarity of the presentations and made many suggestions (even after the responses from the authors). I suggest a reject, as the paper should include a clear presentation of the approach and technical formulation (as also suggested by the reviewers)."""
1458,"""Robust Reinforcement Learning for Continuous Control with Model Misspecification""","['reinforcement learning', 'robustness']","""We provide a framework for incorporating robustness -- to perturbations in the transition dynamics which we refer to as model misspecification -- into continuous control Reinforcement Learning (RL) algorithms. We specifically focus on incorporating robustness into a state-of-the-art continuous control RL algorithm called Maximum a-posteriori Policy Optimization (MPO). We achieve this by learning a policy that optimizes for a worst case, entropy-regularized, expected return objective and derive a corresponding robust entropy-regularized Bellman contraction operator. In addition, we introduce a less conservative, soft-robust, entropy-regularized objective with a corresponding Bellman operator. We show that both, robust and soft-robust policies, outperform their non-robust counterparts in nine Mujoco domains with environment perturbations. In addition, we show improved robust performance on a challenging, simulated, dexterous robotic hand. Finally, we present multiple investigative experiments that provide a deeper insight into the robustness framework; including an adaptation to another continuous control RL algorithm. Performance videos can be found online at pseudo-url.""","""The authors provide a framework for improving robustness (if the model of the dynamics is perturbed) into the RL methods, and provide nice experimental results, especially in the updated version. I am happy to see that the discussion for this paper went in a totally positive and constructive way which lead to a) constructive criticism of the reviewers b) significant changes in the paper c) corresponding better scores by the reviewer. Good work and obvious accept."""
1459,"""Neural Approximation of an Auto-Regressive Process through Confidence Guided Sampling""","['Neural approximation method', 'Auto-regressive model', 'Sequential sample generation']","""We propose a generic confidence-based approximation that can be plugged in and simplify an auto-regressive generation process with a proved convergence. We first assume that the priors of future samples can be generated in an independently and identically distributed (i.i.d.) manner using an efficient predictor. Given the past samples and future priors, the mother AR model can post-process the priors while the accompanied confidence predictor decides whether the current sample needs a resampling or not. Thanks to the i.i.d. assumption, the post-processing can update each sample in a parallel way, which remarkably accelerates the mother model. Our experiments on different data domains including sequences and images show that the proposed method can successfully capture the complex structures of the data and generate the meaningful future samples with lower computational cost while preserving the sequential relationship of the data.}""","""The paper presents a technique for approximately sampling from autoregressive models using something like a a proposal distribution and a critic. The idea is to chunk the output into blocks and, for each block, predict each element in the block independently from a proposal network, ask a critic network whether the block looks sensible and, if not, resampling the block using the autoregressive model itself. The idea in the paper is interesting, but the paper would benefit from - a better relation to existing methods - a better experimental section, which details the hyper-parameters of the algorithm (and how they were chosen) and which provides error bars on all plots (and tables)"""
1460,"""Off-Policy Actor-Critic with Shared Experience Replay""","['Reinforcement Learning', 'Off-Policy Learning', 'Experience Replay']","""We investigate the combination of actor-critic reinforcement learning algorithms with uniform large-scale experience replay and propose solutions for two challenges: (a) efficient actor-critic learning with experience replay (b) stability of very off-policy learning. We employ those insights to accelerate hyper-parameter sweeps in which all participating agents run concurrently and share their experience via a common replay module. To this end we analyze the bias-variance tradeoffs in V-trace, a form of importance sampling for actor-critic methods. Based on our analysis, we then argue for mixing experience sampled from replay with on-policy experience, and propose a new trust region scheme that scales effectively to data distributions where V-trace becomes unstable. We provide extensive empirical validation of the proposed solution. We further show the benefits of this setup by demonstrating state-of-the-art data efficiency on Atari among agents trained up until 200M environment frames.""","""The paper presents an off-policy actor-critic scheme where i) a buffer storing the trajectories from several agents is used (off-policy replay) and mixed with the on-line data from the current agent; ii) a trust-region estimator is used to select trajectories that are sufficiently close to the current policy (e.g. in the sense of a KL divergence). As noted by the reviews, the results are impressive. Quite a few concerns still remain: * After Fig. 1 (revised version), what matters is the shared replay, where the agent actually benefits from the experience of 9 other different agents; this implies that the population based training observes 9x more frames than the no-shared version, and the question whether the comparison is fair is raised; * the trust-region estimator might reduce the data seen by the agent, leading it to overfit the past (Fig. 3, left); * the influence of the pseudo-formula hyper-parameter (the trust threshold) is not discussed. In standard trust region-based optimization methods, the trust region is gradually narrowed, suggesting that parameter pseudo-formula here should evolve along time. """
1461,"""Machine Truth Serum""","['Ensemble method', 'Classification', 'Machine Truth Serum', 'Minority', 'Machine Learning']","""Wisdom of the crowd revealed a striking fact that the majority answer from a crowd is often more accurate than any individual expert. We observed the same story in machine learning - ensemble methods leverage this idea to combine multiple learning algorithms to obtain better classification performance. Among many popular examples is the celebrated Random Forest, which applies the majority voting rule in aggregating different decision trees to make the final prediction. Nonetheless, these aggregation rules would fail when the majority is more likely to be wrong. In this paper, we extend the idea proposed in Bayesian Truth Serum that ""a surprisingly more popular answer is more likely the true answer"" to classification problems. The challenge for us is to define or detect when an answer should be considered as being ""surprising"". We present two machine learning aided methods which aim to reveal the truth when it is minority instead of majority who has the true answer. Our experiments over real-world datasets show that better classification performance can be obtained compared to always trusting the majority voting. Our proposed methods also outperform popular ensemble algorithms. Our approach can be generically applied as a subroutine in ensemble methods to replace majority voting rule. ""","""This paper proposes a family of new methods, based on Bayesian Truth Serum, that are meant to build better ensembles from a fixed set of constituent models. Reviewers found the problem and the general research direction interesting, but none of the three of them were convinced that the proposed methods are effective in the ways that the paper claims, even after some discussion. It seems as though this paper is dealing with a problem that doesn't generally lend itself to large improvements in results, but reviewers weren't satisfied that the small observed improvements were real, and urged the authors to explore additional settings and baselines, and to offer a full significance test."""
1462,"""Demystifying Graph Neural Network Via Graph Filter Assessment""","['Graph Neural Networks', 'Graph convolutional filter analysis', 'representational power']","""Graph Neural Networks (GNNs) have received tremendous attention recently due to their power in handling graph data for different downstream tasks across different application domains. The key of GNN is its graph convolutional filters, and recently various kinds of filters are designed. However, there still lacks in-depth analysis on (1) Whether there exists a best filter that can perform best on all graph data; (2) Which graph properties will influence the optimal choice of graph filter; (3) How to design appropriate filter adaptive to the graph data. In this paper, we focus on addressing the above three questions. We first propose a novel assessment tool to evaluate the effectiveness of graph convolutional filters for a given graph. Using the assessment tool, we find out that there is no single filter as a `silver bullet' that perform the best on all possible graphs. In addition, different graph structure properties will influence the optimal graph convolutional filter's design choice. Based on these findings, we develop Adaptive Filter Graph Neural Network (AFGNN), a simple but powerful model that can adaptively learn task-specific filter. For a given graph, it leverages graph filter assessment as regularization and learns to combine from a set of base filters. Experiments on both synthetic and real-world benchmark datasets demonstrate that our proposed model can indeed learn an appropriate filter and perform well on graph tasks.""","""The paper investigates graph convolutional filters, and proposes an adaptation of the Fisher score to assess the quality of a convolutional filter. Formally, the defined Graph Filter Discriminant Score assesses how the filter improves the Fisher score attached to a pair of classes (considering the nodes in each class, and their embedding through the filter and the graph structure, as propositional samples), taking into account the class imbalance. An analysis is conducted on synthetic graphs to assess how the hyper-parameters (order, normalization strategy) of the filter rule the GFD score depending on the graph and class features. As could have been expected there no single killer filter. A finite set of filters, called base filters, being defined by varying the above hyper-parameters, the search space is that of a linear combination of the base filters in each layer. Three losses are considered: with and without graph filter discriminant score, and alternatively optimizing the cross-entropy loss and the GFD; this last option is the best one in the experiments. As noted by the reviewers and other public comments, the idea of incorporating LDA ideas into GNN is nice and elegant. The reservations of the reviewers are mostly related to the experimental validation: of course getting the best score on each dataset is not expected; but the set of considered problems is too limited and their diversity is limited too (as demonstrated by the very nice Fig. 5). The area chair thus encourages the authors to pursue this very promising line of research and hopes to see a revised version backed up with more experimental evidence. """
1463,"""Optimistic Exploration even with a Pessimistic Initialisation""","['Reinforcement Learning', 'Exploration', 'Optimistic Initialisation']","""Optimistic initialisation is an effective strategy for efficient exploration in reinforcement learning (RL). In the tabular case, all provably efficient model-free algorithms rely on it. However, model-free deep RL algorithms do not use optimistic initialisation despite taking inspiration from these provably efficient tabular algorithms. In particular, in scenarios with only positive rewards, Q-values are initialised at their lowest possible values due to commonly used network initialisation schemes, a pessimistic initialisation. Merely initialising the network to output optimistic Q-values is not enough, since we cannot ensure that they remain optimistic for novel state-action pairs, which is crucial for exploration. We propose a simple count-based augmentation to pessimistically initialised Q-values that separates the source of optimism from the neural network. We show that this scheme is provably efficient in the tabular setting and extend it to the deep RL setting. Our algorithm, Optimistic Pessimistically Initialised Q-Learning (OPIQ), augments the Q-value estimates of a DQN-based agent with count-derived bonuses to ensure optimism during both action selection and bootstrapping. We show that OPIQ outperforms non-optimistic DQN variants that utilise a pseudocount-based intrinsic motivation in hard exploration tasks, and that it predicts optimistic estimates for novel state-action pairs.""","""The paper propose a scheme to enable optimistic initialization in the deep RL setting, and shows that it's helpful. The reviewers agreed that the paper is well-motivated and executed, but had some minor reservations (e.g. about the proposal scaling in practice). In an example of a successful rebuttal two of the reviewers raised their scores after the authors clarified the paper and added an experiment on Montezuma's revenge. The paper proposes a useful, simple and practical idea on the bridge between tabular and deep RL, and I gladly recommend acceptance."""
1464,"""EINS: Long Short-Term Memory with Extrapolated Input Network Simplification""","['recurrent neural network', 'RNN', 'long short-term memory', 'LSTM', 'gated recurrent network', 'GRU', 'dynamical mathematics', 'interpretability']","""This paper contrasts the two canonical recurrent neural networks (RNNs) of long short-term memory (LSTM) and gated recurrent unit (GRU) to propose our novel light-weight RNN of Extrapolated Input for Network Simplification (EINS). We treat LSTMs and GRUs as differential equations, and our analysis highlights several auxiliary components in the standard LSTM design that are secondary in importance. Guided by these insights, we present a design that abandons the LSTM redundancies, thereby introducing EINS. We test EINS against the LSTM over a carefully chosen range of tasks from language modelling and medical data imputation-prediction through a sentence-level variational autoencoder and image generation to learning to learn to optimise another neural network. Despite having both a simpler design and fewer parameters, this simplification either performs comparably, or better, than the LSTM in each task.""","""The paper proposed new version of LSTM which is claimed to abandon the redundancies in LSTM. It is weak both in theory and experiments. All reviewers gave clear rejects and the AC agree."""
1465,"""Neural Design of Contests and All-Pay Auctions using Multi-Agent Simulation""","['Auctions', 'Mechanism Design', 'Multi-Agent', 'Fictitious Play']","""We propose a multi-agent learning approach for designing crowdsourcing contests and all-pay auctions. Prizes in contests incentivise contestants to expend effort on their entries, with different prize allocations resulting in different incentives and bidding behaviors. In contrast to auctions designed manually by economists, our method searches the possible design space using a simulation of the multi-agent learning process, and can thus handle settings where a game-theoretic equilibrium analysis is not tractable. Our method simulates agent learning in contests and evaluates the utility of the resulting outcome for the auctioneer. Given a large contest design space, we assess through simulation many possible contest designs within the space, and fit a neural network to predict outcomes for previously untested contest designs. Finally, we apply mirror descent to optimize the design so as to achieve more desirable outcomes. Our empirical analysis shows our approach closely matches the optimal outcomes in settings where the equilibrium is known, and can produce high quality designs in settings where the equilibrium strategies are not solvable analytically. ""","""This paper demonstrates a framework for optimizing designs in auction/contest problems. The approach relies on considering a multi-agent learning process and then simulating it. To a large degree there is agreement among reviewers that this approach is sensible and sound, however lacks substantial novelty. The authors provided a rebuttal which clarified the aspects that they consider novel, however the reviewers remained mostly unconvinced. Furthermore, it would help if the improvement over past approaches is demonstrated in a more convincing way, for example with increased scope experiments that also involve richer analysis. """
1466,"""Effect of top-down connections in Hierarchical Sparse Coding""","['Hierarchical Sparse Coding', 'Convolutional Sparse Coding', 'Top-down connections']","""Hierarchical Sparse Coding (HSC) is a powerful model to efficiently represent multi-dimensional, structured data such as images. The simplest solution to solve this computationally hard problem is to decompose it into independent layerwise subproblems. However, neuroscientific evidence would suggest inter-connecting these subproblems as in the Predictive Coding (PC) theory, which adds top-down connections between consecutive layers. In this study, a new model called Sparse Deep Predictive Coding (SDPC) is introduced to assess the impact of this inter-layer feedback connection. In particular, the SDPC is compared with a Hierarchical Lasso (Hi-La) network made out of a sequence of Lasso layers. A 2-layered SDPC and a Hi-La networks are trained on 3 different databases and with different sparsity parameters on each layer. First, we show that the overall prediction error generated by SDPC is lower thanks to the feedback mechanism as it transfers prediction error between layers. Second, we demonstrate that the inference stage of the SDPC is faster to converge than for the Hi-La model. Third, we show that the SDPC also accelerates the learning process. Finally, the qualitative analysis of both models dictionaries, supported by their activation probability, show that the SDPC features are more generic and informative.""","""This paper introduces a new architecture for sparse coding. The reviewers gave long and constructive feedback that the authors in turn responded at length on. There is consensus among the reviewers that despite contributions this paper in its current form is not ready for acceptance. Rejection is therefore recommended with encouragement to make updated version for next conference. """
1467,"""Scalable Differentially Private Data Generation via Private Aggregation of Teacher Ensembles""",[],"""We present a novel approach named G-PATE for training differentially private data generator. The generator can be used to produce synthetic datasets with strong privacy guarantee while preserving high data utility. Our approach leverages generative adversarial nets to generate data and exploits the PATE (Private Aggregation of Teacher Ensembles) framework to protect data privacy. Compared to existing methods, our approach significantly improves the use of privacy budget. This is possible since we only need to ensure differential privacy for the generator, which is the part of the model that actually needs to be published for private data generation. To achieve this, we connect a student generator with an ensemble of teacher discriminators and propose a private gradient aggregation mechanism to ensure differential privacy on all the information that flows from the teacher discriminators to the student generator. Theoretically, we prove that our algorithm ensures differential privacy for the generator. Empirically, we provide thorough experiments to demonstrate the superiority of our method over prior work on both image and non-image datasets.""","""This paper addresses the problem of differential private data generator. The paper presents a novel approach called G_PATE which builds on the existing PATE framework. The main contribution is in using a student generator with an ensemble of teacher discriminators and in proposing a new private gradient aggregation mechanism which ensures differential privacy in the information flow from discriminator to generator. Although the idea is interesting, there are significant concerns raised by the reviewers about the experiments and analysis done in the paper which seem to be valid and have not been addressed yet in the final revision. I believe upon making significant changes to the paper, this could be a good contribution. Thus, as of now, I am recommending a Rejection."""
1468,"""Testing For Typicality with Respect to an Ensemble of Learned Distributions""","['anomaly detection', 'density estimation', 'generative models']","""Good methods of performing anomaly detection on high-dimensional data sets are needed, since algorithms which are trained on data are only expected to perform well on data that is similar to the training data. There are theoretical results on the ability to detect if a population of data is likely to come from a known base distribution, which is known as the goodness-of-fit problem, but those results require knowing a model of the base distribution. The ability to correctly reject anomalous data hinges on the accuracy of the model of the base distribution. For high dimensional data, learning an accurate-enough model of the base distribution such that anomaly detection works reliably is very challenging, as many researchers have noted in recent years. Existing methods for the goodness-of-fit problem do not ac- count for the fact that a model of the base distribution is learned. To address that gap, we offer a theoretically motivated approach to account for the density learning procedure. In particular, we propose training an ensemble of density models, considering data to be anomalous if the data is anomalous with respect to any member of the ensemble. We provide a theoretical justification for this approach, proving first that a test on typicality is a valid approach to the goodness-of-fit problem, and then proving that for a correctly constructed ensemble of models, the intersection of typical sets of the models lies in the interior of the typical set of the base distribution. We present our method in the context of an example on synthetic data in which the effects we consider can easily be seen.""","""The paper proposes a new method for testing whether new data comes from the same distribution as training data without having an a-priori density model of the training data. This is done by looking at the intersection of typical sets of an ensemble of learned models. On the theoretical side, the paper was received positively by all reviewers. The theoretical results were deemed strong, and the ideas in the paper were considered novel. The problem setting was considered relevant, and seen as a good proposal to deal with the shortcoming of models on out of distribution data. However, the lack of empirical results on at least somewhat realistic datasets (e.g. MNIST) was commented on by all reviewers. The authors only present a toy experiment. The authors have explained their decision, but I agree with R1 that it would be appropriate in such situations to present the toy experiment next to a more realistic dataset. This also means that the effectiveness of the proposed method in real settings is as of yet unclear. Although the provided toy example was considered clear and illuminating, the clarity of the text could still be improved. Although the reviewers had a spread in their final score, I think they would all agree that the direction this paper takes is very exciting, but that the current version of the paper is somewhat premature. Thus, unfortunately, I have to recommend rejection at this point. """
1469,"""Learning Latent Representations for Inverse Dynamics using Generalized Experiences""","['deep reinforcement learning', 'continuous control', 'inverse dynamics model']","""Many practical robot locomotion tasks require agents to use control policies that can be parameterized by goals. Popular deep reinforcement learning approaches in this direction involve learning goal-conditioned policies or value functions, or Inverse Dynamics Models (IDMs). IDMs map an agents current state and desired goal to the required actions. We show that the key to achieving good performance with IDMs lies in learning the information shared between equivalent experiences, so that they can be generalized to unseen scenarios. We design a training process that guides the learning of latent representations to encode this shared information. Using a limited number of environment interactions, our agent is able to efficiently navigate to arbitrary points in the goal space. We demonstrate the effectiveness of our approach in high-dimensional locomotion environments such as the Mujoco Ant, PyBullet Humanoid, and PyBullet Minitaur. We provide quantitative and qualitative results to show that our method clearly outperforms competing baseline approaches.""","""Solid, but not novel enough to merit publication. The reviewers agree on rejection, and despite authors' adaptation, the paper requires more work and broader experimentation for publication."""
1470,"""Score and Lyrics-Free Singing Voice Generation""","['singing voice generation', 'GAN', 'generative adversarial network']","""Generative models for singing voice have been mostly concerned with the task of ""singing voice synthesis,"" i.e., to produce singing voice waveforms given musical scores and text lyrics. In this work, we explore a novel yet challenging alternative: singing voice generation without pre-assigned scores and lyrics, in both training and inference time. In particular, we experiment with three different schemes: 1) free singer, where the model generates singing voices without taking any conditions; 2) accompanied singer, where the model generates singing voices over a waveform of instrumental music; and 3) solo singer, where the model improvises a chord sequence first and then uses that to generate voices. We outline the associated challenges and propose a pipeline to tackle these new tasks. This involves the development of source separation and transcription models for data preparation, adversarial networks for audio generation, and customized metrics for evaluation.""","""Main content: Blind review #1 summarizes it well: his paper claims to be the first to tackle unconditional singing voice generation. It is noted that previous singing voice generation approaches leverage explicit pitch information (either of an accompaniment via a score or for the voice itself), and/or specified lyrics the voice should sing. The authors first create their own dataset of singing voice data with accompaniments, then use a GAN to generate singing voice waveforms in three different settings: 1) Free singer - only noise as input, completely unconditional singing sampling 2) Accompanied singer - Providing the accompaniment *waveform* (not symbolic data like a score - the model needs to learn how to transcribe to use this information) as a condition for the singing voice 3) Solo singer - The same setting as 1 but the model first generates an accompaniment then, from that, generates singing voice -- Discussion: The reviews generally point out that while a lot of new work has been done, this paper bites off too much at once: it tackles many different open problems, in a generative art domain where evaluation is subjective. -- Recommendation and justification: This paper is a weak reject, not because it is uninteresting or bad work, but because the ambitious scope is really too large for a single conference paper. In a more specialized conference like ISMIR, it would still have a good chance. The authors should break it down into conference sized chunks, and address more of the reviewer comments in each chunk."""
1471,"""Optimistic Adaptive Acceleration for Optimization""",[],"""This paper considers a new variant of AMSGrad called Optimistic-AMSGrad. AMSGrad is a popular adaptive gradient based optimization algorithm that is widely used in training deep neural networks. The new variant assumes that mini-batch gradients in consecutive iterations have some underlying structure, which makes the gradients sequentially predictable. By exploiting the predictability and some ideas from Optimistic Online learning, the proposed algorithm can accelerate the convergence and also enjoys a tighter regret bound. We evaluate Optimistic-AMSGrad and AMSGrad in terms of various performance measures (i.e., training loss, testing loss, and classification accuracy on training/testing data), which demonstrate that Optimistic-AMSGrad improves AMSGrad.""","""The paper introduces a variant of AMSGrad (""Optimistic-AMSGrad""), which integrates an estimate of the future gradient into the optimization problem. While the method is interesting, reviewers agree that novelty is on the low side. The motivation of the approach should also be clarified. The experimental section should be made stronger; in particular, reporting convincing wall-clock running time advantages is critical for validating the viability of the proposed approach. """
1472,"""Simplified Action Decoder for Deep Multi-Agent Reinforcement Learning""","['multi-agent RL', 'theory of mind']","""In recent years we have seen fast progress on a number of benchmark problems in AI, with modern methods achieving near or super human performance in Go, Poker and Dota. One common aspect of all of these challenges is that they are by design adversarial or, technically speaking, zero-sum. In contrast to these settings, success in the real world commonly requires humans to collaborate and communicate with others, in settings that are, at least partially, cooperative. In the last year, the card game Hanabi has been established as a new benchmark environment for AI to fill this gap. In particular, Hanabi is interesting to humans since it is entirely focused on theory of mind, i.e. the ability to effectively reason over the intentions, beliefs and point of view of other agents when observing their actions. Learning to be informative when observed by others is an interesting challenge for Reinforcement Learning (RL): Fundamentally, RL requires agents to explore in order to discover good policies. However, when done naively, this randomness will inherently make their actions less informative to others during training. We present a new deep multi-agent RL method, the Simplified Action Decoder (SAD), which resolves this contradiction exploiting the centralized training phase. During training SAD allows other agents to not only observe the (exploratory) action chosen, but agents instead also observe the greedy action of their team mates. By combining this simple intuition with an auxiliary task for state prediction and best practices for multi-agent learning, SAD establishes a new state of the art for 2-5 players on the self-play part of the Hanabi challenge.""","""The method presented, the simplified action decoder, is a clever way of addressing the influence of exploratory actions in multi-agent RL. It's shown to enable state of the art performance in Hanabi, an interesting and relatively novel cooperative AI challenge. It seems, however, that the method has wider applicability than that. All reviewers agree that this is good and interesting work. Reviewer 2 had some issues with the presentation of the results and certain assumptions, but the authors responded so as to alleviate any concerns. This paper should definitely be accepted, if possible as oral. """
1473,"""RTC-VAE: HARNESSING THE PECULIARITY OF TOTAL CORRELATION IN LEARNING DISENTANGLED REPRESENTATIONS""","['Total Correlation', 'VAEs', 'Disentanglement']","""In the problem of unsupervised learning of disentangled representations, one of the promising methods is to penalize the total correlation of sampled latent vari-ables. Unfortunately, this well-motivated strategy often fail to achieve disentanglement due to a problematic difference between the sampled latent representation and its corresponding mean representation. We provide a theoretical explanation that low total correlation of sample distribution cannot guarantee low total correlation of the mean representation. We prove that for the mean representation of arbitrarily high total correlation, there exist distributions of latent variables of abounded total correlation. However, we still believe that total correlation could be a key to the disentanglement of unsupervised representative learning, and we propose a remedy, RTC-VAE, which rectifies the total correlation penalty. Experiments show that our model has a more reasonable distribution of the mean representation compared with baseline models, e.g.,-TCVAE and FactorVAE.""","""This paper highlights the problem of penalizing the total correlation of sampled latent variables for unsupervised learning of disentangled representations. Authors prove a theorem on how sample representations with bounded total correlation may have arbitrarily large total correlation when computed with the underlying mean. As a fix, the authors propose RTC-VAE method that penalizes total covariance of sampled latent variables. R2 appreciated the simplicity of the idea, making it easy to understand and implement, but raises serious concerns on empirical evaluation of the method. Specifically, very limited datasets (initially dsprites and 3d shapes) and with no evaluation of disentanglement performance and no comparison against other disentangling methods like DIP-VAE-1. While the authors added another dataset (3d face) in their revised versions, the concerns about disentanglement performance evaluation and its comparison against baselines remained as before, and R2 was not convinced to raise the initial score. Similarly, while R1 and R3 appreciate author's response, they believe the response was not convincing enough for them, and maintained their initial ratings. Overall, the submission has room for improvement toward a clear evaluation of the proposed method against related baselines."""
1474,"""Scheduled Intrinsic Drive: A Hierarchical Take on Intrinsically Motivated Exploration""","['Reinforcement Learning', 'Exploration', 'Intrinsic Motivation', 'Sparse Rewards']","""Exploration in sparse reward reinforcement learning remains an open challenge. Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration. Commonly these signals are added as bonus rewards, which results in a mixture policy that neither conducts exploration nor task fulfillment resolutely. In this paper, we instead learn separate intrinsic and extrinsic task policies and schedule between these different drives to accelerate exploration and stabilize learning. Moreover, we introduce a new type of intrinsic reward denoted as successor feature control (SFC), which is general and not task-specific. It takes into account statistics over complete trajectories and thus differs from previous methods that only use local information to evaluate intrinsic motivation. We evaluate our proposed scheduled intrinsic drive (SID) agent using three different environments with pure visual inputs: VizDoom, DeepMind Lab and DeepMind Control Suite. The results show a substantially improved exploration efficiency with SFC and the hierarchical usage of the intrinsic drives. A video of our experimental results can be found at pseudo-url.""","""The paper presents a method for intrinsically motivated exploration using successor features by interleaving the exploration task with intrinsic rewards and extrinsic task original external rewards. In addition, the paper proposes ""successor feature control"" (distance between consecutive successor features) as an intrinsic reward. The proposed method is interesting and it can potentially address the limitation of existing exploration methods based on intrinsic motivation. In experimental results, the method is evaluated on navigation tasks using Vizdoom and DeepMind Lab, as well as continuous control tasks of Cartpole in the DeepMind control suite, with promising results. On the negative side, there are some domain-specific properties (e.g., moderate map size with relatively simple structures, different rooms having visually distinct patterns, bottleneck states generally leading to better rewards, etc.) that make the proposed method work well. In addition, off-policy learning of the successor features could be a potential technical issue. Finally, the proposed method is not evaluated against stronger baselines on harder exploration tasks (such as Atari Montezuma's revenge, etc.), thus the addition of such results would make the paper more convincing. In the current form, the paper seems to need more work to be acceptable for ICLR."""
1475,"""P-BN: Towards Effective Batch Normalization in the Path Space""",[],"""Neural networks with ReLU activation functions have demonstrated their success in many applications. Recently, researchers noticed a potential issue with the optimization of ReLU networks: the ReLU activation functions are positively scale-invariant (PSI), while the weights are not. This mismatch may lead to undesirable behaviors in the optimization process. Hence, some new algorithms that conduct optimizations directly in the path space (the path space is proven to be PSI) were developed, such as Stochastic Gradient Descent (SGD) in the path space, and it was shown that SGD in the path space is superior to that in the weight space. However, it is still unknown whether other deep learning techniques beyond SGD, such as batch normalization (BN), could also have their counterparts in the path space. In this paper, we conduct a formal study on the design of BN in the path space. According to our study, the key challenge is how to ensure the forward propagation in the path space, because BN is utilized during the forward process. To tackle such challenge, we propose a novel re-parameterization of ReLU networks, with which we replace each weight in the original neural network, with a new value calculated from one or several paths, while keeping the outputs of the network unchanged for any input. Then we show that BN in the path space, namely P-BN, is just a slightly modified conventional BN on the re-parameterized ReLU networks. Our experiments on two benchmark datasets, CIFAR and ImageNet, show that the proposed P-BN can signicantly outperform the conventional BN in the weight space.""","""This paper addresses the extension of path-space-based SGD (which has some previously-acknowledged advantages over traditional weight-space SGD) to handle batch normalization. Given the success of BN in traditional settings, this is a reasonable scenario to consider. The analysis and algorithm development involved exploits a reparameterization process to transition from the weight space to the path space. Empirical tests are then conducted on CIFAR and ImageNet. Overall, there was a consensus among reviewers to reject this paper, and the AC did not find sufficient justification to overrule this consensus. Note that some of the negative feedback was likely due, at least in part, to unclear aspects of the paper, an issue either explicitly stated or implied by all reviewers. While obviously some revisions were made, at this point it seems that a new round of review is required to reevaluate the contribution and ensure that it is properly appreciated."""
1476,"""Fractional Graph Convolutional Networks (FGCN) for Semi-Supervised Learning""","['convolutional networks', 'node classification', 'Levy flight', 'graph-based semi-supervised learning', 'local graph topology']","""Due to high utility in many applications, from social networks to blockchain to power grids, deep learning on non-Euclidean objects such as graphs and manifolds continues to gain an ever increasing interest. Most currently available techniques are based on the idea of performing a convolution operation in the spectral domain with a suitably chosen nonlinear trainable filter and then approximating the filter with finite order polynomials. However, such polynomial approximation approaches tend to be both non-robust to changes in the graph structure and to capture primarily the global graph topology. In this paper we propose a new Fractional Generalized Graph Convolutional Networks (FGCN) method for semi-supervised learning, which casts the L\'evy Fights into random walks on graphs and, as a result, allows to more accurately account for the intrinsic graph topology and to substantially improve classification performance, especially for heterogeneous graphs.""","""This paper proposes a fractional graph convolutional networks for semi-supervised learning, using a classification function repurposed from previous work, as well as parallelization and weighted combinations of pooling function. This leads to good results on several tasks. Reviewers had concerns about the part played by each piece, the lack of comparison to recent related work, and asked for better explanation of the rationale of the method and more experimental details. Authors provided explanations and details, and a more thorough set of comparison to other work, showing better performance in some but not all cases. However, concerns that the proposed innovations are too incremental remain. Therefore, we cannot recommend acceptance."""
1477,"""MANAS: Multi-Agent Neural Architecture Search""","['Neural Architecture Search', 'NAS', 'AutoML', 'Computer Vision']","""The Neural Architecture Search (NAS) problem is typically formulated as a graph search problem where the goal is to learn the optimal operations over edges in order to maximize a graph-level global objective. Due to the large architecture parameter space, efficiency is a key bottleneck preventing NAS from its practical use. In this paper, we address the issue by framing NAS as a multi-agent problem where agents control a subset of the network and coordinate to reach optimal architectures. We provide two distinct lightweight implementations, with reduced memory requirements ( pseudo-formula th of state-of-the-art), and performances above those of much more computationally expensive methods. Theoretically, we demonstrate vanishing regrets of the form pseudo-formula , with pseudo-formula being the total number of rounds. Finally, aware that random search is an (often ignored) effective baseline we perform additional experiments on pseudo-formula alternative datasets and pseudo-formula network configurations, and achieve favorable results in comparison with this baseline and other competing methods.""","""This paper introduces a NAS algorithm based on multi-agent optimization, treating each architecture choice as a bandit and using an adversarial bandit framework to address the non-stationarity of the system that results from the other bandits running in parallel. Two reviewers ranked the paper as a weak accept and one ranked it as a weak reject. The rebuttal answered some questions, and based on this the reviewers kept their ratings. The discussion between reviewers and AC did not result in a consensus. The average score was below the acceptance threshold, but since it was close I read the paper in detail myself before deciding. Here is my personal assessment: "" Positives: 1. It is very nice to see some theory for NAS, as there isn't really any so far. The theory for MANAS itself does not appear to be very compelling, since it assumes that all but one bandit is fixed, i.e., that the problem is stationary, which it clearly isn't. But if I understand correctly, MANAS-LS does not have that problem. (It would be good if the authors could make these points more explicit in future versions.) 2. The absolute numbers for the experimental results on CIFAR-10 are strong. 3. I welcome the experiments on 3 additional datasets. Negatives: 1. The paper crucially omits a comparison to random search with weight sharing (RandomNAS-WS) as introduced by Li & Talwalkar's paper ""Random Search and Reproducibility for Neural Architecture Search"" (pseudo-url), on arXiv since February and published at UAI 2019. This method is basically MANAS without the update step, using a uniform random distribution at step 3 of the algorithm, and therefore would be the right baseline to see whether the bandits are actually learning anything. RandomNAS-WS has the same memory improvements over DARTS as MANAS, so this part is not new. Similarly, there is GDAS as another recent approach with the same low memory requirement: pseudo-url This is my most important criticism. 2. I think there may be a typo somewhere concerning the runtimes of MANAS. It would be extremely surprising if MANAS truly takes 2.5 times longer when run with 20 cells and 500 epochs than when run with 8 cells and 50 epochs. 
It would make sense if MANAS gets 2.5x slower when just going from 8 to 20 cells, but when going from 50 to 500 epochs the cost should go up by another factor of 10. And the text states specifically that ""for datasets other than ImageNet, we use 500 epochs during the search phase for architectures with 20 cells, 400 epochs for 14 cells, and 50 epochs for 8 cells"". Therefore, I think either that text is wrong or MANAS got 10x more budget than DARTS. 3. Figure 2 shows that on Sport-8, MANAS actually does *significantly worse* when searching on 14 cells than on 8 cells (note the different scale of the y axis). It's also slightly better with 8 cells on MIT-67. I recommend that the authors discuss this in the text and offer some explanation, rather than have the text claim that 14 cells are better and the figure contradict this. Only for MANAS-LS does the 14-cell version actually work better. 4. The authors are unclear about whether they compare to random search or random sampling. These are two different approaches. Random sampling (as proposed by Sciuto et al., 2019) takes a single random architecture from the search space and compares to that. Standard random search iteratively samples N random architectures and evaluates them (usually on some proxy metric), selecting and retraining the best one found that way. The number N is chosen for random search to use the same computational resources as the method being compared. The authors call their method random search but then appear to be describing random sampling. Also, with several recent papers showcasing problems in NAS evaluation (many design decisions affect NAS performance), it would be a big plus to have code available to ensure reproducibility. Many ICLR papers are submitted with an anonymized code repository, and if possible, I would encourage the authors to do this for a future version. "" The prior rating based on the reviewers was slightly below the acceptance threshold, and my personal judgement did not push the paper above it. I encourage the authors to improve the paper by addressing the reviewers' points and the points above, and to resubmit to a future venue. Overall, I believe this is very interesting work and am looking forward to a future version."""
1478,"""A Deep Recurrent Neural Network via Unfolding Reweighted l1-l1 Minimization""",[],"""Deep unfolding methods design deep neural networks as learned variations of optimization methods. These networks have been shown to achieve faster convergence and higher accuracy than the original optimization methods. In this line of research, this paper develops a novel deep recurrent neural network (coined reweighted-RNN) by unfolding a reweighted l1-l1 minimization algorithm and applies it to the task of sequential signal reconstruction. To the best of our knowledge, this is the first deep unfolding method that explores reweighted minimization. Due to the underlying reweighted minimization model, our RNN has a different soft-thresholding function (alias, different activation function) for each hidden unit in each layer. Furthermore, it has higher network expressivity than existing deep unfolding RNN models due to the over-parameterizing weights. Moreover, we establish theoretical generalization error bounds for the proposed reweighted-RNN model by means of Rademacher complexity. The bounds reveal that the parameterization of the proposed reweighted-RNN ensures good generalization. We apply the proposed reweighted-RNN to the problem of video-frame reconstruction from low-dimensional measurements, that is, sequential frame reconstruction. The experimental results on the moving MNIST dataset demonstrate that the proposed deep reweighted-RNN significantly outperforms existing RNN models.""","""This paper presents a novel RNN algorithm based on unfolding a reweighted L1-L1 minimization problem. Authors derive the generalization error bound which is tighter than existing methods. All reviewers appreciate the theoretical contributions of the paper, particularly the derivation of generalization error bounds. However, at a higher-level, the overall idea is incremental because RNN by unfolding L1-L1 minimization problem (Le+,2019) and reweighted L1 minimization (Candes+,2008) are both known techniques. The proposed method is essentially a simple combination of them and therefore the result seems somewhat obvious. Also, I agree with reviewers that some experiments are not deep enough to support the theory. For example, for over-parameterization (large model parameters) issue, one can compare the models with the same number of parameters and observe how they generalize. Overall, this is the very borderline paper that provides a good theoretical contribution with limited conceptual novelty and empirical evidences. As a conclusion, I decided to recommend rejection but could be accepted if there is a room. """
1479,"""Functional vs. parametric equivalence of ReLU networks""","['ReLU networks', 'symmetry', 'functional equivalence', 'over-parameterization']","""We address the following question: How redundant is the parameterisation of ReLU networks? Specifically, we consider transformations of the weight space which leave the function implemented by the network intact. Two such transformations are known for feed-forward architectures: permutation of neurons within a layer, and positive scaling of all incoming weights of a neuron coupled with inverse scaling of its outgoing weights. In this work, we show for architectures with non-increasing widths that permutation and scaling are in fact the only function-preserving weight transformations. For any eligible architecture we give an explicit construction of a neural network such that any other network that implements the same function can be obtained from the original one by the application of permutations and rescaling. The proof relies on a geometric understanding of boundaries between linear regions of ReLU networks, and we hope the developed mathematical tools are of independent interest. ""","""This work proves that the weights of feed-forward ReLU networks are determined, up to a specified set of symmetries, by the functions they define. Reviewers found the paper easy to read and the proof technically sound. There was some debate over the motivation for the paper, Reviewer 1 argues that there is no practical significance for the result, a point that the authors do not deny. I appreciate the concerns raised by Reviewer 1, theorists in machine learning should think carefully about the motivation for their work. However, while there is no clear practical significance of this work, I believe there is value to accepting it. Because the considered question concerns a sufficiently fundamental property of neural networks, and the proof is both easy to read and provides insights into a well studied class of models, I believe many researchers will find value in reading this paper."""
1480,"""Reinforced Genetic Algorithm Learning for Optimizing Computation Graphs""","['reinforcement learning', 'learning to optimize', 'combinatorial optimization', 'computation graphs', 'model parallelism', 'learning for systems']","""We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler. Unlike earlier learning-based works that require training the optimizer on the same graph to be optimized, we propose a learning approach that trains an optimizer offline and then generalizes to previously unseen graphs without further training. This allows our approach to produce high-quality execution decisions on real-world TensorFlow graphs in seconds instead of hours. We consider two optimization tasks for computation graphs: minimizing running time and peak memory usage. In comparison to an extensive set of baselines, our approach achieves significant improvements over classical and other learning-based methods on these two tasks. ""","""The submission presents an approach that leverages machine learning to optimize the placement and scheduling of computation graphs (such as TensorFlow graphs) by a compiler. The work is interesting and well-executed. All reviewers recommend accepting the paper."""
1481,"""Neural Maximum Common Subgraph Detection with Guided Subgraph Extraction""","['graph matching', 'maximum common subgraph', 'graph neural networks', 'subgraph extraction', 'graph alignment']","""Maximum Common Subgraph (MCS) is defined as the largest subgraph that is commonly present in both graphs of a graph pair. Exact MCS detection is NP-hard, and its state-of-the-art exact solver based on heuristic search is slow in practice without any time complexity guarantee. Given the huge importance of this task yet the lack of fast solver, we propose an efficient MCS detection algorithm, NeuralMCS, consisting of a novel neural network model that learns the node-node correspondence from the ground-truth MCS result, and a subgraph extraction procedure that uses the neural network output as guidance for final MCS prediction. The whole model guarantees polynomial time complexity with respect to the number of the nodes of the larger of the two input graphs. Experiments on four real graph datasets show that the proposed model is 48.1x faster than the exact solver and more accurate than all the existing competitive approximate approaches to MCS detection.""","""This paper proposed graph neural networks based approach for subgraph detection. The reviewers find that the overall the paper is interesting, however further improvements are needed to meet ICLR standard: 1. Experiments on larger graph. Slight speedup in small graphs are less exciting. 2. It seems there's a mismatch between training and inference. 3. The stopping criterion is quite heuristic. """
1482,"""Fair Resource Allocation in Federated Learning""","['federated learning', 'fairness', 'distributed optimization']","""Federated learning involves training statistical models in massive, heterogeneous networks. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair (specifically, a more uniform) accuracy distribution across devices in federated networks. To solve q-FFL, we devise a communication-efficient method, q-FedAvg, that is suited to federated networks. We validate both the effectiveness of q-FFL and the efficiency of q-FedAvg on a suite of federated datasets with both convex and non-convex models, and show that q-FFL (along with q-FedAvg) outperforms existing baselines in terms of the resulting fairness, flexibility, and efficiency.""","""This manuscript proposes and analyzes a federated learning procedure with more uniform performance across devices, motivated as resulting in a fairer performance distribution. The resulting algorithm is tunable in terms of the fairness-performance tradeoff and is evaluated on a variety of datasets. The reviewers and AC agree that the problem studied is timely and interesting, as there is limited work on fairness in federated learning. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty and clarity of the conceptual and empirical results. In reviews and discussion, the reviewers noted insufficient justification of the approach and results, particularly in terms of broad empirical evaluation, and sensitivity of the results to misestimation of various constants. In the opinion of the AC, while the paper can be much improved, it seems to be technically correct, and the results are of sufficiently broad interest to consider publication."""
1483,"""On Empirical Comparisons of Optimizers for Deep Learning""","['Deep learning', 'optimization', 'adaptive gradient methods', 'Adam', 'hyperparameter tuning']","""Selecting an optimizer is a central step in the contemporary deep learning pipeline. In this paper we demonstrate the sensitivity of optimizer comparisons to the metaparameter tuning protocol. Our findings suggest that the metaparameter search space may be the single most important factor explaining the rankings obtained by recent empirical comparisons in the literature. In fact, we show that these results can be contradicted when metaparameter search spaces are changed. As tuning effort grows without bound, more general update rules should never underperform the ones they can approximate (i.e., Adam should never perform worse than momentum), but the recent attempts to compare optimizers either assume these inclusion relationships are not relevant in practice or restrict the metaparameters they tune to break the inclusions. In our experiments, we find that the inclusion relationships between optimizers matter in practice and always predict optimizer comparisons. In particular, we find that the popular adative gradient methods never underperform momentum or gradient descent. We also report practical tips around tuning rarely-tuned metaparameters of adaptive gradient methods and raise concerns about fairly benchmarking optimizers for neural network training.""","""This paper examines classifiers and challenges a (somewhat widely held) assumption that adaptive gradient methods underperform simpler methods. This paper sparked a *large* amount of discussion, more than any other paper in my area. It was also somewhat controversial. After reading the discussion and paper itself, on one hand I think this makes a valuable contribution to the community. It points out a (near-) inclusion relationship between many adaptive gradient methods and standard SGD-style methods, and points out that rather obviously if a particular method is included by a more general method, the more general method will never be worse and often will be better if hyperparameters are set appropriately. However, there were several concerns raised with the paper. For example, reviewer 1 pointed out that in order for Adam to include Momentum-based SGD, it must follow a specialized learning rate schedule that is not used with Adam in practice. This is pointed out in the paper, but I think it could be even more clear. For example, in the intro ""For example, ADAM (Kingma and Ba, 2015) and RMSPROP (Tieleman and Hinton, 2012) can approximately simulate MOMENTUM (Polyak, 1964) if the term in the denominator of their parameter updates is allowed to grow very large."" does not make any mention of the specialized learning rate schedule. Second, Reviewer 1 was concerned with the fact that the paper does not clearly qualify that the conclusion that more complicated optimization schedules do better depends on extensive hyperparameter search. This fact somewhat weakens one of the main points of the paper. I feel that this paper is very much on the borderline, but cannot strongly recommend acceptance. I hope that the authors take the above notes, as well as the reviewers' other comments into account seriously and try to reflect them in a revised version of the paper."""
1484,"""A Training Scheme for the Uncertain Neuromorphic Computing Chips""","['deep learning', 'neuromorphic computing', 'uncertainty', 'training']","""Uncertainty is a very important feature of the intelligence and helps the brain become a flexible, creative and powerful intelligent system. The crossbar-based neuromorphic computing chips, in which the computing is mainly performed by analog circuits, have the uncertainty and can be used to imitate the brain. However, most of the current deep neural networks have not taken the uncertainty of the neuromorphic computing chip into consideration. Therefore, their performances on the neuromorphic computing chips are not as good as on the original platforms (CPUs/GPUs). In this work, we proposed the uncertainty adaptation training scheme (UATS) that tells the uncertainty to the neural network in the training process. The experimental results show that the neural networks can achieve comparable inference performances on the uncertain neuromorphic computing chip compared to the results on the original platforms, and much better than the performances without this training scheme.""","""The paper is proposing uncertainty of the NNs in the training process on analog-circuits based chips. As one reviewer emphasized, the paper addresses important and unique research problem to run NN on chips. Unfortunately, a few issues are raised by reviewers including presentation, novelly and experiments. This might be partially be mitigated by 1) writing motivation/intro in most lay person possible way 2) give easy contrast to normal NN (on computers) to emphasize the unique and interesting challenges in this setting. We encourage authors to take a few cycles of edition, and hope this paper to see the light soon. """
1485,"""Adversarial Lipschitz Regularization""","['generative adversarial networks', 'wasserstein generative adversarial networks', 'lipschitz regularization', 'adversarial training']","""Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality. However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques. Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations. Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training.""","""This paper introduces an adversarial approach to enforcing a Lipschitz constraint on neural networks. The idea is intuitively appealing, and the paper is clear and well written. It's not clear from the experiments if this method outperforms competing approaches, but it is at least comparable, which means this is at the very least another useful tool in the toolbox. There was a lot of back-and-forth with the reviewers, mostly over the experiments and some other minor points. The reviewers feel like their concerns have all been addressed, and now agree on acceptance. """
1486,"""Deep Multiple Instance Learning with Gaussian Weighting""","['Multiple instance learning', 'deep learning']","""In this paper we present a deep Multiple Instance Learning (MIL) method that can be trained end-to-end to perform classification from weak supervision. Our MIL method is implemented as a two stream neural network, specialized in tasks of instance classification and weighting. Our instance weighting stream makes use of Gaussian radial basis function to normalize the instance weights by comparing instances locally within the bag and globally across bags. The final classification score of the bag is an aggregate of all instance classification scores. The instance representation is shared by both instance classification and weighting streams. The Gaussian instance weighting allows us to regularize the representation learning of instances such that all positive instances to be closer to each other w.r.t. the instance weighting function. We evaluate our method on five standard MIL datasets and show that our method outperforms other MIL methods. We also evaluate our model on two datasets where all models are trained end-to-end. Our method obtain better bag-classification and instance classification results on these datasets. We conduct extensive experiments to investigate the robustness of the proposed model and obtain interesting insights.""","""The authors propose a novel MIL method that uses a novel approach to normalize the instance weights. The majority of reviewers found the paper lacking in novelty and sufficient experimental performance evidence."""
1487,"""Deep Ensembles: A Loss Landscape Perspective""","['loss landscape', 'deep ensemble', 'subspace', 'tunnel', 'low loss', 'connector', 'weight averaging', 'dropout', 'gaussian', 'connectivity', 'diversity', 'function space']","""Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space. We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions. Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods.""","""Paper pseudo-url (Garipov et. al, NeurIPS 2018) shows that one can find curves between two independently trained solutions along which the loss is relatively constant. The authors of this ICLR submission claim as a key contribution that they show the weights along the path correspond to different models that make different predictions (""Note that prior work on loss landscapes has focused on mode-connectivity and low-loss tunnels, but has not explicitly focused on how diverse the functions from different modes are, beyond an initial exploration in Fort & Jastrzebski (2019)""). Much of the disagreement between two of the reviewers and the authors is whether this point had already been shown in 1802.10026. It is in fact very clear that 1802.10026 shows that different points on the curve correspond to diverse functions. Figure 2 (right) of this paper shows the test error of an _ensemble_ of predictions made by the network for the parameters at one end of the curve, and the network described by \phi_\theta(t) at some point t along the curve: since the error goes down and changes significantly as t varies, the functions corresponding to different parameter settings along these curves must be diverse. This functional diversity is also made explicit multiple times in 1802.10026, which clearly says that this result shows that the curves contain meaningfully different representations. In response to R3, the authors incorrectly claim that ""Figure 2 in Garipov et al. only plots loss and accuracy, and does not measure function space similarity, between different initializations, or along the tunnel at all. 
Just by looking at accuracy and loss values, there is no way to infer how similar the predictions of the two functions are."" But Figure 2 (right) is actually showing the test error of an average of predictions of networks with parameters at different points along the curve, how it changes as one moves along the curve, and the improved accuracy of the ensemble over using one of the endpoints. If the functions associated with different parameters along the curve were the same, averaging their predictions would not help performance. Moreover, Figure 6 (bottom left, dashed lines) in the appendix of 1802.10026 shows the improvement in performance in ensembling points along the curve over ensembling independently trained networks. Section A6 (Appendix) also describes ensembling along the curve in some detail, with several quantitative results. There is no sense in ensembling models along the curve if they were the same model. These results unequivocally demonstrate that the points on the curve have functional diversity, and this connection is made explicit multiple times in 1802.10026 with the claim of meaningfully different representations: This result also demonstrates that these curves do not exist only due to degenerate parametrizations of the network (such as rescaling on either side of a ReLU); instead, points along the curve correspond to meaningfully different representations of the data that can be ensembled for improved performance. Additionally, other published work has built on this observation, such as 1907.07504 (UAI 2019), which performs Bayesian model averaging over the mode connecting subspace, relying on diversity of functions in this space; that work also visualizes the different functions arising in this space. It is incorrect to attribute these findings to Fort & Jastrzebski (2019) or the current submission. It is a positive contribution to build on prior work, but what is prior work and what is new should be accurately characterized, and currently is not, even after the discussion phase where multiple reviewers raised the same concern. Reviewers appreciated the broader investigation of diversity and its effect on ensembling, and the more detailed study regarding connecting curves. In addition to the concerns about inaccurate claims regarding prior work and novelty (which included aspects of the mode connectivity work but also other works), several reviewers also felt that the time-accuracy trade-offs of deep ensembles relative to standard approaches were not clearly presented, and comparisons were lacking. It would be simple and informative to do an experiment showing a runtime-accuracy trade-off curve for deep ensembles alongside FGE and various Bayesian deep learning methods and mc-dropout. It's also possible to use for example parallel MCMC chains to explore multiple quite different modes like deep ensembles but for Bayesian deep learning. For the paper to be accepted, it would need significant revisions, correcting the accuracy of claims, and providing such experiments."""
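The function-space (dis)similarity this discussion revolves around is cheap to measure; two standard diagnostics, sketched with plain arrays of predicted class probabilities:

```python
import numpy as np

def disagreement(probs_a, probs_b):
    # Fraction of test points on which two models' predicted labels differ:
    # large between independently trained solutions, small between points
    # within one mode or along a mode-connecting curve.
    return np.mean(np.argmax(probs_a, axis=1) != np.argmax(probs_b, axis=1))

def ensemble_accuracy(list_of_probs, labels):
    # Accuracy of averaged predictions; the gain over a single member is an
    # indirect measure of functional diversity (cf. ensembling along the
    # connecting curve in 1802.10026, Figure 2 right).
    avg = np.mean(np.stack(list_of_probs), axis=0)
    return np.mean(np.argmax(avg, axis=1) == labels)
```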
1488,"""TabNet: Attentive Interpretable Tabular Learning""","['Tabular data', 'interpretable neural networks', 'attention models']","""We propose a novel high-performance interpretable deep tabular data learning network, TabNet. TabNet utilizes a sequential attention mechanism that softly selects features to reason from at each decision step and then aggregates the processed information to make a final prediction decision. By explicitly selecting sparse features, TabNet learns very efficiently as the model capacity at each decision step is fully utilized for the most relevant features, resulting in a high performance model. This sparsity also enables more interpretable decision making through the visualization of feature selection masks. We demonstrate that TabNet outperforms other neural network and decision tree variants on a wide range of tabular data learning datasets and yields interpretable feature attributions and insights into the global model behavior.""","""This paper constitutes interesting progress on an important topic; the reviewers identify certain improvements and directions for future work, and I urge the authors to continue to develop refinements and extensions."""
1489,"""A Unified framework for randomized smoothing based certified defenses""","['Certificated Defense', 'Randomized Smoothing', 'A Unified and Self-Contained Framework']","""Randomized smoothing, which was recently proved to be a certified defensive technique, has received considerable attention due to its scalability to large datasets and neural networks. However, several important questions still remain unanswered in the existing frameworks, such as (i) whether Gaussian mechanism is an optimal choice for certifying pseudo-formula -normed robustness, and (ii) whether randomized smoothing can certify pseudo-formula -normed robustness (on high-dimensional datasets like ImageNet). To answer these questions, we introduce a {\em unified} and {\em self-contained} framework to study randomized smoothing-based certified defenses, where we mainly focus on the two most popular norms in adversarial machine learning, {\em i.e.,} pseudo-formula and pseudo-formula norm. We answer the above two questions by first demonstrating that Gaussian mechanism and Exponential mechanism are the (near) optimal options to certify the pseudo-formula and pseudo-formula -normed robustness. We further show that the largest pseudo-formula radius certified by randomized smoothing is upper bounded by pseudo-formula , where pseudo-formula is the dimensionality of the data. This theoretical finding suggests that certifying pseudo-formula -normed robustness by randomized smoothing may not be scalable to high-dimensional data. The veracity of our framework and analysis is verified by extensive evaluations on CIFAR10 and ImageNet.""","""After the rebuttal, the reviewers agree that this paper would benefit from further revisions to clarify issues regarding the motivation of the DP-based security definition, any relationship it may have to standard definitions of privacy, and the role of dimensionality in the theoretical guarantees."""
1490,"""BAIL: Best-Action Imitation Learning for Batch Deep Reinforcement Learning""","['Deep Reinforcement Learning', 'Batch Reinforcement Learning', 'Sample Efficiency']","""The field of Deep Reinforcement Learning (DRL) has recently seen a surge in research in batch reinforcement learning, which aims for sample-efficient learning from a given data set without additional interactions with the environment. In the batch DRL setting, commonly employed off-policy DRL algorithms can perform poorly and sometimes even fail to learn altogether. In this paper we propose anew algorithm, Best-Action Imitation Learning (BAIL), which unlike many off-policy DRL algorithms does not involve maximizing Q functions over the action space. Striving for simplicity as well as performance, BAIL first selects from the batch the actions it believes to be high-performing actions for their corresponding states; it then uses those state-action pairs to train a policy network using imitation learning. Although BAIL is simple, we demonstrate that BAIL achieves state of the art performance on the Mujoco benchmark, typically outperforming BatchConstrained deep Q-Learning (BCQ) by a wide margin.""","""The authors propose a novel algorithm for batch RL with offline data. The method is simple and outperforms a recently proposed algorithm, BCQ, on Mujoco benchmark tasks. The main points that have not been addressed after the author rebuttal are: * Lack of rigor and incorrectness of theoretical statements. Furthermore, there is little analysis of the method beyond the performance results. * Non-standard assumptions/choices in the algorithm without justification (e.g., concatenating episodes). * Numerous sloppy statements / assumptions that are not justified. * No comparison to BEAR, making it challenging to evaluate their state-of-the-art claims. The reviewers also point out several limitations of the proposed method. Adding a brief discussion of these limitations would strengthen the paper. The method is interesting and simple, so I believe that the paper has the potential to be a strong submission if the authors incorporate the reviewers suggestions in a future submission. However, at this time, the paper falls below the acceptance bar."""
1491,"""Generalized Bayesian Posterior Expectation Distillation for Deep Neural Networks""","['Bayesian Neural Networks', 'Distillation']","""In this paper, we present a general framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network, significantly extending prior work on a method known as ``Bayesian Dark Knowledge."" Our generalized framework applies to the case of classification models and takes as input the architecture of a ``teacher"" network, a general posterior expectation of interest, and the architecture of a ``student"" network. The distillation method performs an online compression of the selected posterior expectation using iteratively generated Monte Carlo samples from the parameter posterior of the teacher model. We further consider the problem of optimizing the student model architecture with respect to an accuracy-speed-storage trade-off. We present experimental results investigating multiple data sets, distillation targets, teacher model architectures, and approaches to searching for student model architectures. We establish the key result that distilling into a student model with an architecture that matches the teacher, as is done in Bayesian Dark Knowledge, can lead to sub-optimal performance. Lastly, we show that student architecture search methods can identify student models with significantly improved performance. ""","""The authors consider distilling posterior expectations for Bayesian neural networks. While reviewers found the material interesting, and the responses thoughtful, there were questions about the practical utility of the work. Evaluations of classification favour NLL (and typically do not show accuracy), and regression (which was considered in the original Bayesian Dark Knowledge paper) is not considered. In general, it is difficult to assess and interpret how the approach is working, and in what application regime it would be a gold standard, e.g., with respect to downstream tasks. The authors are encouraged to continue with this work, taking reviewer comments into account in a final version."""
1492,"""Fast Training of Sparse Graph Neural Networks on Dense Hardware""",[],"""Graph neural networks have become increasingly popular in recent years due to their ability to naturally encode relational input data and their ability to operate on large graphs by using a sparse representation of graph adjacency matrices. As we look to scale up these models using custom hardware, a natural assumption would be that we need hardware tailored to sparse operations and/or dynamic control flow. In this work, we question this assumption by scaling up sparse graph neural networks using a platform targeted at dense computation on fixed-size data. Drawing inspiration from optimization of numerical algorithms on sparse matrices, we develop techniques that enable training the sparse graph neural network model from Allamanis et al. (2018) in 13 minutes using a 512-core TPUv2 Pod, whereas the original training takes almost a day.""","""While there was some interest in the ideas presented, this paper was on the borderline, and was ultimately not able to be accepted for publication at ICLR. Reviewers raised concerns as to the novelty, generality, and practicality of the approach, which could have been better demonstrated via experiments."""
1493,"""A Bayes-Optimal View on Adversarial Examples""","['Adversarial Examples', 'Generative Models']","""Adversarial attacks on CNN classifiers can make an imperceptible change to an input image and alter the classification result. The source of these failures is still poorly understood, and many explanations invoke the ""unreasonably linear extrapolation"" used by CNNs along with the geometry of high dimensions. In this paper we show that similar attacks can be used against the Bayes-Optimal classifier for certain class distributions, while for others the optimal classifier is robust to such attacks. We present analytical results showing conditions on the data distribution under which all points can be made arbitrarily close to the optimal decision boundary and show that this can happen even when the classes are easy to separate, when the ideal classifier has a smooth decision surface and when the data lies in low dimensions. We introduce new datasets of realistic images of faces and digits where the Bayes-Optimal classifier can be calculated efficiently and show that for some of these datasets the optimal classifier is robust and for others it is vulnerable to adversarial examples. In systematic experiments with many such datasets, we find that standard CNN training consistently finds a vulnerable classifier even when the optimal classifier is robust while large-margin methods often find a robust classifier with the exact same training data. Our results suggest that adversarial vulnerability is not an unavoidable consequence of machine learning in high dimensions, and may often be a result of suboptimal training methods used in current practice.""","""The paper studies how adversarial robustness and Bayes optimality relate in a simple gaussian mixture setting. The paper received two recommendations for rejection and one weak accept. One of the central complaints was whether the study had any bearing on ""real world"" adversarial examples. I think this is a fair concern, given how limited the model appears on the surface, although perhaps the model is a good model of any local ""piece"" of a decision boundary in a real problem. That said, I do not agree with the strong rejection (1) in most places. The weak reject asked for some experiments. The revision produced these experiments, but I'm not sure how convincing these are since only one robust training method was used, and it's not clear that it's the best one could do among SOTA methods. For whatever reason, the reviewers did not update their scores. I am not certain that they reviewed the revision, despite my prodding."""
1494,"""Imitation Learning via Off-Policy Distribution Matching""","['reinforcement learning', 'deep learning', 'imitation learning', 'adversarial learning']","""When performing imitation learning from expert demonstrations, distribution matching is a popular approach, in which one alternates between estimating distribution ratios and then using these ratios as rewards in a standard reinforcement learning (RL) algorithm. Traditionally, estimation of the distribution ratio requires on-policy data, which has caused previous work to either be exorbitantly data- inefficient or alter the original objective in a manner that can drastically change its optimum. In this work, we show how the original distribution ratio estimation objective may be transformed in a principled manner to yield a completely off-policy objective. In addition to the data-efficiency that this provides, we are able to show that this objective also renders the use of a separate RL optimization unnecessary. Rather, an imitation policy may be learned directly from this objective without the use of explicit rewards. We call the resulting algorithm ValueDICE and evaluate it on a suite of popular imitation learning benchmarks, finding that it can achieve state-of-the-art sample efficiency and performance.""","""This work addresses new insights in the imitation learning setting, and shows how a popular type of approach can be extended in a principled way to the off-policy learning setting. Several requests for clarification were addressed in the rebuttal phase, in particular regarding the empirical evaluation in off-policy settings. The authors improved the empirical validation and overall clarity of the paper. The resulting manuscript provides valuable new insights, in particular in its principled connections, and extension to previous work."""
1495,"""A Novel Analysis Framework of Lower Complexity Bounds for Finite-Sum Optimization""","['convex optimization', 'lower bound complexity', 'proximal incremental first-order oracle']","""This paper studies the lower bound complexity for the optimization problem whose objective function is the average of pseudo-formula individual smooth convex functions. We consider the algorithm which gets access to gradient and proximal oracle for each individual component. For the strongly-convex case, we prove such an algorithm can not reach an pseudo-formula -suboptimal point in fewer than n})\log(1/\eps)) iterations, where pseudo-formula is the condition number of the objective function. This lower bound is tighter than previous results and perfectly matches the upper bound of the existing proximal incremental first-order oracle algorithm Point-SAGA. We develop a novel construction to show the above result, which partitions the tridiagonal matrix of classical examples into pseudo-formula groups to make the problem difficult enough to stochastic algorithms. This construction is friendly to the analysis of proximal oracle and also could be used in general convex and average smooth cases naturally.""","""The paper considers a lower bound complexity for the convex problems. The reviewers worry about whether the scope of this paper fit in ICLR, the initialization issues, and the novelty and some other problems."""
1496,"""Striving for Simplicity in Off-Policy Deep Reinforcement Learning""","['reinforcement learning', 'off-policy', 'batch RL', 'offline RL', 'benchmark']","""This paper advocates the use of offline (batch) reinforcement learning (RL) to help (1) isolate the contributions of exploitation vs. exploration in off-policy deep RL, (2) improve reproducibility of deep RL research, and (3) facilitate the design of simpler deep RL algorithms. We propose an offline RL benchmark on Atari 2600 games comprising all of the replay data of a DQN agent. Using this benchmark, we demonstrate that recent off-policy deep RL algorithms, even when trained solely on logged DQN data, can outperform online DQN. We present Random Ensemble Mixture (REM), a simple Q-learning algorithm that enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates. The REM algorithm outperforms more complex RL agents such as C51 and QR-DQN on the offline Atari benchmark and performs comparably in the online setting.""","""All the reviewers recommend rejecting the submission. There is no basis for acceptance."""
1497,"""Gumbel-Matrix Routing for Flexible Multi-task Learning""",[],"""This paper proposes a novel per-task routing method for multi-task applications. Multi-task neural networks can learn to transfer knowledge across different tasks by using parameter sharing. However, sharing parameters between unrelated tasks can hurt performance. To address this issue, routing networks can be applied to learn to share each group of parameters with a different subset of tasks to better leverage tasks relatedness. However, this use of routing methods requires to address the challenge of learning the routing jointly with the parameters of a modular multi-task neural network. We propose the Gumbel-Matrix routing, a novel multi-task routing method based on the Gumbel-Softmax, that is designed to learn fine-grained parameter sharing. When applied to the Omniglot benchmark, the proposed method improves the state-of-the-art error rate by 17%.""","""This paper proposed to use Gumbel softmax to optimize the routing matrix in routing network for multitask learning. All reviewers have a consensus on rejecting this paper. The paper did not clearly explain how and why this method works, and the experiments are not sufficient."""
1498,"""Self-Supervised Speech Recognition via Local Prior Matching""","['speech recognition', 'self-supervised learning', 'language model', 'semi-supervised learning', 'pseudo labeling']","""We propose local prior matching (LPM), a self-supervised objective for speech recognition. The LPM objective leverages a strong language model to provide learning signal given unlabeled speech. Since LPM uses a language model, it can take advantage of vast quantities of both unpaired text and speech. The loss is theoretically well-motivated and simple to implement. More importantly, LPM is effective. Starting from a model trained on 100 hours of labeled speech, with an additional 360 hours of unlabeled data LPM reduces the WER by 26% and 31% relative on a clean and noisy test set, respectively. This bridges the gap by 54% and 73% WER on the two test sets relative to a fully supervised model on the same 360 hours with labels. By augmenting LPM with an additional 500 hours of noisy data, we further improve the WER on the noisy test set by 15% relative. Furthermore, we perform extensive ablative studies to show the importance of various configurations of our self-supervised approach.""","""The paper proposed local prior matching that utilizes a language model to rescore the hypotheses generate by a teacher model on unlabeled data, which are then used to training the student model for improvement. The experimental results on Librispeech is thorough. But two concerns on this paper are: 1) limited novelty: LM trained on large tex data is already used in weak distillation and the only difference is the use of multiply hypotheses. As pointed out by the reviewers, the method is better understood through distillation even though the authors try to derive it from Bayesian perspective. 2) Librispeech is a medium sized dataset, justifications on much larger dataset for ASR would make it more convincing. """
1499,"""Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems""","['gradient descent', 'neural network', 'over-parameterization']","""Recently, several studies have proven the global convergence and generalization abilities of the gradient descent method for two-layer ReLU networks. Most studies especially focused on the regression problems with the squared loss function, except for a few, and the importance of the positivity of the neural tangent kernel has been pointed out. However, the performance of gradient descent on classification problems using the logistic loss function has not been well studied, and further investigation of this problem structure is possible. In this work, we demonstrate that the separability assumption using a neural tangent model is more reasonable than the positivity condition of the neural tangent kernel and provide a refined convergence analysis of the gradient descent for two-layer networks with smooth activations. A remarkable point of our result is that our convergence and generalization bounds have much better dependence on the network width in comparison to related studies. Consequently, our theory significantly enlarges a class of over-parameterized networks with provable generalization ability, with respect to the network width, while most studies require much higher over-parameterization.""","""This article studies gradient optimization for classification problems with shallow networks with smooth activations, obtaining convergence and generalisation results under a separability assumption on the data. The results are obtained under much less stringent requirements on the width of the network than other related recent works. However, with results on convergence and generalisation having been established in other previous works, the reviewers found the contribution incremental. The responses clarified some of the distinctive challenges with the logistic loss compared with the squared loss that has been considered in other works, and provided examples for the separability assumption. Overall, the article makes important contributions in the case of classification problems. However, with many recent works addressing challenging problems in a similar direction, the bar has been set quite high. As pointed out by some of the reviewers, the contribution could gain substantially in relevance and make a more convincing case by addressing extensions to non smooth activations and deep models. """
1500,"""Hindsight Trust Region Policy Optimization""","['Hindsight', 'Sparse Reward', 'Reinforcement Learning', 'Policy Gradients']","""As reinforcement learning continues to drive machine intelligence beyond its conventional boundary, unsubstantial practices in sparse reward environment severely limit further applications in a broader range of advanced fields. Motivated by the demand for an effective deep reinforcement learning algorithm that accommodates sparse reward environment, this paper presents Hindsight Trust Region Policy Optimization (HTRPO), a method that efficiently utilizes interactions in sparse reward conditions to optimize policies within trust region and, in the meantime, maintains learning stability. Firstly, we theoretically adapt the TRPO objective function, in the form of the expected return of the policy, to the distribution of hindsight data generated from the alternative goals. Then, we apply Monte Carlo with importance sampling to estimate KL-divergence between two policies, taking the hindsight data as input. Under the condition that the distributions are sufficiently close, the KL-divergence is approximated by another f-divergence. Such approximation results in the decrease of variance and alleviates the instability during policy update. Experimental results on both discrete and continuous benchmark tasks demonstrate that HTRPO converges significantly faster than previous policy gradient methods. It achieves effective performances and high data-efficiency for training policies in sparse reward environments.""","""The paper pursues an interesting approach, but requires additional maturation. The experienced reviewers raise several concerns about the current version of the paper. The significance of the contribution was questioned. The paper missed key opportunities to evaluate and justify critical aspects of the proposed approach, via targeted ablation and baseline studies. The quality and clarity of the technical exposition was also criticized. The comments submitted by the reviewers should help the authors strengthen the paper. """
1501,"""Quantum Optical Experiments Modeled by Long Short-Term Memory""","['Recurrent Networks', 'LSTM', 'Sequence Analysis', 'Binary Classification']","""We demonstrate how machine learning is able to model experiments in quantum physics. Quantum entanglement is a cornerstone for upcoming quantum technologies such as quantum computation and quantum cryptography. Of particular interest are complex quantum states with more than two particles and a large number of entangled quantum levels. Given such a multiparticle high-dimensional quantum state, it is usually impossible to reconstruct an experimental setup that produces it. To search for interesting experiments, one thus has to randomly create millions of setups on a computer and calculate the respective output states. In this work, we show that machine learning models can provide significant improvement over random search. We demonstrate that a long short-term memory (LSTM) neural network can successfully learn to model quantum experiments by correctly predicting output state characteristics for given setups without the necessity of computing the states themselves. This approach not only allows for faster search but is also an essential step towards automated design of multiparticle high-dimensional quantum experiments using generative machine learning models.""","""The paper predicts properties of quantum states through RNNs. The idea is nice, but the results are very limited and require more work. It seems to be more suited for a conference focussing on quantum ML---even when the authors have an ML background. All reviewers agree on a rejection, and their arguments are solid. The authors offered no rebuttal."""
1502,"""V1Net: A computational model of cortical horizontal connections""","['Biologically plausible deep learning', 'Recurrent Neural Networks', 'Perceptual grouping', 'horizontal connections', 'visual neuroscience', 'perceptual robustness', 'Gestalt psychology']","""The primate visual system builds robust, multi-purpose representations of the external world in order to support several diverse downstream cortical processes. Such representations are required to be invariant to the sensory inconsistencies caused by dynamically varying lighting, local texture distortion, etc. A key architectural feature combating such environmental irregularities is long-range horizontal connections that aid the perception of the global form of objects. In this work, we explore the introduction of such horizontal connections into standard deep convolutional networks; we present V1Net -- a novel convolutional-recurrent unit that models linear and nonlinear horizontal inhibitory and excitatory connections inspired by primate visual cortical connectivity. We introduce the Texturized Challenge -- a new benchmark to evaluate object recognition performance under perceptual noise -- which we use to evaluate V1Net against an array of carefully selected control models with/without recurrent processing. Additionally, we present results from an ablation study of V1Net demonstrating the utility of diverse neurally inspired horizontal connections for state-of-the-art AI systems on the task of object boundary detection from natural images. We also present the emergence of several biologically plausible horizontal connectivity patterns, namely center-on surround-off, association fields and border-ownership connectivity patterns in a V1Net model trained to perform boundary detection on natural images from the Berkeley Segmentation Dataset 500 (BSDS500). Our findings suggest an increased representational similarity between V1Net and biological visual systems, and highlight the importance of neurally inspired recurrent contextual processing principles for learning visual representations that are robust to perceptual noise and furthering the state-of-the-art in computer vision.""","""The paper proposes a neurally inspired model that is a variant of conv-LSTM called V1net. The reviewers had trouble gleaning the main contributions of the work. Given that it is hard to obtain state of art results in neurally inspired architectures, the bar is much higher to demonstrate that there is value in pursuing these architectures. There are not enough convincing results in the paper to show this. I recommend rejection."""
1503,"""Unbiased Contrastive Divergence Algorithm for Training Energy-Based Latent Variable Models""","['energy model', 'restricted Boltzmann machine', 'contrastive divergence', 'unbiased Markov chain Monte Carlo', 'distribution coupling']","""The contrastive divergence algorithm is a popular approach to training energy-based latent variable models, which has been widely used in many machine learning models such as the restricted Boltzmann machines and deep belief nets. Despite its empirical success, the contrastive divergence algorithm is also known to have biases that severely affect its convergence. In this article we propose an unbiased version of the contrastive divergence algorithm that completely removes its bias in stochastic gradient methods, based on recent advances on unbiased Markov chain Monte Carlo methods. Rigorous theoretical analysis is developed to justify the proposed algorithm, and numerical experiments show that it significantly improves the existing method. Our findings suggest that the unbiased contrastive divergence algorithm is a promising approach to training general energy-based latent variable models.""","""Main content: Blind review #1 summarizes it well: The paper proposes an algorithmic improvement that significantly simplifies training of energy-based models, such as the Restricted Boltzmann Machine. The key issue in training such models is computing the gradient of the log partition function, which can be framed as computing the expected value of f(x) = dE(x; theta) / d theta over the model distribution p(x). The canonical algorithm for this problem is Contrastive Divergence which approximates x ~ p(x) with k steps of Gibbs sampling, resulting in biased gradients. In this paper, the authors apply the recently introduced unbiased MCMC framework of Jacob et al. to completely remove the bias. The key idea is to (1) rewrite the expectation as a limit of a telescopic sum: E f(x_0) + \sum_t E f(x_t) - E f(x_{t-1}); (2) run two coupled MCMC chains, one for the positive part of the telescopic sum and one for the negative part until they converge. After convergence, all remaining terms of the sum are zero and we can stop iterating. However, the number of time steps until convergence is now random. Other contributions of the paper are: 1. Proof that Bernoulli RBMs and other models satisfying certain conditions have finite expected number of steps and finite variance of the unbiased gradient estimator. 2. A shared random variables method for the coupled Gibbs chains that should result in faster convergence of the chains. 3. Verification of the proposed method on two synthetic datasets and a subset of MNIST, demonstrating more stable training compared to contrastive divergence and persistent contrastive divergence. -- Discussion: The main objection in reviews was to have meaningful empirical validation of the strong theoretical aspect of the paper, which the authors did during the rebuttal period to the satisfaction of reviewers. -- Recommendation and justification: As review #1 said, ""I am very excited about this paper and strongly support its acceptance, since the proposed method should revitalize research in energy-based models."""""
1504,"""Attraction-Repulsion Actor-Critic for Continuous Control Reinforcement Learning""","['reinforcement learning', 'continuous control', 'multi-agent', 'mujoco']","""Continuous control tasks in reinforcement learning are important because they provide an important framework for learning in high-dimensional state spaces with deceptive rewards, where the agent can easily become trapped into suboptimal solutions. One way to avoid local optima is to use a population of agents to ensure coverage of the policy space, yet learning a population with the ``best"" coverage is still an open problem. In this work, we present a novel approach to population-based RL in continuous control that leverages properties of normalizing flows to perform attractive and repulsive operations between current members of the population and previously observed policies. Empirical results on the MuJoCo suite demonstrate a high performance gain for our algorithm compared to prior work, including Soft-Actor Critic (SAC). ""","""The reviewers generally expressed considerable reservation about the novelty of the proposed method. After reading the reviews in detail and looking at the paper, I'm inclined to agree that the contribution is rather incremental. Using normalizing flows for representing policies in RL has been studied in a number of prior works, including with soft actor-critic, and I think the novelty in this work is limited in relation to prior work. Therefore, I cannot recommend acceptance at this time."""
1505,"""Confidence-Calibrated Adversarial Training: Towards Robust Models Generalizing Beyond the Attack Used During Training""","['Adversarial Training', 'Adversarial Examples', 'Adversarial Robustness', 'Confidence Calibration']","""Adversarial training is the standard to train models robust against adversarial examples. However, especially for complex datasets, adversarial training incurs a significant loss in accuracy and is known to generalize poorly to stronger attacks, e.g., larger perturbations or other threat models. In this paper, we introduce confidence-calibrated adversarial training (CCAT) where the key idea is to enforce that the confidence on adversarial examples decays with their distance to the attacked examples. We show that CCAT preserves better the accuracy of normal training while robustness against adversarial examples is achieved via confidence thresholding. Most importantly, in strong contrast to adversarial training, the robustness of CCAT generalizes to larger perturbations and other threat models, not encountered during training. We also discuss our extensive work to design strong adaptive attacks against CCAT and standard adversarial training which is of independent interest. We present experimental results on MNIST, SVHN and Cifar10.""","""This paper proposes a confidence-calibrated adversarial training (CCAT). The key idea is to enforce that the confidence on adversarial examples decays with their distance to the attacked examples. The authors show that CCAT can achieve better natural accuracy and robustness. After the author response and reviewer discussion, all the reviewers still think more work (e.g., improving the motivation to better position this work, conducting a fair comparison with adversarial training which does not have adversarial example detection component) needs to be done to make it a strong case. Therefore, I recommend reject."""
1506,"""Knowledge Transfer via Student-Teacher Collaboration""","['Network Compression and Acceleration', 'Knowledge Transfer', 'Student-Teacher Collaboration', 'Deep Learning.']","""Accompanying with the flourish development in various fields, deep neural networks, however, are still facing with the plight of high computational costs and storage. One way to compress these heavy models is knowledge transfer (KT), in which a light student network is trained through absorbing the knowledge from a powerful teacher network. In this paper, we propose a novel knowledge transfer method which employs a Student-Teacher Collaboration (STC) network during the knowledge transfer process. This is done by connecting the front part of the student network to the back part of the teacher network as the STC network. The back part of the teacher network takes the intermediate representation from the front part of the student network as input to make the prediction. The difference between the prediction from the collaboration network and the output tensor from the teacher network is taken into account of the loss during the train process. Through back propagation, the teacher network provides guidance to the student network in a gradient signal manner. In this way, our method takes advantage of the knowledge from the entire teacher network, who instructs the student network in learning process. Through plentiful experiments, it is proved that our STC method outperforms other KT methods with conventional strategy.""","""This paper has been assessed by three reviewers scoring it as follows: 6, 3, 8. The submission however attracted some criticism post-rebuttal from the reviewers e.g., why concatenating teacher to student is better than the use l2 loss or how the choice of transf. layers has been made (ad-hoc). Similarly, other major criticism includes lack of proper referencing to parts of work that have been in fact developed earlier in preceding papers. On balance, this paper falls short of the expectations of ICLR 2020, thus it cannot be accepted at this time. The authors are encouraged to work through major comments and resolve them for a future submission."""
1507,"""Deep geometric matrix completion: Are we doing it right?""","['Geometric Matrix Completion', 'Spectral Graph Theory', 'Functional Maps', 'Deep Linear Networks']","""We address the problem of reconstructing a matrix from a subset of its entries. Current methods, branded as geometric matrix completion, augment classical rank regularization techniques by incorporating geometric information into the solution. This information is usually provided as graphs encoding relations between rows/columns. In this work we propose a simple spectral approach for solving the matrix completion problem, via the framework of functional maps. We introduce the zoomout loss, a multiresolution spectral geometric loss inspired by recent advances in shape correspondence, whose minimization leads to state-of-the-art results on various recommender systems datasets. Surprisingly, for some datasets we were able to achieve comparable results even without incorporating geometric information. This puts into question both the quality of such information and current methods' ability to use it in a meaningful and efficient way.""","""This paper proposes a multiresolution spectral geometric loss called the zoomout loss to help with matrix completion, and show state-of-the-art results on several recommendation benchmarks, although experiments also show that the result improvements are not always dependent upon the geometric loss itself. Reviewers find the idea interesting and the results promising but also have important concerns about the experiments not establishing how the approach truly works. Authors have clarified their explanations in the revisions and provided requested experiments (e.g., on the importance of the initialization size), however important reservations re. why the approach works are still not sufficiently addressed, and would require more iterations to fulfill the potential of this paper. Therefore, we recommend rejection."""
1508,"""Path Space for Recurrent Neural Networks with ReLU Activations""","['optimization', 'neural network', 'positively scale-invariant', 'path space', 'deep learning', 'RNN']","""It is well known that neural networks with rectified linear units (ReLU) activation functions are positively scale-invariant (i.e., the neural network is invariant to positive rescaling of weights). Optimization algorithms like stochastic gradient descent that optimize the neural networks in the vector space of weights, which are not positively scale-invariant. To solve this mismatch, a new parameter space called path space has been proposed for feedforward and convolutional neural networks. The path space is positively scale-invariant and optimization algorithms operating in path space have been shown to be superior than that in the original weight space. However, the theory of path space and the corresponding optimization algorithm cannot be naturally extended to more complex neural networks, like Recurrent Neural Networks(RNN) due to the recurrent structure and the parameter sharing scheme over time. In this work, we aim to construct path space for RNN with ReLU activations so that we can employ optimization algorithms in path space. To achieve the goal, we propose leveraging the reduction graph of RNN which removes the influence of time-steps, and prove that all the values of whose paths can serve as a sufficient representation of the RNN with ReLU activations. We then prove that the path space for RNN is composed by the basis paths in reduction graph, and design a \emph{Skeleton Method} to identify the basis paths efficiently. With the identified basis paths, we develop the optimization algorithm in path space for RNN models. Our experiments on several benchmark datasets show that we can obtain significantly more effective RNN models in this way than using optimization methods in the weight space. ""","""The scores of the reviewers are just far to low to warrant an acceptance recommendation from the AC."""
1509,"""Understanding Isomorphism Bias in Graph Data Sets ""","['graph classification', 'data sets', 'graph representation learning']","""In recent years there has been a rapid increase in classification methods on graph structured data. Both in graph kernels and graph neural networks, one of the implicit assumptions of successful state-of-the-art models was that incorporating graph isomorphism features into the architecture leads to better empirical performance. However, as we discover in this work, commonly used data sets for graph classification have repeating instances which cause the problem of isomorphism bias, i.e. artificially increasing the accuracy of the models by memorizing target information from the training set. This prevents fair competition of the algorithms and raises a question of the validity of the obtained results. We analyze 54 data sets, previously extensively used for graph-related tasks, on the existence of isomorphism bias, give a set of recommendations to machine learning practitioners to properly set up their models, and open source new data sets for the future experiments. ""","""Thanks to reviewers and authors for an interesting discussion. It seems the central question is whether learning to identify correct bijections should be part of graph classification problems, or whether this leads to bias and overfitting. Reviews are generally negative, putting this in the lower third of the submissions. The paper, however, inspired an interesting discussion, and I would encourage the authors to continue this line of work, addressing the question of bias and overfitting more directly, possibly going beyond dataset evaluation and, for example, thinking about how to evaluate whether training on non-isomorphic graphs leads to better off-training set generalization."""
1510,"""Pay Attention to Features, Transfer Learn Faster CNNs""","['transfer learning', 'pruning', 'faster CNNs']","""Deep convolutional neural networks are now widely deployed in vision applications, but a limited size of training data can restrict their task performance. Transfer learning offers the chance for CNNs to learn with limited data samples by transferring knowledge from models pretrained on large datasets. Blindly transferring all learned features from the source dataset, however, brings unnecessary computation to CNNs on the target task. In this paper, we propose attentive feature distillation and selection (AFDS), which not only adjusts the strength of transfer learning regularization but also dynamically determines the important features to transfer. By deploying AFDS on ResNet-101, we achieved a state-of-the-art computation reduction at the same accuracy budget, outperforming all existing transfer learning methods. With a 10x MACs reduction budget, a ResNet-101 equipped with AFDS transfer learned from ImageNet to Stanford Dogs 120, can achieve an accuracy 11.07% higher than its best competitor.""","""This paper presents an attention-based approach to transfer faster CNNs, which tackles the problem of jointly transferring source knowledge and pruning target CNNs. Reviewers are unanimously positive on the paper, in terms of a well-written paper with a reasonable approach that yields strong empirical performance under the resource constraint. AC feels that the paper studies an important problem of making transfer learning faster for CNNs, however, the proposed model is a relatively straightforward combination of fine-tuning and filter-pruning, each having very extensive prior works. Also, AC has very critical comments for improving this paper: - The Attentive Feature Distillation (AFD) module is very similar to DELTA (Li et al. ICLR 2019) and L2T (Jang et al. ICML 2019), significantly weakening the novelty. The empirical evaluation should consider DELTA as baselines, e.g. AFS+DELTA. I accept this paper, assuming that all comments will be well addressed in the revision."""
1511,"""On Evaluating Explainability Algorithms""","['interpretability', 'Deep Learning']","""A plethora of methods attempting to explain predictions of black-box models have been proposed by the Explainable Artificial Intelligence (XAI) community. Yet, measuring the quality of the generated explanations is largely unexplored, making quantitative comparisons non-trivial. In this work, we propose a suite of multifaceted metrics that enables us to objectively compare explainers based on the correctness, consistency, as well as the confidence of the generated explanations. These metrics are computationally inexpensive, do not require model-retraining and can be used across different data modalities. We evaluate them on common explainers such as Grad-CAM, SmoothGrad, LIME and Integrated Gradients. Our experiments show that the proposed metrics reflect qualitative observations reported in earlier works.""","""The paper proposes metrics for comparing explainability metrics. Both reviewers and authors have engaged in a thorough discussion of the paper and feedback. The reviewers, although appreciating aspects of the paper, all see major issues with the paper. All reviewers recommend reject. """
1512,"""A FRAMEWORK FOR ROBUSTNESS CERTIFICATION OF SMOOTHED CLASSIFIERS USING F-DIVERGENCES""","['verification of machine learning', 'certified robustness of neural networks']","""Formal verification techniques that compute provable guarantees on properties of machine learning models, like robustness to norm-bounded adversarial perturbations, have yielded impressive results. Although most techniques developed so far require knowledge of the architecture of the machine learning model and remain hard to scale to complex prediction pipelines, the method of randomized smoothing has been shown to overcome many of these obstacles. By requiring only black-box access to the underlying model, randomized smoothing scales to large architectures and is agnostic to the internals of the network. However, past work on randomized smoothing has focused on restricted classes of smoothing measures or perturbations (like Gaussian or discrete) and has only been able to prove robustness with respect to simple norm bounds. In this paper we introduce a general framework for proving robustness properties of smoothed machine learning models in the black-box setting. Specifically, we extend randomized smoothing procedures to handle arbitrary smoothing measures and prove robustness of the smoothed classifier by using f-divergences. Our methodology improves upon the state of the art in terms of computation time or certified robustness on several image classification tasks and an audio classification task, with respect to several classes of adversarial perturbations. ""","""This submission proposes a black-box method for certifying the robustness of smoothed classifiers in the presence of adversarial perturbations. This work goes beyond previous works in certifying robustness for arbitrary smoothing measures. Strengths: -Sound formulation and theoretical justification to tackle an important problem. Weaknesses -Experimental comparison was at times not fair. -The presentation and writing could be improved. These two weaknesses were sufficiently addressed during the discussion. All reviewers recommend acceptance."""
1513,"""New Loss Functions for Fast Maximum Inner Product Search""",[],"""Quantization based methods are popular for solving large scale maximum inner product search problems. However, in most traditional quantization works, the objective is to minimize the reconstruction error for datapoints to be searched. In this work, we focus directly on minimizing error in inner product approximation and derive a new class of quantization loss functions. One key aspect of the new loss functions is that we weight the error term based on the value of the inner product, giving more importance to pairs of queries and datapoints whose inner products are high. We provide theoretical grounding to the new quantization loss function, which is simple, intuitive and able to work with a variety of quantization techniques, including binary quantization and product quantization. We conduct experiments on public benchmarking datasets \url{pseudo-url} to demonstrate that our method using the new objective outperforms other state-of-the-art methods. We are committed to release our source code.""","""While there was some support for the ideas presented, the majority of reviewers felt that this submission is not ready for publication at ICLR in its present form. Concerns were raised as to the generality of the approach, thoroughness of experiments, and clarity of the exposition."""
1514,"""Discriminative Variational Autoencoder for Continual Learning with Generative Replay""","['Continual learning', 'Generative replay', 'Variational Autoencoder']","""Generative replay (GR) is a method to alleviate catastrophic forgetting in continual learning (CL) by generating previous task data and learning them together with the data from new tasks. In this paper, we propose discriminative variational autoencoder (DiVA) to address the GR-based CL problem. DiVA has class-wise discriminative latent embeddings by maximizing the mutual information between classes and latent variables of VAE. Thus, DiVA is directly applicable to classification and class-conditional generation which are efficient and effective properties in the GR-based CL scenario. Furthermore, we use a novel trick based on domain translation to cover natural images which is challenging to GR-based methods. As a result, DiVA achieved the competitive or higher accuracy compared to state-of-the-art algorithms in Permuted MNIST, Split MNIST, and Split CIFAR10 settings.""","""The paper presents a method for continual learning with a variant of VAE. The proposed approach is reasonable but technical contribution is quite incremental. The experimental results are limited to comparisons among methods with generative replay, and experimental results on more complex datasets (e.g., CIFAR 100, CUB, ImageNet) are missing. Overall, the contribution of the work in the current form seems insufficient for acceptance at ICLR."""
1515,"""Unsupervised Meta-Learning for Reinforcement Learning""","['Meta-Learning', 'Reinforcement Learning']","""Meta-learning algorithms learn to acquire new tasks more quickly from past experience. In the context of reinforcement learning, meta-learning algorithms can acquire reinforcement learning procedures to solve new problems more efficiently by utilizing experience from prior tasks. The performance of meta-learning algorithms depends on the tasks available for meta-training: in the same way that supervised learning generalizes best to test points drawn from the same distribution as the training points, meta-learning methods generalize best to tasks from the same distribution as the meta-training tasks. In effect, meta-reinforcement learning offloads the design burden from algorithm design to task design. If we can automate the process of task design as well, we can devise a meta-learning algorithm that is truly automated. In this work, we take a step in this direction, proposing a family of unsupervised meta-learning algorithms for reinforcement learning. We motivate and describe a general recipe for unsupervised meta-reinforcement learning, and present an instantiation of this approach. Our conceptual and theoretical contributions consist of formulating the unsupervised meta-reinforcement learning problem and describing how task proposals based on mutual information can in principle be used to train optimal meta-learners. Our experimental results indicate that unsupervised meta-reinforcement learning effectively acquires accelerated reinforcement learning procedures without the need for manual task design and significantly exceeds the performance of learning from scratch.""","""The paper discusses the relevant topic of unsupervised meta-learning in an RL setting. The topic is an interesting one, but the writing and motivation could be much clearer. I advise the authors to make a few more iterations on the paper taking into account the reviewers' comments and then resubmit to a different venue."""
1516,"""Explaining Time Series by Counterfactuals""","['explainability', 'counterfactual modeling', 'time series']","""We propose a method to automatically compute the importance of features at every observation in time series, by simulating counterfactual trajectories given previous observations. We define the importance of each observation as the change in the model output caused by replacing the observation with a generated one. Our method can be applied to arbitrarily complex time series models. We compare the generated feature importance to existing methods like sensitivity analyses, feature occlusion, and other explanation baselines to show that our approach generates more precise explanations and is less sensitive to noise in the input signals.""","""The paper proposes a definition of and an algorithm for computing the importance of features in time series classification / regression. The importance is defined as a finite difference version of standard sensitivity analysis, where the distribution over finite perturbations is given by a learned time series model. The approach is tested on simulated and real-world data sets. The reviewers note a lack of novelty in the paper and deem the contribution somewhat incremental, although exposition and experiments have improved compared to previous versions of the manuscript. I recommend to reject this paper in its current form, taking into account on the reviews and my own reading, mostly due t a lack of novelty. Furthermore, the authors call their method a ""counterfactual"" approach. I don't agree with this terminology. No attempt is made to justify is by linking it to the relevant causal literature on counterfactuals. The authors do indeed motivate their algorithm by considering how the classifier output would change ""had an observation been different"" (a counterfactual), but mathematical in their model this the same as asking ""what changes if the observation is different"" (interventional query). """
1517,"""Difference-Seeking Generative Adversarial Network--Unseen Sample Generation""","['generative adversarial network', 'semi-supervised learning', 'novelty detection']",""" Unseen data, which are not samples from the distribution of training data and are difficult to collect, have exhibited importance in numerous applications, ({\em e.g.,} novelty detection, semi-supervised learning, and adversarial training). In this paper, we introduce a general framework called \textbf{d}ifference-\textbf{s}eeking \textbf{g}enerative \textbf{a}dversarial \textbf{n}etwork (DSGAN), to generate various types of unseen data. Its novelty is the consideration of the probability density of the unseen data distribution as the difference between two distributions pseudo-formula and pseudo-formula whose samples are relatively easy to collect. The DSGAN can learn the target distribution, pseudo-formula , (or the unseen data distribution) from only the samples from the two distributions, pseudo-formula and pseudo-formula . In our scenario, pseudo-formula is the distribution of the seen data, and pseudo-formula can be obtained from pseudo-formula via simple operations, so that we only need the samples of pseudo-formula during the training. Two key applications, semi-supervised learning and novelty detection, are taken as case studies to illustrate that the DSGAN enables the production of various unseen data. We also provide theoretical analyses about the convergence of the DSGAN. ""","""The authors propose a way to generate unseen examples in GANs by learning the difference of two distributions for which we have access. The majority of reviewers agree on the originality and practicality of the idea."""
1518,"""Deep Expectation-Maximization in Hidden Markov Models via Simultaneous Perturbation Stochastic Approximation""","['recommender system', 'gradient approximation', 'Hidden Markov Model']","""We propose a novel method to estimate the parameters of a collection of Hidden Markov Models (HMM), each of which corresponds to a set of known features. The observation sequence of an individual HMM is noisy and/or insufficient, making parameter estimation solely based on its corresponding observation sequence a challenging problem. The key idea is to combine the classical Expectation-Maximization (EM) algorithm with a neural network, while these two are jointly trained in an end-to-end fashion, mapping the HMM features to its parameters and effectively fusing the information across different HMMs. In order to address the numerical difficulty in computing the gradient of the EM iteration, simultaneous perturbation stochastic approximation (SPSA) is employed to approximate the gradient. We also provide a rigorous proof that the approximated gradient due to SPSA converges to the true gradient almost surely. The efficacy of the proposed method is demonstrated on synthetic data as well as a real-world e-Commerce dataset. ""","""The authors propose to use numerical differentiation to approximate the Jacobian while estimating the parameters for a collection of Hidden Markov Models (HMMs). Two reviewers provided detailed and constructive comments, while unanimously rated weak rejection. Reviewer #1 likes the general idea of the work, and consider the contribution to be sound. However, he concerns the reproducibility of the work due to the niche database from e-commerce applications. Reviewer #2 concerns the poor presentation, especially section 3. The authors respond to Reviewers concerns but did not change the rating. The ACs concur the concerns and the paper can not be accepted at its current state."""
1519,"""Adversarial Inductive Transfer Learning with input and output space adaptation""","['Inductive transfer learning', 'adversarial learning', 'multi-task learning', 'pharmacogenomics', 'precision oncology']","""We propose Adversarial Inductive Transfer Learning (AITL), a method for addressing discrepancies in input and output spaces between source and target domains. AITL utilizes adversarial domain adaptation and multi-task learning to address these discrepancies. Our motivating application is pharmacogenomics where the goal is to predict drug response in patients using their genomic information. The challenge is that clinical data (i.e. patients) with drug response outcome is very limited, creating a need for transfer learning to bridge the gap between large pre-clinical pharmacogenomics datasets (e.g. cancer cell lines) and clinical datasets. Discrepancies exist between 1) the genomic data of pre-clinical and clinical datasets (the input space), and 2) the different measures of the drug response (the output space). To the best of our knowledge, AITL is the first adversarial inductive transfer learning method to address both input and output discrepancies. Experimental results indicate that AITL outperforms state-of-the-art pharmacogenomics and transfer learning baselines and may guide precision oncology more accurately.""","""The paper proposes an adversarial inductive transfer learning method that handles distribution changes in both input and output spaces. While the studied problem is interesting, reviewers have major concerns about the incremental modeling contribution, the lack of comparative study to existing methods and ablation study to disentangling different modules. Overall, the current study is less convincing from either theoretical analysis or experimental results. Hence I recommend rejection."""
1520,"""PowerSGD: Powered Stochastic Gradient Descent Methods for Accelerated Non-Convex Optimization""","['stochastic gradient descent', 'non-convex optimization', 'powerball function', 'acceleration']","""In this paper, we propose a novel technique for improving the stochastic gradient descent (SGD) method to train deep networks, which we term \emph{PowerSGD}. The proposed PowerSGD method simply raises the stochastic gradient to a certain power pseudo-formula during iterations and introduces only one additional parameter, namely, the power exponent pseudo-formula (when pseudo-formula , PowerSGD reduces to SGD). We further propose PowerSGD with momentum, which we term \emph{PowerSGDM}, and provide convergence rate analysis on both PowerSGD and PowerSGDM methods. Experiments are conducted on popular deep learning models and benchmark datasets. Empirical results show that the proposed PowerSGD and PowerSGDM obtain faster initial training speed than adaptive gradient methods, comparable generalization ability with SGD, and improved robustness to hyper-parameter selection and vanishing gradients. PowerSGD is essentially a gradient modifier via a nonlinear transformation. As such, it is orthogonal and complementary to other techniques for accelerating gradient-based optimization. ""","""After reading the author's rebuttal, the reviewers still think that this is an incremental work, and the theory and experiments .are inconsistent. The authors are encouraged to consider the the reivewer's comments to improve the paper."""
1521,"""Neural Policy Gradient Methods: Global Optimality and Rates of Convergence""",[],"""Policy gradient methods with actor-critic schemes demonstrate tremendous empirical successes, especially when the actors and critics are parameterized by neural networks. However, it remains less clear whether such ""neural"" policy gradient methods converge to globally optimal policies and whether they even converge at all. We answer both the questions affirmatively in the overparameterized regime. In detail, we prove that neural natural policy gradient converges to a globally optimal policy at a sublinear rate. Also, we show that neural vanilla policy gradient converges sublinearly to a stationary point. Meanwhile, by relating the suboptimality of the stationary points to the~representation power of neural actor and critic classes, we prove the global optimality of all stationary points under mild regularity conditions. Particularly, we show that a key to the global optimality and convergence is the ""compatibility"" between the actor and critic, which is ensured by sharing neural architectures and random initializations across the actor and critic. To the best of our knowledge, our analysis establishes the first global optimality and convergence guarantees for neural policy gradient methods. ""","""The paper makes a solid contribution to understanding the convergence properties of policy gradient methods with over-parameterized neural network function approximators. This work is concurrent with and not subsumed by other strong work by Agarwal et al. on the same topic. There is sufficient novelty in this contribution to merit acceptance. The authors should nevertheless clarify the relationship between their work and the related work noted by AnonReviewer2, in addition to addressing the other comments of the reviewers."""
1522,"""Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks""","['deep learning theory', 'non-convex optimization', 'orthogonal initialization']","""The selection of initial parameter values for gradient-based optimization of deep neural networks is one of the most impactful hyperparameter choices in deep learning systems, affecting both convergence times and model performance. Yet despite significant empirical and theoretical analysis, relatively little has been proved about the concrete effects of different initialization schemes. In this work, we analyze the effect of initialization in deep linear networks, and provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights. We show that for deep networks, the width needed for efficient convergence to a global minimum with orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence with Gaussian initializations scales linearly in the depth. Our results demonstrate how the benefits of a good initialization can persist throughout learning, suggesting an explanation for the recent empirical successes found by initializing very deep non-linear networks according to the principle of dynamical isometry.""","""The paper shows that initializing the parameters of a deep linear network from the orthogonal group speeds up learning, whereas sampling the parameters from a Gaussian may be harmful. The result of this paper can be interesting to the deep learning community. The main concern the reviewers raised is the huge overlap with the paper by Du & Hu (2019). It would have been nice to actually see whether the results for linear networks empirically also hold for nonlinear networks. """
1523,"""Meta-Q-Learning""","['meta reinforcement learning', 'propensity estimation', 'off-policy']","""This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL). MQL builds upon three simple ideas. First, we show that Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. Second, a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. Third, past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates. MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation. Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with the state of the art in meta-RL.""","""This papers contribution is twofold: 1) it proposes a new meta-RL method that leverages off-policy meta-learning by importance weighting, and 2) it demonstrates that current popular meta-RL benchmarks dont necessarily require meta-learning, as a simple non-meta-learning algorithm (TD3) conditioned on a context variable of the trajectory is competitive with SoTA meta-learning approaches. The reviewers all agreed that the approach is interesting and the contributions are significant. Id like to thank the reviewers for engaging in a spirited discussion about this paper, both with each other and with the authors. There was also a disagreement about the semantics of whether the approach can be classified as meta-learning, but in my opinion this argument is orthogonal to the practical contributions. After the revisions and rebuttal, reviewers agreed that the paper was improved and increased their ratings as a result, with all recommending accept. Theres a good chance this work will make an impactful contribution to the field of meta-reinforcement learning and therefore I recommend it for an oral presentation. """
1524,"""Equivariant Entity-Relationship Networks""","['deep learning', 'relational model', 'knowledge graph', 'exchangeability', 'equivariance']","""Due to its extensive use in databases, the relational model is ubiquitous in representing big-data. However, recent progress in deep learning with relational data has been focused on (knowledge) graphs. In this paper we propose Equivariant Entity-Relationship Networks, the class of parameter-sharing neural networks derived from the entity-relationship model. We prove that our proposed feed-forward layer is the most expressive linear layer under the given equivariance constraints, and subsumes recently introduced equivariant models for sets, exchangeable tensors, and graphs. The proposed feed-forward layer has linear complexity in the the data and can be used for both inductive and transductive reasoning about relational databases, including database embedding, and the prediction of missing records. This, provides a principled theoretical foundation for the application of deep learning to one of the most abundant forms of data.""","""This paper defines a parameter-tying scheme for a general feed-forward network with the equivalence properties of relational data. Most reviewers raised a few concerns around the experiments, baselines, datasets used and motivation. A few pointed out that the paper is hard to read - for a person without heavy database theory literature, which includes most of ICLR readers. While this paper may read well for the folks in the domain, authors should consider revising the paper to be more inclusive so that it can be read more widely. The motivation of the problem was also another point that many reviewers have mentioned (perhaps related to the language issues above) that some noted that you may not always want equivariance in relational DB and other noted that the paper did not sufficiently demonstrate the advantage of the proposed methods. Reviewers also univocally commented on experiments - many voiced the lack of baselines (not even any simple one). Authors wrote back to defend that there is no similar method and even simple tensor factorization isnt applicable. That makes me wonder - is there really no single simple method you can compare with? If nobody had solution for this problem, is this a problem worth solving? Reviewers also encouraged to use larger (beyond Kaggle dataset) real-world datasets to strengthen the paper. All the points raised by reviewers suggests that this paper can benefit from another round of nontrivial editing before its ready for the show. """
1525,"""Gap-Aware Mitigation of Gradient Staleness""","['distributed', 'asynchronous', 'large scale', 'gradient staleness', 'staleness penalization', 'sgd', 'deep learning', 'neural networks', 'optimization']","""Cloud computing is becoming increasingly popular as a platform for distributed training of deep neural networks. Synchronous stochastic gradient descent (SSGD) suffers from substantial slowdowns due to stragglers if the environment is non-dedicated, as is common in cloud computing. Asynchronous SGD (ASGD) methods are immune to these slowdowns but are scarcely used due to gradient staleness, which encumbers the convergence process. Recent techniques have had limited success mitigating the gradient staleness when scaling up to many workers (computing nodes). In this paper we define the Gap as a measure of gradient staleness and propose Gap-Aware (GA), a novel asynchronous-distributed method that penalizes stale gradients linearly to the Gap and performs well even when scaling to large numbers of workers. Our evaluation on the CIFAR, ImageNet, and WikiText-103 datasets shows that GA outperforms the currently acceptable gradient penalization method, in final test accuracy. We also provide convergence rate proof for GA. Despite prior beliefs, we show that if GA is applied, momentum becomes beneficial in asynchronous environments, even when the number of workers scales up.""","""The authors propose a novel approach for measuring gradient staleness and use this measure to penalize stale gradients in an asynchronous stochastic gradient set up. Following previous work, they provide a convergence proof for their approach. Most importantly, they provide extensive evaluations comparing against previous approaches and show impressive gains over previous work. After the author response, the primary concerns from reviewers is regarding the gap between the proposed method and single worker SGD/synchronous SGD. I feel that the authors have made compelling arguments that ASGD is an important optimization paradigm to consider, so their improvements in narrowing the gap are of interest to the community. There were some concerns about the novelty of the theory, and my impression is that theorem is straightforward to prove based on assumptions and previous work, however, I view the main contribution of the paper as empirical. This paper is borderline, but I think the impressive empirical results over existing work on ASGD is a worthwhile contribution and others will find it interesting, so I am recommending acceptance."""
1526,"""Mesh-Free Unsupervised Learning-Based PDE Solver of Forward and Inverse problems""","['PDEs', 'forward problems', 'inverse problems', 'unsupervised learning', 'deep networks', 'EIT']","""We introduce a novel neural network-based partial differential equations solver for forward and inverse problems. The solver is grid free, mesh free and shape free, and the solution is approximated by a neural network. We employ an unsupervised approach such that the input to the network is a points set in an arbitrary domain, and the output is the set of the corresponding function values. The network is trained to minimize deviations of the learned function from the PDE solution and satisfy the boundary conditions. The resulting solution in turn is an explicit smooth differentiable function with a known analytical form. Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order. This framework therefore, enables the solution of high order non-linear PDEs. The proposed algorithm is a unified formulation of both forward and inverse problems where the optimized loss function consists of few elements: fidelity terms of L2 and L infinity norms, boundary conditions constraints and additional regularizers. This setting is flexible in the sense that regularizers can be tailored to specific problems. We demonstrate our method on a free shape 2D second order elliptical system with application to Electrical Impedance Tomography (EIT). ""","""This paper proposes modifying the training loss for neural net-based PDE solvers, by adding an L_infty (max) term to the standard L_2 loss. The motivation for this loss is sensible in that it matches the definition of a strong solution, but this is only a heuristic motivation, and is missing a theoretical analysis. This paper's lack of novelty and polish, as well as the lack of clarity in the implementation details, makes this a narrow reject."""
1527,"""Deep RL for Blood Glucose Control: Lessons, Challenges, and Opportunities""","['Deep Reinforcement Learning', 'Diabetes', 'Artificial Pancreas', 'Control']","""Individuals with type 1 diabetes (T1D) lack the ability to produce the insulin their bodies need. As a result, they must continually make decisions about how much insulin to self-administer in order to adequately control their blood glucose levels. Longitudinal data streams captured from wearables, like continuous glucose monitors, can help these individuals manage their health, but currently the majority of the decision burden remains on the user. To relieve this burden, researchers are working on closed-loop solutions that combine a continuous glucose monitor and an insulin pump with a control algorithm in an `artificial pancreas.' Such systems aim to estimate and deliver the appropriate amount of insulin. Here, we develop reinforcement learning (RL) techniques for automated blood glucose control. Through a series of experiments, we compare the performance of different deep RL approaches to non-RL approaches. We highlight the flexibility of RL approaches, demonstrating how they can adapt to new individuals with little additional data. On over 21k hours of simulated data across 30 patients, RL approaches outperform baseline control algorithms (increasing time spent in normal glucose range from 71% to 75%) without requiring meal announcements. Moreover, these approaches are adept at leveraging latent behavioral patterns (increasing time in range from 58% to 70%). This work demonstrates the potential of deep RL for controlling complex physiological systems with minimal expert knowledge. ""","""The reviewers all believe that this paper is not yet ready for publication. All agree that this is an important application, and an interesting approach. The methodological novelty, as well as other parts of exposition, involving related work, or further discussion of what this solution means for patients, is right now not completely convincing to reviewers. My recommendation is to work on making sure the exposition best explains the methodology, and making sure this venue is the best for the submitted line of work."""
1528,"""Variational Information Bottleneck for Unsupervised Clustering: Deep Gaussian Mixture Embedding""","['clustering', 'Variational Information Bottleneck', 'Gaussian Mixture Model']","""In this paper, we develop an unsupervised generative clustering framework that combines variational information bottleneck and the Gaussian Mixture Model. Specifically, in our approach we use the variational information bottleneck method and model the latent space as a mixture of Gaussians. We derive a bound on the cost function of our model that generalizes the evidence lower bound (ELBO); and provide a variational inference type algorithm that allows to compute it. In the algorithm, the coders mappings are parametrized using neural networks and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Numerical results on real datasets are provided to support the efficiency of our method.""","""This paper proposes to use a mixture of Gaussians to variationally encode high-dimensional data through a latent space. The latent codes are constrained using the variational information bottleneck machinery. While the paper is well-motivated and relatively well-written, it contains minimal novel ideas. The consensus in reviews and lack of rebuttal make it clear that this paper should be significantly augmented with novel material before being published to ICLR. """
1529,"""Decoupling Adaptation from Modeling with Meta-Optimizers for Meta Learning""","['meta-learning', 'MAML', 'analysis', 'depth', 'meta-optimizers']","""Meta-learning methods, most notably Model-Agnostic Meta-Learning (Finn et al, 2017) or MAML, have achieved great success in adapting to new tasks quickly, after having been trained on similar tasks. The mechanism behind their success, however, is poorly understood. We begin this work with an experimental analysis of MAML, finding that deep models are crucial for its success, even given sets of simple tasks where a linear model would suffice on any individual task. Furthermore, on image-recognition tasks, we find that the early layers of MAML-trained models learn task-invariant features, while later layers are used for adaptation, providing further evidence that these models require greater capacity than is strictly necessary for their individual tasks. Following our findings, we propose a method which enables better use of model capacity at inference time by separating the adaptation aspect of meta-learning into parameters that are only used for adaptation but are not part of the forward model. We find that our approach enables more effective meta-learning in smaller models, which are suitably sized for the individual tasks. ""","""This paper presents a number of experiments involving the Model-Agnostic Meta-Learning (MAML) framework, both for the purpose of understanding its behavior and motivating specific enhancements. With respect to the former, the paper argues that deeper networks allow earlier layers to learn generic modeling features that can be adapted via later layers in a task-specific way. The paper then suggests that this implicit decomposition can be explicitly formulated via the use of meta-optimizers for handling adaptations, allowing for simpler networks that may not require generic modeling-specific layers. At the end of the rebuttal and discussion phases, two reviewers chose rejection while one preferred acceptance. In this regard, as AC I did not find clear evidence that warranted overriding the reviewer majority, and consistent with some of the evaluations, I believe that there are several points whereby this paper could be improved. More specifically, my feeling is that some of the conclusions of this paper would either already be expected by members of the community, or else would require further empirical support to draw more firm conclusions. For example, the fact that earlier layers encode more generic features that are not adapted for each task is not at all surprising (such low-level features are natural to be shared). Moreover, when the linear model from Section 3.2 is replaced by a deep linear network, clearly the model capacity is not changed, but the effective number of parameters which determine the gradient update will be significantly expanded in a seemingly non-trivial way. This is then likely to be of some benefit. Consequently, one could naturally view the extra parameters as forming an implicit meta-optimizer, and it is not so remarkable that other trainable meta-optimizers might work well. Indeed cited references such as (Park & Oliva, 2019) have already applied explicit meta-optimizers to MAML and few-shot learning tasks. And based on Table 2, the proposed factorized meta-optimizer does not appear to show any clear advantage over the meta-curvature method from (Park & Oliva, 2019). 
Overall, either by using deeper networks or an explicit trainable meta-optimizer, there are going to be more adaptable parameters to exploit and so the expectation is that there will be room for improvement. Even so, I am not against the message of this paper. Rather it is just that for an empirically-based submission with close ties to existing work, the bar is generally a bit higher in terms of the quality and scope of the experiments. As a final (lesser) point, the paper argues that meta-optimizers allow for the decomposition of modeling and adaptation as mentioned above; however, I did not see exactly where this claim was precisely corroborated empirically. For example, one useful test could be to recreate Figure 2 but with the meta-optimizer in place and a shallower network architecture. The expectation then might be that general features are no longer necessary."""
1530,"""Unsupervised Spatiotemporal Data Inpainting""","['Deep Learning', 'Adversarial', 'MAP', 'GAN', 'neural networks', 'video']","""We tackle the problem of inpainting occluded area in spatiotemporal sequences, such as cloud occluded satellite observations, in an unsupervised manner. We place ourselves in the setting where there is neither access to paired nor unpaired training data. We consider several cases in which the underlying information of the observed sequence in certain areas is lost through an observation operator. In this case, the only available information is provided by the observation of the sequence, the nature of the measurement process and its associated statistics. We propose an unsupervised-learning framework to retrieve the most probable sequence using a generative adversarial network. We demonstrate the capacity of our model to exhibit strong reconstruction capacity on several video datasets such as satellite sequences or natural videos. ""","""This paper studies the problem of unsupervised inpainting occluded areas in spatiotemporal sequences and propose a GAN-based framework which is able to complete the occluded areas given the stochastic model of the occlusion process. The reviewers agree that the problem is interesting, the paper is well written, and that the proposed approach is reasonable. However, after the discussion phase the critical point raised by AnonReviewer1 remains: in principle, when applying different corruptions in each step, the model is able to see the entire video over the duration of the training. This coupled with the strong assumptions on the mask distribution makes it questionable whether the approach should be considered unsupervised. Given that the results of the supervised methods significantly outperform the unsupervised ones, this issue needs to be carefully addressed to provide a clear and convincing selling point. Hence, I will recommend rejection and encourage the authors to address the remaining issues (the answers in the rebuttal are a good starting point)."""
1531,"""Improved Modeling of Complex Systems Using Hybrid Physics/Machine Learning/Stochastic Models""","['Composition', 'extrapolation', 'boosting', 'autocorrelation', 'systematic errors']","""Combining domain knowledge models with neural models has been challenging. End-to-end trained neural models often perform better (lower Mean Square Error) than domain knowledge models or domain/neural combinations, and the combination is inefficient to train. In this paper, we demonstrate that by composing domain models with machine learning models, by using extrapolative testing sets, and invoking decorrelation objective functions, we create models which can predict more complex systems. The models are interpretable, extrapolative, data-efficient, and capture predictable but complex non-stochastic behavior such as unmodeled degrees of freedom and systemic measurement noise. We apply this improved modeling paradigm to several simulated systems and an actual physical system in the context of system identification. Several ways of composing domain models with neural models are examined for time series, boosting, bagging, and auto-encoding on various systems of varying complexity and non-linearity. Although this work is preliminary, we show that the ability to combine models is a very promising direction for neural modeling.""","""All reviewers agree that the paper is to be rejected, provided strong claims that were not answered. In this form (especially with such a title) it could not be published (it is more of a technical/engineering interest)."""
1532,"""Multichannel Generative Language Models""","['text generation', 'generative language models', 'natural language processing']","""A channel corresponds to a viewpoint or transformation of an underlying meaning. A pair of parallel sentences in English and French express the same underlying meaning but through two separate channels corresponding to their languages. In this work, we present Multichannel Generative Language Models (MGLM), which models the joint distribution over multiple channels, and all its decompositions using a single neural network. MGLM can be trained by feeding it k way parallel-data, bilingual data, or monolingual data across pre-determined channels. MGLM is capable of both conditional generation and unconditional sampling. For conditional generation, the model is given a fully observed channel, and generates the k-1 channels in parallel. In the case of machine translation, this is akin to giving it one source, and the model generates k-1 targets. MGLM can also do partial conditional sampling, where the channels are seeded with prespecified words, and the model is asked to infill the rest. Finally, we can sample from MGLM unconditionally over all k channels. Our experiments on the Multi30K dataset containing English, French, Czech, and German languages suggest that the multitask training with the joint objective leads to improvements in bilingual translations. We provide a quantitative analysis of the quality-diversity trade-offs for different variants of the multichannel model for conditional generation, and a measurement of self-consistency during unconditional generation. We provide qualitative examples for parallel greedy decoding across languages and sampling from the joint distribution of the 4 languages.""","""This paper presents a multi-view generative model which is applied to multilingual text generation. Although all reviewers find the overall approach is important and some results are interesting, the main concern is about the novelty. At the technical level, the proposed method is the extension of the original two-view KERMIT to multiviews, which I have to say incremental. At a higher level, multi-lingual language generation itself is not a very novel idea, and the contribution of the proposed method should be better positioned comparing to related studies. (for example, Dong et al, ACL 2015 as suggested by R#3). Also, some reviewers pointed out the problems in presentation and unconvincing experimental setup. I support the reviewers opinions and would like to recommend rejection this time. I recommend authors to take in the reviewers comments and polish the work for the next chance. """
1533,"""Global Adversarial Robustness Guarantees for Neural Networks""","['Adversarial Robustness', 'Statistical Guarantees', 'Deep Neural Networks', 'Bayesian Neural Networks']","""We investigate global adversarial robustness guarantees for machine learning models. Specifically, given a trained model we consider the problem of computing the probability that its prediction at any point sampled from the (unknown) input distribution is susceptible to adversarial attacks. Assuming continuity of the model, we prove measurability for a selection of local robustness properties used in the literature. We then show how concentration inequalities can be employed to compute global robustness with estimation error upper-bounded by pseudo-formula , for any > 0$ selected a priori. We utilise the methods to provide statistically sound analysis of the robustness/accuracy trade-off for a variety of neural networks architectures and training methods on MNIST, Fashion-MNIST and CIFAR. We empirically observe that robustness and accuracy tend to be negatively correlated for networks trained via stochastic gradient descent and with iterative pruning techniques, while a positive trend is observed between them in Bayesian settings.""","""The authors propose a framework for estimating ""global robustness"" of a neural network, defined as the expected value of ""local robustness"" (robustness to small perturbations) over the data distribution. The authors prove the the local robustness metric is measurable and that under this condition, derive a statistically efficient estimator. The authors use gradient based attacks to approximate local robustness in practice and report extensive experimental results across several datasets. While the paper does make some interesting contributions, the reviewers were concerned about the following issues: 1) The measurability result, while technically important, is not surprising and does not add much insight algorithmically or statistically into the problem at hand. Outside of this, the paper does not make any significant technical contributions. 2) The paper is poorly organized and does not clearly articulate the main contributions and significance of these relative to prior work. 3) The fact that the local robustness metric is approximated via gradient based attacks makes the final results void of any guarantees, since there are no guarantees that gradient based attacks compute the worst case adversarial perturbation. This calls into question the main contribution claim of the paper on computing global robustness guarantees. While some of the technical aspects of the reveiwers' concerns were clarified during the discussion phase, this was not sufficient to address the fundamental issues raised above. Hence, I recommend rejection."""
1534,"""Meta-Learning by Hallucinating Useful Examples""","['few-shot learning', 'meta-learning']","""Learning to hallucinate additional examples has recently been shown as a promising direction to address few-shot learning tasks, which aim to learn novel concepts from very few examples. The hallucination process, however, is still far from generating effective samples for learning. In this work, we investigate two important requirements for the hallucinator --- (i) precision: the generated examples should lead to good classifier performance, and (ii) collaboration: both the hallucinator and the classification component need to be trained jointly. By integrating these requirements as novel loss functions into a general meta-learning with hallucination framework, our model-agnostic PrecisE Collaborative hAlluciNator (PECAN) facilitates data hallucination to improve the performance of new classification tasks. Extensive experiments demonstrate state-of-the-art performance on competitive miniImageNet and ImageNet based few-shot benchmarks in various scenarios.""","""This paper describes a new approach to meta-learning with generating new useful examples. The reviewers liked the paper but overall felt that the paper is not ready for publication as it stands. Rejection is recommended. """
1535,"""How many weights are enough : can tensor factorization learn efficient policies ?""","['reinforcement learning', 'Q-learning', 'tensor factorization', 'low-rank approximation', 'data efficiency', 'second-order optimization', 'scattering']","""Deep reinforcement learning requires a heavy price in terms of sample efficiency and overparameterization in the neural networks used for function approximation. In this work, we employ tensor factorization in order to learn more compact representations for reinforcement learning policies. We show empirically that in the low-data regime, it is possible to learn online policies with 2 to 10 times less total coefficients, with little to no loss of performance. We also leverage progress in second order optimization, and use the theory of wavelet scattering to further reduce the number of learned coefficients, by foregoing learning the topmost convolutional layer filters altogether. We evaluate our results on the Atari suite against recent baseline algorithms that represent the state-of-the-art in data efficiency, and get comparable results with an order of magnitude gain in weight parsimony.""","""In this paper dense layers in deep neural networks representing policies are replaced by tensor regression layers, also by a scattering layer, and second-order optimization is considered. The paper does not have a single consistent message, and combines different techniques for unclear reason. Important related work is not cited. The presentation was found unclear by the reviewers. """
1536,"""Hyper-SAGNN: a self-attention based graph neural network for hypergraphs""","['graph neural network', 'hypergraph', 'representation learning']","""Graph representation learning for hypergraphs can be utilized to extract patterns among higher-order interactions that are critically important in many real world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic for various learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms state-of-the-art methods on traditional tasks while also achieving great performance on a new task called outsider identification. We believe that Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications. ""","""This work introduces a new neural network model that can represent hyperedges of variable size, which is experimentally shown to improve or match the state of the art on several problems. Both reviewers were in favor of acceptance given the method's strong performance, and had their concerns resolved by the rebuttals and the discussion. I am therefore recommending acceptance. """
1537,"""DeepEnFM: Deep neural networks with Encoder enhanced Factorization Machine""","['CTR', 'Attention', 'Transformer', 'Encoder']","""Click Through Rate (CTR) prediction is a critical task in industrial applications, especially for online social and commerce applications. It is challenging to find a proper way to automatically discover the effective cross features in CTR tasks. We propose a novel model for CTR tasks, called Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM). Instead of learning the cross features directly, DeepEnFM adopts the Transformer encoder as a backbone to align the feature embeddings with the clues of other fields. The embeddings generated from encoder are beneficial for the further feature interactions. Particularly, DeepEnFM utilizes a bilinear approach to generate different similarity functions with respect to different field pairs. Furthermore, the max-pooling method makes DeepEnFM feasible to capture both the supplementary and suppressing information among different attention heads. Our model is validated on the Criteo and Avazu datasets, and achieves state-of-art performance.""","""The authors address the problem of CTR prediction by using a Transformer based encoder to capture interactions between features. They suggest simple modifications to the basic Multiple Head Self Attention (MSHA) mechanism and show that they get the best performance on two publicly available datasets. While the reviewers agreed that this work is of practical importance, they had a few objections which I have summarised below: 1) Lack of novelty: The reviewers felt that the adoption of MSHA for the CTR task was straightforward. The suggested modifications in the form of Bilinear similarity and max-pooling were viewed as incremental contributions. 2) Lack of comparison with existing work: The reviewers suggested some additional baselines (Deep and Cross) which need to be added (the authors have responded that they will do so later). 3) Need to strengthen experiments: The reviewers appreciated the ablation studies done by the authors but requested for more studies to convincingly demonstrate the effect of some components. One reviewer also pointed that the authors should control form model complexity to ensure an apples-to-apples comparison (I agree that many papers in the past have not done this but going froward I have a hunch that many reviewers will start asking for this) . IMO, the above comments are important and the authors should try to address them in subsequent submissions. Based on the reviewer comments and lack of any response from the authors, I recommend that the paper in it current form cannot be accepted. """
1538,"""Sequence-level Intrinsic Exploration Model for Partially Observable Domains""","['deep learning', 'reinforcement learning']","""Training reinforcement learning policies in partially observable domains with sparse reward signal is an important and open problem for the research community. In this paper, we introduce a new sequence-level intrinsic novelty model to tackle the challenge of training reinforcement learning policies in sparse rewarded partially observable domains. First, we propose a new reasoning paradigm to infer the novelty for the partially observable states, which is built upon forward dynamics prediction. Different from conventional approaches that perform self-prediction or one-step forward prediction, our proposed approach engages open-loop multi-step prediction, which enables the difficulty of novelty prediction to flexibly scale and thus results in high-quality novelty scores. Second, we propose a novel dual-LSTM architecture to facilitate the sequence-level reasoning over the partially observable state space. Our proposed architecture efficiently synthesizes information from an observation sequence and an action sequence to derive meaningful latent representations for inferring the novelty for states. To evaluate the efficiency of our proposed approach, we conduct extensive experiments on several challenging 3D navigation tasks from ViZDoom and DeepMind Lab. We also present results on two hard-exploration domains from Atari 2600 series in Appendix to demonstrate our proposed approach could generalize beyond partially observable navigation tasks. Overall, the experiment results reveal that our proposed intrinsic novelty model could outperform several state-of-the-art curiosity baselines with considerable significance in the testified domains.""","""This paper introduces a new architecture based on intrinsic rewards, to deal with partially observable and sparse reward domains. The reviewers found the novelty of the work not particularly high, and had concerns about the general utility of the method based on the empirical evidence. This paper has numerous issues and could use significant revision in terms of writing, connections to literature, experiment design, and clarity of results. Much of the discussion focused on the scaling parameter. From an algorithmic point of view, the scaling parameter is very problematic. It is domain specific and when tuned per domain resulted in very different values. The ablation study showed that only two settings in one domain led to good performance, whereas the other resulted in no learning (for some reason the other two values were not plotted). There are concerns that the baselines were not completely fair. In many cases different domains were used to compare against RND and ICM, and there appears to be no tuning of these baselines for the new domains---this a problem due to the inherent bias in favor one's own method. In the solaris domain which was used in the RND paper, the results don't appear to match the RND paper, and in vizdoom the performance numbers are difficult to compare for ICM because a different metric is used---even if you don't like their performance numbers at least report them once so we can be confident the baselines are well calibrated. One reviewer pointed out the meta-parameters where different for RND than the published previous, but the paper does not describe what approach was used to tune those parameters and this is not acceptable. 
We cannot have much confidence that these results are reflective of those methods. Finally, there is no comment on how the performance numbers were computed and no description of how the errorbars where computed or what they represent. The paper focuses on partially observable domains, the evidence that this method is effective in closer to Markov settings is unclear. The Atari experiments do not yield significant results by large (solairs looks as if there is no learning occurring at all---a no comment about it in the text to explain). The paper claims evidence the approach can work well in both cases, but it was not even indicated if frame-stacking was used in the Atari experiments. In fact, the result was only alluded too in the conclusion---there was no reference in the main text to a specific result in the appendix. Text is very challenging to read. The language is informal and imprecise, and the paper frequently uses terms incorrectly or in different ways through (e.g., the use of the term novelty throughout) This is clearly an interesting direction. The authors should keep working, but this paper is not ready for publication. I urge the authors to dig deeper in the literature to gain a more nuanced understanding of the topic. Barto et al's excellent paper on the topic is a great place to start: pseudo-url """
1539,"""Attention Forcing for Sequence-to-sequence Model Training""","['deep learning', 'sequence-to-sequence model', 'attention mechanism', 'speech synthesis', 'machine translation']","""Auto-regressive sequence-to-sequence models with attention mechanism have achieved state-of-the-art performance in many tasks such as machine translation and speech synthesis. These models can be difficult to train. The standard approach, teacher forcing, guides a model with reference output history during training. The problem is that the model is unlikely to recover from its mistakes during inference, where the reference output is replaced by generated output. Several approaches deal with this problem, largely by guiding the model with generated output history. To make training stable, these approaches often require a heuristic schedule or an auxiliary classifier. This paper introduces attention forcing, which guides the model with generated output history and reference attention. This approach can train the model to recover from its mistakes, in a stable fashion, without the need for a schedule or a classifier. In addition, it allows the model to generate output sequences aligned with the references, which can be important for cascaded systems like many speech synthesis systems. Experiments on speech synthesis show that attention forcing yields significant performance gain. Experiments on machine translation show that for tasks where various re-orderings of the output are valid, guiding the model with generated output history is challenging, while guiding the model with reference attention is beneficial.""","""The paper proposed an attention-forcing algorithm that guides the sequence-to-sequence model training to make it more stable. But as pointed out by the reviewers, the proposed method requires alignment which is normally unavailable. The solution to address that is using another teacher-forcing model, which can be expensive. The major concern about this paper is the experimental justification is not sufficient: * lack of evaluations of the proposed method on different tasks; * lack of experiments on understanding how it interact with existing techniques such as scheduled sampling etc; * lack of comparisons to related existing supervised attention mechanisms. """
1540,"""SELF-KNOWLEDGE DISTILLATION ADVERSARIAL ATTACK""","['Adversarial Examples', 'Transferability', 'black-box targeted attack', 'Distillation']","""Neural networks show great vulnerability under the threat of adversarial examples. By adding small perturbation to a clean image, neural networks with high classification accuracy can be completely fooled. One intriguing property of the adversarial examples is transferability. This property allows adversarial examples to transfer to networks of unknown structure, which is harmful even to the physical world. The current way of generating adversarial examples is mainly divided into optimization based and gradient based methods. Liu et al. (2017) conjecture that gradient based methods can hardly produce transferable targeted adversarial examples in black-box-attack. However, in this paper, we use a simple technique to improve the transferability and success rate of targeted attacks with gradient based methods. We prove that gradient based methods can also generate transferable adversarial examples in targeted attacks. Specifically, we use knowledge distillation for gradient based methods, and show that the transferability can be improved by effectively utilizing different classes of information. Unlike the usual applications of knowledge distillation, we did not train a student network to generate adversarial examples. We take advantage of the fact that knowledge distillation can soften the target and obtain higher information, and combine the soft target and hard target of the same network as the loss function. Our method is generally applicable to most gradient based attack methods.""","""This paper proposes an attack method to improve the transferability of adversarial examples under black-box attack settings. Despite the simplicity of the proposed idea, reviewers and AC commonly think that the paper is far from being ready to publish in various aspects: (a) the presentation/writing quality, (b) in-depth analysis and (c) experimental results. Hence, I recommend rejection."""
1541,"""Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue""","['dialogue', 'knowledge', 'language', 'conversation']","""Knowledge-grounded dialogue is a task of generating an informative response based on both discourse context and external knowledge. As we focus on better modeling the knowledge selection in the multi-turn knowledge-grounded dialogue, we propose a sequential latent variable model as the first approach to this matter. The model named sequential knowledge transformer (SKT) can keep track of the prior and posterior distribution over knowledge; as a result, it can not only reduce the ambiguity caused from the diversity in knowledge selection of conversation but also better leverage the response information for proper choice of knowledge. Our experimental results show that the proposed model improves the knowledge selection accuracy and subsequently the performance of utterance generation. We achieve the new state-of-the-art performance on Wizard of Wikipedia (Dinan et al., 2019) as one of the most large-scale and challenging benchmarks. We further validate the effectiveness of our model over existing conversation methods in another knowledge-based dialogue Holl-E dataset (Moghe et al., 2018).""","""This paper proposes a sequential latent variable model for the knowledge selection task for knowledge grounded dialogues. Experimental results demonstrate improvements over the previous SOTA in the WoW, knowledge grounded dialogue dataset, through both automated and human evaluation. All reviewers scored the paper highly, but they also made several suggestions for improving the presentation. Authors responded positively to all these suggestions and provided updated results and other stats. The paper will be a good contribution to ICLR."""
1542,"""Scale-Equivariant Neural Networks with Decomposed Convolutional Filters""","['scale-equivariant', 'convolutional neural network', 'deformation robustness']","""Encoding the input scale information explicitly into the representation learned by a convolutional neural network (CNN) is beneficial for many vision tasks especially when dealing with multiscale input signals. We study, in this paper, a scale-equivariant CNN architecture with joint convolutions across the space and the scaling group, which is shown to be both sufficient and necessary to achieve scale-equivariant representations. To reduce the model complexity and computational burden, we decompose the convolutional filters under two pre-fixed separable bases and truncate the expansion to low-frequency components. A further benefit of the truncated filter expansion is the improved deformation robustness of the equivariant representation. Numerical experiments demonstrate that the proposed scale-equivariant neural network with decomposed convolutional filters (ScDCFNet) achieves significantly improved performance in multiscale image classification and better interpretability than regular CNNs at a reduced model size.""","""This paper presents a CNN architecture equivariant to scaling and translation which is realized by the proposed joint convolution across the space and scaling groups. All reviewers find the theorical side of the paper is sound and interesting. Through the discussion based on authors rebuttal, one reviewer decided to update the score to Weak Accept, putting this paper on the borderline. However, some concerns still remain. Some reviewers are still not convinced regarding the novelty of the paper, particularly in terms of the difference from (Chen+,2019). Also, they agree that experiments are still very weak and not convincing enough. Overall, as there was no opinion to champion this paper, Id like to recommend rejection this time. I encourage authors to polish the experimentations taking in the reviewers suggestions. """
1543,"""The Benefits of Over-parameterization at Initialization in Deep ReLU Networks""","['deep relu networks', 'he initialization', 'norm preserving', 'gradient preserving']","""It has been noted in existing literature that over-parameterization in ReLU networks generally improves performance. While there could be several factors involved behind this, we prove some desirable theoretical properties at initialization which may be enjoyed by ReLU networks. Specifically, it is known that He initialization in deep ReLU networks asymptotically preserves variance of activations in the forward pass and variance of gradients in the backward pass for infinitely wide networks, thus preserving the flow of information in both directions. Our paper goes beyond these results and shows novel properties that hold under He initialization: i) the norm of hidden activation of each layer is equal to the norm of the input, and, ii) the norm of weight gradient of each layer is equal to the product of norm of the input vector and the error at output layer. These results are derived using the PAC analysis framework, and hold true for finitely sized datasets such that the width of the ReLU network only needs to be larger than a certain finite lower bound. As we show, this lower bound depends on the depth of the network and the number of samples, and by the virtue of being a lower bound, over-parameterized ReLU networks are endowed with these desirable properties. For the aforementioned hidden activation norm property under He initialization, we further extend our theory and show that this property holds for a finite width network even when the number of data samples is infinite. Thus we overcome several limitations of existing papers, and show new properties of deep ReLU networks at initialization.""","""The article studies benefits of over-parametrization and theoretical properties at initialization in ReLU networks. The reviewers raised concerns about the work being very close to previous works and also about the validity of some assumptions and derivations. Nonetheless, some reviewers mentioned that the analysis might be a starting point in understanding other phenomena and made some suggestions. However, the authors did not provide a rebuttal nor a revision. """
1544,"""Convergence of Gradient Methods on Bilinear Zero-Sum Games""","['GAN', 'gradient algorithm', 'convergence', 'min-max optimization', 'bilinear game']","""Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, while understanding the dynamics of gradient algorithms for solving such formulations has remained a grand challenge. As a first step, we restrict to bilinear zero-sum games and give a systematic analysis of popular gradient updates, for both simultaneous and alternating versions. We provide exact conditions for their convergence and find the optimal parameter setup and convergence rates. In particular, our results offer formal evidence that alternating updates converge ""better"" than simultaneous ones.""","""All reviewers found the work interesting but worried about the extension to non-bilinear games. This is a point the authors should explicitly address in their work before publication."""
1545,"""Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling""","['crowdsourcing', 'domain adaptation', 'sequence labeling', 'named entity recognition', 'weak supervision']","""Sequence labeling is a fundamental framework for various natural language processing problems including part-of-speech tagging and named entity recognition. Its performance is largely influenced by the annotation quality and quantity in supervised learning scenarios. In many cases, ground truth labels are costly and time-consuming to collect or even non-existent, while imperfect ones could be easily accessed or transferred from different domains. A typical example is crowd-sourced datasets which have multiple annotations for each sentence which may be noisy or incomplete. Additionally, predictions from multiple source models in transfer learning can be seen as a case of multi-source supervision. In this paper, we propose a novel framework named Consensus Network (CONNET) to conduct training with imperfect annotations from multiple sources. It learns the representation for every weak supervision source and dynamically aggregates them by a context-aware attention mechanism. Finally, it leads to a model reflecting the consensus among multiple sources. We evaluate the proposed framework in two practical settings of multi-source learning: learning with crowd annotations and unsupervised cross-domain model adaptation. Extensive experimental results show that our model achieves significant improvements over existing methods in both settings.""","""While the revised paper was better and improved the reviewers assessment of the work, the paper is just below the threshold for acceptance. The authors are strongly encouraged to continue this work."""
1546,"""Continual learning with hypernetworks""","['Continual Learning', 'Catastrophic Forgetting', 'Meta Model', 'Hypernetwork']","""Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks. To overcome this problem, we present a novel approach based on task-conditioned hypernetworks, i.e., networks that generate the weights of a target model based on task identity. Continual learning (CL) is less difficult for this class of models thanks to a simple key feature: instead of recalling the input-output relations of all previously seen data, task-conditioned hypernetworks only require rehearsing task-specific weight realizations, which can be maintained in memory using a simple regularizer. Besides achieving state-of-the-art performance on standard CL benchmarks, additional experiments on long task sequences reveal that task-conditioned hypernetworks display a very large capacity to retain previous memories. Notably, such long memory lifetimes are achieved in a compressive regime, when the number of trainable hypernetwork weights is comparable or smaller than target network size. We provide insight into the structure of low-dimensional task embedding spaces (the input space of the hypernetwork) and show that task-conditioned hypernetworks demonstrate transfer learning. Finally, forward information transfer is further supported by empirical results on a challenging CL benchmark based on the CIFAR-10/100 image datasets.""","""This paper proposes to use hypernetwork to prevent catastrophic forgetting. Overall, the paper is well-written, well-motivated, and the idea is novel. Experimentally, the proposed approach achieves SOTA on various (well-chosen) standard CL benchmarks (notably P-MNIST for CL, Split MNIST) and also does reasonably well on Split CIFAR-10/100 benchmark. The authors are suggested to investigate alternative penalties in the rehearsal objective, and also add comparison with methods like HAT and PackNet."""
1547,"""JAX MD: End-to-End Differentiable, Hardware Accelerated, Molecular Dynamics in Pure Python""","['Automatic Differentiation', 'Software Library', 'Physics Simulation', 'Differentiable Physics']","""A large fraction of computational science involves simulating the dynamics of particles that interact via pairwise or many-body interactions. These simulations, called Molecular Dynamics (MD), span a vast range of subjects from physics and materials science to biochemistry and drug discovery. Most MD software involves significant use of handwritten derivatives and code reuse across C++, FORTRAN, and CUDA. This is reminiscent of the state of machine learning before automatic differentiation became popular. In this work we bring the substantial advances in software that have taken place in machine learning to MD with JAX, M.D. (JAX MD). JAX MD is an end-to-end differentiable MD package written entirely in Python that can be just-in-time compiled to CPU, GPU, or TPU. JAX MD allows researchers to iterate extremely quickly and lets researchers easily incorporate machine learning models into their workflows. Finally, since all of the simulation code is written in Python, researchers can have unprecedented flexibility in setting up experiments without having to edit any low-level C++ or CUDA code. In addition to making existing workloads easier, JAX MD allows researchers to take derivatives through whole-simulations as well as seamlessly incorporate neural networks into simulations. This paper explores the architecture of JAX MD and its capabilities through several vignettes. Code is available at pseudo-url along with an interactive Colab notebook.""","""The paper is about a software library that allows for relatively easy simulation of molecular dynamics. The library is based on JAX and draws heavily from its benefits. To be honest, this is a difficult paper to evaluate for everyone involved in this discussion. The reason for this is that it is an unconventional paper (software) whose target application centered around molecular dynamics. While the package seems to be useful for this purpose (and some ML-related purposes), the paper does not expose which of the benefits come from JAX and which ones the authors added in JAX MD. It looks like that most of the benefits are built-in benefits in JAX. Furthermore, I am missing a detailed analysis of computation speed (the authors do mention this in the discussion below and in a sentence in the paper, but this insufficient). Currently, it seems that the package is relatively slow compared to existing alternatives. Here are some recommendations: 1. It would be good if the authors focused more on ML-related problems in the paper, because this would also make sure that the package is not considered a specialized package that overfits to molecular dynamics. 2. Please work out the contribution/delta of JAX MD compared to JAX. 3. Provide a thorough analysis of the computation speed 4. Make a better case, why JAX MD should be the go-to method for practitioners. Overall, I recommend rejection of this paper. A potential re-submission venue could be JMLR, which has an explicit software track."""
1548,"""Compressed Sensing with Deep Image Prior and Learned Regularization""","['compressed sensing', 'sparsity', 'inverse problems']","""We propose a novel method for compressed sensing recovery using untrained deep generative models. Our method is based on the recently proposed Deep Image Prior (DIP), wherein the convolutional weights of the network are optimized to match the observed measurements. We show that this approach can be applied to solve any differentiable linear inverse problem, outperforming previous unlearned methods. Unlike various learned approaches based on generative models, our method does not require pre-training over large datasets. We further introduce a novel learned regularization technique, which incorporates prior information on the network weights. This reduces reconstruction error, especially for noisy measurements. Finally we prove that, using the DIP optimization approach, moderately overparameterized single-layer networks trained can perfectly fit any signal despite the nonconvex nature of the fitting problem. This theoretical result provides justification for early stopping.""","""This paper proposes a compressed sensing (CS) method which employs deep image prior (DIP) algorithm to recovering signals for images from noisy measurements using untrained deep generative models. A novel learned regularization technique is also introduced. Experimental results show that the proposed methods outperformed the existing work. The theoretical analysis of early stopping is also given. All reviewers agree that it is novel to combine the deep learning method with compressed sensing. The paper is well written and overall good. However the reviewers also proposed many concerns about method and the experiments, but the authors gave no rebuttal almost no revisions were made on the paper. I would suggest the author to consider the reviewers' concern seriously and resubmit the paper to another conference or journal."""
1549,"""FreeLB: Enhanced Adversarial Training for Natural Language Understanding""",[],"""Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that when applied only to the finetuning stage, it is able to improve the overall test scores of BERT-base model from 78.3 to 79.4, and RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44% and 67.75% on ARC-Easy and ARC-Challenge. Experiments on CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boost the performance of RoBERTa-large model on other tasks as well.""","""The paper proposes a new algorithm for adversarial training of language models. This is an important research area and the paper is well presented, has great empirical results and a novel idea. """
1550,"""Embodied Multimodal Multitask Learning""","['Visual Grounding', 'Semantic Goal Navigation', 'Embodied Question Answering']","""Visually-grounded embodied language learning models have recently shown to be effective at learning multiple multimodal tasks such as following navigational instructions and answering questions. In this paper, we address two key limitations of these models, (a) the inability to transfer the grounded knowledge across different tasks and (b) the inability to transfer to new words and concepts not seen during training using only a few examples. We propose a multitask model which facilitates knowledge transfer across tasks by disentangling the knowledge of words and visual attributes in the intermediate representations. We create scenarios and datasets to quantify cross-task knowledge transfer and show that the proposed model outperforms a range of baselines in simulated 3D environments. We also show that this disentanglement of representations makes our model modular and interpretable which allows for transfer to instructions containing new concepts.""","""This paper offers a new approach to cross-modal embodied learning that aims to overcome limited vocabulary and other issues. Reviews are mixed. I concur with the two reviewers who say the work is interesting but the contribution is not sufficiently clear for acceptance at this time."""
1551,"""Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks""",[],"""Recent work by Jacot et al. (2018) has showed that training a neural network of any kind with gradient descent in parameter space is equivalent to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK). Lee et al. (2019) built on this result to show that the output of a neural network trained using full batch gradient descent can be approximated by a linear model for wide networks. In parallel, a recent line of studies ( Schoenhols et al. (2017), Hayou et al. (2019)) suggested that a special initialization known as the Edge of Chaos leads to good performance. In this paper, we bridge the gap between this two concepts and show the impact of the initialization and the activation function on the NTK as the network depth becomes large. We provide experiments illustrating our theoretical results.""","""This paper focuses on studying the impact of initialization and activation functions on the Neural Tangent Kernel (NTK) type analysis. The authors claim to make a connection between NTK and edge of chaos analysis. The reviewers had some concern about (1) impact of smooth activations ""any NTK-based training method for DNNs should use a Smooth Activation Function from the class S and the network should be initialized on the EOC"" (2) proofs of residual networks (3) and why mixing NTK with EOC is interesting. Some of these concerns were addressed in the response. I do share the reviewer concerns about (2). The authors need to give a clear proof. I think this combination of NTK and EOC could be interesting but needs to be better motivated. As a result I do not recommend publication."""
1552,"""SesameBERT: Attention for Anywhere""","['Natural Language Processing', 'Deep Learning', 'Self Attention']","""Fine-tuning with pre-trained models has achieved exceptional results for many language tasks. In this study, we focused on one such self-attention network model, namely BERT, which has performed well in terms of stacking layers across diverse language-understanding benchmarks. However, in many downstream tasks, information between layers is ignored by BERT for fine-tuning. In addition, although self-attention networks are well-known for their ability to capture global dependencies, room for improvement remains in terms of emphasizing the importance of local contexts. In light of these advantages and disadvantages, this paper proposes SesameBERT, a generalized fine-tuning method that (1) enables the extraction of global information among all layers through Squeeze and Excitation and (2) enriches local information by capturing neighboring contexts via Gaussian blurring. Furthermore, we demonstrated the effectiveness of our approach in the HANS dataset, which is used to determine whether models have adopted shallow heuristics instead of learning underlying generalizations. The experiments revealed that SesameBERT outperformed BERT with respect to GLUE benchmark and the HANS evaluation set.""","""This paper proposes a few architectural modifications to the BERT model for language understanding, which are meant to apply during fine-tuning for target tasks. All three reviewers had concerns about the motivation for at least one of the proposed methods, and none of three reviewers found the primary experimental results convincing: The proposed methods yield a small improvement on average across target tasks, but one that is not consistent across tasks, and that may not be statistically significant. The authors clarified some points, but did not substantially rebut any of the reviewers concerns. Even though the reviewers express relatively low confidence, their concerns sound serious and uncontested, so I don't think we can accept this paper as is."""
1553,"""RPGAN: random paths as a latent space for GAN interpretability""","['generative models', 'GAN', 'interpretability']","""In this paper, we introduce Random Path Generative Adversarial Network (RPGAN) --- an alternative scheme of GANs that can serve as a tool for generative model analysis. While the latent space of a typical GAN consists of input vectors, randomly sampled from the standard Gaussian distribution, the latent space of RPGAN consists of random paths in a generator network. As we show, this design allows to associate different layers of the generator with different regions of the latent space, providing their natural interpretability. With experiments on standard benchmarks, we demonstrate that RPGAN reveals several interesting insights about roles that different layers play in the image generation process. Aside from interpretability, the RPGAN model also provides competitive generation quality and allows efficient incremental learning on new data.""","""The paper received mixed scores: Weak Reject (R1 and R2) and Accept (R3). AC has closely read the reviews/comments/rebuttal and examined the paper. After the rebuttal, R2's concerns still remain. AC sides with R2 and feels that the generated interpretations are not convincing, and that the conclusions drawn are not fully supported. Thus the paper just falls below the acceptance threshold, unfortunately. The work has merits however and the authors should revise their paper to incorporate the constructive feedback."""
1554,"""Is There Mode Collapse? A Case Study on Face Generation and Its Black-box Calibration""","['Generative Adversarial Networks', 'Mode Collapse', 'Calibration']","""Generative adversarial networks (GANs) nowadays are capable of producing im-ages of incredible realism. One concern raised is whether the state-of-the-artGANs learned distribution still suffers from mode collapse. Existing evaluation metrics for image synthesis focus on low-level perceptual quality. Diversity tests of samples from GANs are usually conducted qualitatively on a small scale. In this work, we devise a set of statistical tools, that are broadly applicable to quantitatively measuring the mode collapse of GANs. Strikingly, we consistently observe strong mode collapse on several state-of-the-art GANs using our toolset. We analyze possible causes, and for the first time present two simple yet effective black-box methods to calibrate the GAN learned distribution, without accessing either model parameters or the original training data.""","""This paper studies the problem of mode collapse in GANs. The authors present new metrics to judge the model's diversity of the generated faces. The authors present two black-box approaches to increasing the model diversity. The benefit of using a black box approach is that the method does not require access to the weights of the model and hence it is more easily usable than white-box approaches. However, there are significant evaluation problems and lack of theoretical and empirical motivation on why the methods proposed by the paper are good. The reviewers have not changed their score after having read the response and there is still some gaps in evaluation which can be improved in the paper. Thus, I'm recommending a Rejection."""
1555,"""QXplore: Q-Learning Exploration by Maximizing Temporal Difference Error""","['Deep Reinforcement Learning', 'Exploration']","""A major challenge in reinforcement learning is exploration, especially when reward landscapes are sparse. Several recent methods provide an intrinsic motivation to explore by directly encouraging agents to seek novel states. A potential disadvantage of pure state novelty-seeking behavior is that unknown states are treated equally regardless of their potential for future reward. In this paper, we propose an exploration objective using the temporal difference error experienced on extrinsic rewards as a secondary reward signal for exploration in deep reinforcement learning. Our objective yields novelty-seeking in the absence of extrinsic reward, while accelerating exploration of reward-relevant states in sparse (but nonzero) reward landscapes. This objective draws inspiration from dopaminergic pathways in the brain that influence animal behavior. We implement the objective with an adversarial Q-learning method in which Q and Qx are the action-value functions for extrinsic and secondary rewards, respectively. Secondary reward is given by the absolute value of the TD-error of Q. Training is off-policy, based on a replay buffer containing a mix of trajectories sampled using Q and Qx. We characterize performance on a set of continuous control benchmark tasks, and demonstrate comparable or faster convergence on all tasks when compared with other state-of-the-art exploration methods.""","""There is insufficient support to recommend accepting this paper. Although the authors provided detailed responses, none of the reviewers changed their recommendation from reject. One of the main criticisms, even after revision, concerned the quality of the experimental evaluation. The reviewers criticized the lack of important baselines, and remained unsure about adequate hyperparameter tuning in the revision. The technical exposition lacked a sober discussion of limitations. The paper would be greatly strengthened by the addition of a theoretical justification of the proposed approach. In the end, the submitted reviews should be able to help the authors strengthen this paper."""
1556,"""Improving Batch Normalization with Skewness Reduction for Deep Neural Networks""","['Batch Normalization', 'Deep Learning']","""Batch Normalization (BN) is a well-known technique used in training deep neural networks. The main idea behind batch normalization is to normalize the features of the layers ( pseudo-formula , transforming them to have a mean equal to zero and a variance equal to one). Such a procedure encourages the optimization landscape of the loss function to be smoother, and improve the learning of the networks for both speed and performance. In this paper, we demonstrate that the performance of the network can be improved, if the distributions of the features of the output in the same layer are similar. As normalizing based on mean and variance does not necessarily make the features to have the same distribution, we propose a new normalization scheme: Batch Normalization with Skewness Reduction (BNSR). Comparing with other normalization approaches, BNSR transforms not just only the mean and variance, but also the skewness of the data. By tackling this property of a distribution, we are able to make the output distributions of the layers to be further similar. The nonlinearity of BNSR may further improve the expressiveness of the underlying network. Comparisons with other normalization schemes are tested on the CIFAR-100 and ImageNet datasets. Experimental results show that the proposed approach can outperform other state-of-the-arts that are not equipped with BNSR.""","""The paper proposes a novel mechanism to reduce the skewness of the activations. The paper evaluates their claims on the CIFAR-10 and Tiny Imagenet dataset. The reviewers found the scale of the experiments to be too limited to support the claims. Thus we recommend the paper be improved by considering larger datasets such as the full Imagenet. The paper should also better motivate the goal of reducing skewness."""
1557,"""PROGRESSIVE LEARNING AND DISENTANGLEMENT OF HIERARCHICAL REPRESENTATIONS""","['generative model', 'disentanglement', 'progressive learning', 'VAE']","""Learning rich representation from data is an important task for deep generative models such as variational auto-encoder (VAE). However, by extracting high-level abstractions in the bottom-up inference process, the goal of preserving all factors of variations for top-down generation is compromised. Motivated by the concept of starting small, we present a strategy to progressively learn independent hierarchical representations from high- to low-levels of abstractions. The model starts with learning the most abstract representation, and then progressively grow the network architecture to introduce new representations at different levels of abstraction. We quantitatively demonstrate the ability of the presented model to improve disentanglement in comparison to existing works on two benchmark datasets using three disentanglement metrics, including a new metric we proposed to complement the previously-presented metric of mutual information gap. We further present both qualitative and quantitative evidence on how the progression of learning improves disentangling of hierarchical representations. By drawing on the respective advantage of hierarchical representation learning and progressive learning, this is to our knowledge the first attempt to improve disentanglement by progressively growing the capacity of VAE to learn hierarchical representations.""","""This paper proposes a novel way to learn hierarchical disentangled latent representations by building on the previously published Variational Ladder AutoEncoder (VLAE) work. The proposed extension involves learning disentangled representations in a progressive manner, from the most abstract to the more detailed. While at first the reviewers expressed some concerns about the paper, in terms of its main focus (whether it was the disentanglement or the hierarchical aspect of the learnt representation), connections to past work, and experimental results, these concerns were fully alleviated during the discussion period. All of the reviewers now agree that this is a valuable contribution to the field and should be accepted to ICLR. Hence, I am happy to recommend this paper for acceptance as an oral."""
1558,"""Evidence-Aware Entropy Decomposition For Active Deep Learning""","['active learning', 'entropy decomposition', 'uncertainty']","""We present a novel multi-source uncertainty prediction approach that enables deep learning (DL) models to be actively trained with much less labeled data. By leveraging the second-order uncertainty representation provided by subjective logic (SL), we conduct evidence-based theoretical analysis and formally decompose the predicted entropy over multiple classes into two distinct sources of uncertainty: vacuity and dissonance, caused by lack of evidence and conflict of strong evidence, respectively. The evidence based entropy decomposition provides deeper insights on the nature of uncertainty, which can help effectively explore a large and high-dimensional unlabeled data space. We develop a novel loss function that augments DL based evidence prediction with uncertainty anchor sample identification through kernel density estimation (KDE). The accurately estimated multiple sources of uncertainty are systematically integrated and dynamically balanced using a data sampling function for label-efficient active deep learning (ADL). Experiments conducted over both synthetic and real data and comparison with competitive AL methods demonstrate the effectiveness of the proposed ADL model. ""","""The authors propose a new perspective on active learning by borrowing concepts from subjective logic. In particular, they model uncertainty as a combination of dissonance and vacuity; two orthogonal forms of uncertainty that may invite additional labels for different reasons. The concepts introduced are not specific to deep learning but are generally applicable. Experiments on 2d data and a couple standard datasets are provided. The derivation of the model is intuitive but it's not clear that it is ""better"" than any other intuitively derived model for active learning. With the field of active learning having such a long history, the field has moved towards a standard of expecting theoretical guarantees to distinguish a new method from the rest; this paper provides none. Instead anecdotal examples and small experiments are performed. Like other reviews, I am extremely skeptical about the use of KDE which is known to have essentially no inferential ability in high dimensions (such as in deep learning situations where presumably images are involved). It is hard not to feel as though deep learning is somewhat of a red herring in this paper. I recommend the authors lean into understanding the method from a perspective beyond anecdotes and experiments if they wish for this method to gain traction. """
1559,"""FoveaBox: Beyound Anchor-based Object Detection""",[],"""We present FoveaBox, an accurate, flexible, and completely anchor-free framework for object detection. While almost all state-of-the-art object detectors utilize predefined anchors to enumerate possible locations, scales and aspect ratios for the search of the objects, their performance and generalization ability are also limited to the design of anchors. Instead, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existing possibility, and (b) producing category-agnostic bounding box for each position that potentially contains an object. The scales of target boxes are naturally associated with feature pyramid representations. We demonstrate its effectiveness on standard benchmarks and report extensive experimental analysis. Without bells and whistles, FoveaBox achieves state-of-the-art single model performance on the standard COCO detection benchmark. More importantly, FoveaBox avoids all computation and hyper-parameters related to anchor boxes, which are often sensitive to the final detection performance. We believe the simple and effective approach will serve as a solid baseline and help ease future research for object detection. ""","""The paper proposes a method for object detection by predicting category-specific object probability and category-agnostic bounding box coordinates for each position that's likely to contain an object. The proposed idea is interesting and the experimental results show improvement over RetinaNet and other baselines. However, in terms of weakness, (1) conceptually speaking it's unclear whether the proposed method is a big departure from the existing frameworks; and (2) although the authors are claiming SOTA performance, the proposed method seems to be worse than other existing/recent work. Some example references are listed below (more available here: pseudo-url). [1] Scale-Aware Trident Networks for Object Detection pseudo-url [2] GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond pseudo-url [3] CBNet: A Novel Composite Backbone Network Architecture for Object Detection pseudo-url [4] EfficientDet: Scalable and Efficient Object Detection pseudo-url References [3] and [4] are concurrent works so shouldn't be a ground of rejection per se, but the performance gap is quite large. Compared to [1] and [2] which have been on arxiv for a while (+5 months) the performance of the proposed method is still inferior. Despite considering that object detection is a very competitive field, the conceptual/technical novelty and overall practical significance seem limited for ICLR. For a future submission, I would suggest that a revision of this paper being reviewed in a computer vision conference, rather than ML conference. """
1560,"""Implicit Generative Modeling for Efficient Exploration""","['Reinforcement Learning', 'Exploration', 'Intrinsic Reward', 'Implicit Generative Models']","""Efficient exploration remains a challenging problem in reinforcement learning, especially for those tasks where rewards from environments are sparse. A commonly used approach for exploring such environments is to introduce some ""intrinsic"" reward. In this work, we focus on model uncertainty estimation as an intrinsic reward for efficient exploration. In particular, we introduce an implicit generative modeling approach to estimate a Bayesian uncertainty of the agent's belief of the environment dynamics. Each random draw from our generative model is a neural network that instantiates the dynamic function, hence multiple draws would approximate the posterior, and the variance in the future prediction based on this posterior is used as an intrinsic reward for exploration. We design a training algorithm for our generative model based on the amortized Stein Variational Gradient Descent. In experiments, we compare our implementation with state-of-the-art intrinsic reward-based exploration approaches, including two recent approaches based on an ensemble of dynamic models. In challenging exploration tasks, our implicit generative model consistently outperforms competing approaches regarding data efficiency in exploration.""","""There is insufficient support to recommend accepting this paper. The authors provided detailed responses, but the reviewers unanimously kept their recommendation as reject. The novelty and significance of the main contribution was not made sufficiently clear, given the context of related work. Critically, the experimental evaluation was not considered to be convincing, lacking detailed explanation and justification, and a sufficiently thorough comparison to strong baselines, The submitted reviews should help the authors improve their paper."""
1561,"""Attributed Graph Learning with 2-D Graph Convolution""","['2-D Graph Convolution', 'Attributed Graph', 'Representation learning']","""Graph convolutional neural networks have demonstrated promising performance in attributed graph learning, thanks to the use of graph convolution that effectively combines graph structures and node features for learning node representations. However, one intrinsic limitation of the commonly adopted 1-D graph convolution is that it only exploits graph connectivity for feature smoothing, which may lead to inferior performance on sparse and noisy real-world attributed networks. To address this problem, we propose to explore relational information among node attributes to complement node relations for representation learning. In particular, we propose to use 2-D graph convolution to jointly model the two kinds of relations and develop a computationally efficient dimensionwise separable 2-D graph convolution (DSGC). Theoretically, we show that DSGC can reduce intra-class variance of node features on both the node dimension and the attribute dimension to facilitate learning. Empirically, we demonstrate that by incorporating attribute relations, DSGC achieves significant performance gain over state-of-the-art methods on node classification and clustering on several real-world attributed networks. ""","""The paper studies the problem of graph learning with attributes, and propose a 2-D graph convolution that models the node relation graph and the attribute graph jointly. The paper proposes and efficient algorithm and models intra-class variation. Empirical performance on 20-NG, L-Cora, and Wiki show the promise of the approach. The authors responded to the reviews by updating the paper, but the reviewers unfortunately did not further engage during the discussion period. Therefore it is unclear whether their concerns have been adequately addressed. Overall, there have been many strong submissions on graph neural networks at ICLR this year, and this submission as is currently stands does not quite make the threshold of acceptance."""
1562,"""Amharic Negation Handling""","['Negation Handling Algorithm', 'Amharic Sentiment Analysis', 'Amharic Sentiment lexicon', 'char level', 'word level ngram', 'machine learning', 'hybrid']","""User generated content contains opinionated texts not only in dominant languages (like English) but also less dominant languages( like Amharic). However, negation handling techniques that supports for sentiment detection is not developed in such less dominant language(i.e. Amharic). Negation handling is one of the challenging tasks for sentiment classification. Thus, this work builds negation handling schemes which enhances Amharic Sentiment classification. The proposed Negation Handling framework combines the lexicon based approach and character ngram based machine learning model. The performance of framework is evaluated using the annotated Amharic News Comments. The system is outperforming the best of all models and the baselines by an accuracy of 98.0. The result is compared with the baselines (without negation handling and word level ngram model).""","""Main content: This paper presents negation handling approaches for Amharic sentiment classification. -- Discussion: All reviewers agree the paper is poorly written, uses outdated approaches, and requires better organization and formatting. -- Recommendation and justification: This paper after more work might be better submitted in an NLP workshop on low resource languages, rather than ICLR which is more focused on new machine learning methods."""
1563,"""Automatically Discovering and Learning New Visual Categories with Ranking Statistics""","['deep learning', 'classification', 'novel classes', 'transfer learning', 'clustering', 'incremental learning']","""We tackle the problem of discovering novel classes in an image collection given labelled examples of other classes. This setting is similar to semi-supervised learning, but significantly harder because there are no labelled examples for the new classes. The challenge, then, is to leverage the information contained in the labelled images in order to learn a general-purpose clustering model and use the latter to identify the new classes in the unlabelled data. In this work we address this problem by combining three ideas: (1) we suggest that the common approach of bootstrapping an image representation using the labeled data only introduces an unwanted bias, and that this can be avoided by using self-supervised learning to train the representation from scratch on the union of labelled and unlabelled data; (2) we use rank statistics to transfer the model's knowledge of the labelled classes to the problem of clustering the unlabelled images; and, (3) we train the data representation by optimizing a joint objective function on the labelled and unlabelled subsets of the data, improving both the supervised classification of the labelled data, and the clustering of the unlabelled data. We evaluate our approach on standard classification benchmarks and outperform current methods for novel category discovery by a significant margin.""","""The paper defines a methodology to discover unknown classes in a semi-supervised learning setting, based on: i) defining a proper representation based on self-supervision on all samples; ii) defining equivalence classes on the unlabelled samples, based on ranking statistics; iii) training supervised heads aimed to predict the labels (when available) and the equivalence class indices (when unlabelled). All reviewers agree that the ranking statistics-based heuristics is a quite innovative element of the paper. The extensive and careful experimental validation, with the ablation studies, establishes the merits of all ingredients. Therefore, I propose acceptance of this paper. """
1564,"""LOSSLESS SINGLE IMAGE SUPER RESOLUTION FROM LOW-QUALITY JPG IMAGES""","['Super Resolution', 'Low-quality JPG', 'Recovering details']","""Super Resolution (SR) is a fundamental and important low-level computer vision (CV) task. Different from traditional SR models, this study concentrates on a specific but realistic SR issue: How can we obtain satisfied SR results from compressed JPG (C-JPG) image, which widely exists on the Internet. In general, C-JPG can release storage space while keeping considerable quality in visual. However, further image processing operations, e.g., SR, will suffer from enlarging inner artificial details and result in unacceptable outputs. To address this problem, we propose a novel SR structure with two specifically designed components, as well as a cycle loss. In short, there are mainly three contributions to this paper. First, our research can generate high-qualified SR images for prevalent C-JPG images. Second, we propose a functional sub-model to recover information for C-JPG images, instead of the perspective of noise elimination in traditional SR approaches. Third, we further integrate cycle loss into SR solver to build a hybrid loss function for better SR generation. Experiments show that our approach achieves outstanding performance among state-of-the-art methods.""","""Main summary: Sngle image super-resolution network that can generate high-resolution images from the corresponding C-JPG images Discussions reviewer 3: reviewer has a few issues including, claim the method is lossless, want more information about JPG revovering step reviewer 1: (not knowledgable): paper is well written and reviewer gives very few cons reviewer 2: main concerns are wrt novelty and technically sound Recommendation: the 2 more knowledgable reviwers mark this as Reject, I agree."""
1565,"""OmniNet: A unified architecture for multi-modal multi-task learning""","['multimodal', 'multi-task', 'transformer', 'spatio-temporal', 'attention-networks', 'neural-network']","""Transformer is a popularly used neural network architecture, especially for language understanding. We introduce an extended and unified architecture that can be used for tasks involving a variety of modalities like image, text, videos, etc. We propose a spatio-temporal cache mechanism that enables learning spatial dimension of the input in addition to the hidden states corresponding to the temporal input sequence. The proposed architecture further enables a single model to support tasks with multiple input modalities as well as asynchronous multi-task learning, thus we refer to it as OmniNet. For example, a single instance of OmniNet can concurrently learn to perform the tasks of part-of-speech tagging, image captioning, visual question answering and video activity recognition. We demonstrate that training these four tasks together results in about three times compressed model while retaining the performance in comparison to training them individually. We also show that using this neural network pre-trained on some modalities assists in learning unseen tasks such as video captioning and video question answering. This illustrates the generalization capacity of the self-attention mechanism on the spatio-temporal cache present in OmniNet. ""","""This paper presents OmniNet, an architecture based on the popular transformer for learning on data from multiple modalities and predicting on multiple tasks. The reviewers found the paper well written, technically sound and empirically thorough. However, overall the scores fell below the bar for acceptance and none of the reviewers felt strongly enough to 'champion' the paper for acceptance. The primary concern cited by the reviewers was a lack of strong baselines, i.e. comparison to other methods for multi-task learning. Unfortunately, as such the recommendation is to reject. However, adding a thorough comparison to existing literature empirically and in the related work would make this a much stronger submission to a future conference."""
1566,"""Unsupervised Clustering using Pseudo-semi-supervised Learning""","['Unsupervised Learning', 'Unsupervised Clustering', 'Deep Learning']","""In this paper, we propose a framework that leverages semi-supervised models to improve unsupervised clustering performance. To leverage semi-supervised models, we first need to automatically generate labels, called pseudo-labels. We find that prior approaches for generating pseudo-labels hurt clustering performance because of their low accuracy. Instead, we use an ensemble of deep networks to construct a similarity graph, from which we extract high accuracy pseudo-labels. The approach of finding high quality pseudo-labels using ensembles and training the semi-supervised model is iterated, yielding continued improvement. We show that our approach outperforms state of the art clustering results for multiple image and text datasets. For example, we achieve 54.6% accuracy for CIFAR-10 and 43.9% for 20news, outperforming state of the art by 8-12% in absolute terms.""","""The authors addressed the issues raised by the reviewers, so I suggest the acceptance of this paper."""
1567,"""Discovering the compositional structure of vector representations with Role Learning Networks""","['compositionality', 'generalization', 'neurosymbolic', 'symbolic structures', 'interpretability', 'tensor product representations']","""Neural networks (NNs) are able to perform tasks that rely on compositional structure even though they lack obvious mechanisms for representing this structure. To analyze the internal representations that enable such success, we propose ROLE, a technique that detects whether these representations implicitly encode symbolic structure. ROLE learns to approximate the representations of a target encoder E by learning a symbolic constituent structure and an embedding of that structure into Es representational vector space. The constituents of the approximating symbol structure are defined by structural positions roles that can be filled by symbols. We show that when E is constructed to explicitly embed a particular type of structure (e.g., string or tree), ROLE successfully extracts the ground-truth roles defining that structure. We then analyze a seq2seq network trained to perform a more complex compositional task (SCAN), where there is no ground truth role scheme available. For this model, ROLE successfully discovers an interpretable symbolic structure that the model implicitly uses to perform the SCAN task, providing a comprehensive account of the link between the representations and the behavior of a notoriously hard-to-interpret type of model. We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the models output is also changed in the way predicted by our analysis. Finally, we use ROLE to explore whether popular sentence embedding models are capturing compositional structure and find evidence that they are not; we conclude by discussing how insights from ROLE can be used to impart new inductive biases that will improve the compositional abilities of such models.""","""This work builds directly on McCoy et al. (2019a) and add a RNN that can replace what was human generated hypotheses to the role schemes. The final goal of ROLE is to analyze a network by identifying symbolic structure. The authors conduct sanity check by conducting experiments with ground truth, and extend the work further to apply it to a complex model. I wonder under what definition of interpretable authors have in mind with the final output (figure 2) - the output is very complex. It remains questionable if this will give some insight or how would humans parse this info such that it is useful for them in some way. Overall, though this is a good paper, due to the number of strong papers this year, it cannot be accepted at this time. We hope the comments given by reviewers can help improve a future version. """
1568,"""Population-Guided Parallel Policy Search for Reinforcement Learning""","['Reinforcement Learning', 'Parallel Learning', 'Population Based Learning']","""In this paper, a new population-guided parallel learning scheme is proposed to enhance the performance of off-policy reinforcement learning (RL). In the proposed scheme, multiple identical learners with their own value-functions and policies share a common experience replay buffer, and search a good policy in collaboration with the guidance of the best policy information. The key point is that the information of the best policy is fused in a soft manner by constructing an augmented loss function for policy update to enlarge the overall search region by the multiple learners. The guidance by the previous best policy and the enlarged range enable faster and better policy search, and monotone improvement of the expected cumulative return by the proposed scheme is proved theoretically. Working algorithms are constructed by applying the proposed scheme to the twin delayed deep deterministic (TD3) policy gradient algorithm, and numerical results show that the constructed P3S-TD3 outperforms most of the current state-of-the-art RL algorithms, and the gain is significant in the case of sparse reward environment.""","""The paper proposes a new approach to multi-actor RL, which ensure diversity and performance of the population of actors, by distilling the policy of the best performing agent in a soft way and maintaining some distance between the agents. The authors show improved performance over several state-of-the-art mono-actor algorithms and over several other multi-actor RL algorithms. Initially, reviewers were concerned with magnitude of the contribution/novelty, as well as some technical issues (e.g. the beta update), and relative lack of baseline comparisons. However, after discussion the reviewers largely agree that their main concerns have been addressed. Therefore, I recommend this paper for acceptance."""
1569,"""Understanding the Limitations of Conditional Generative Models""","['Conditional Generative Models', 'Generative Classifiers', 'Robustness', 'Adversarial Examples']","""Class-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts. They are a natural choice to solve discriminative tasks in a robust manner as they jointly optimize for predictive performance and accurate modeling of the input distribution. In this work, we investigate robust classification with likelihood-based generative models from a theoretical and practical perspective to investigate if they can deliver on their promises. Our analysis focuses on a spectrum of robustness properties: (1) Detection of worst-case outliers in the form of adversarial examples; (2) Detection of average-case outliers in the form of ambiguous inputs and (3) Detection of incorrectly labeled in-distribution inputs. Our theoretical result reveals that it is impossible to guarantee detectability of adversarially-perturbed inputs even for near-optimal generative classifiers. Experimentally, we find that while we are able to train robust models for MNIST, robustness completely breaks down on CIFAR10. We relate this failure to various undesirable model properties that can be traced to the maximum likelihood training objective. Despite being a common choice in the literature, our results indicate that likelihood-based conditional generative models may are surprisingly ineffective for robust classification.""","""This paper presents theoretical results showing the conditional generative models cannot be robust. The paper also provide counter examples and some empirical evidence showing that the theory is reflected in practice. Some reviewers doubt how much of the theory holds in reality, but still they think that this paper could be a useful for the community. After the rebuttal period, R2 increased their score and it seems that with the current score the paper can be accepted."""
1570,"""Graph Neural Networks for Reasoning 2-Quantified Boolean Formulas""","['Graph Neural Networks', '2-Quantified Boolean Formula', 'Symbolic Reasoning']","""It is valuable yet remains challenging to apply neural networks in logical reasoning tasks. Despite some successes witnessed in learning SAT (Boolean Satisfiability) solvers for propositional logic via Graph Neural Networks (GNN), there haven't been any successes in learning solvers for more complex predicate logic. In this paper, we target the QBF (Quantified Boolean Formula) satisfiability problem, the complexity of which is in-between propositional logic and predicate logic, and investigate the feasibility of learning GNN-based solvers and GNN-based heuristics for the cases with a universal-existential quantifier alternation (so-called 2QBF problems). We conjecture, with empirical support, that GNNs have certain limitations in learning 2QBF solvers, primarily due to the inability to reason about a set of assignments. Then we show the potential of GNN-based heuristics in CEGAR-based solvers and explore the interesting challenges to generalize them to larger problem instances. In summary, this paper provides a comprehensive surveying view of applying GNN-based embeddings to 2QBF problems and aims to offer insights in applying machine learning tools to more complicated symbolic reasoning problems. ""","""This work investigates the use of graph NNs for solving 2QBF . The authors provide empirical evidence that for this type of satisfiability decision problem, GNNs are not able to provide solutions and claim this is due to the message passing mechanism that cannot afford for complex reasoning. Finally, the authors propose a number of heuristics that extend GNNs and show that these improve their performance. 2-QBF problem is used as a playground since, as the authors also point, their complexity is in between that of predicate and propositional logic. This on its own is not bad, as it can be used as a minimal environment for the type of investigation the authors are interested. That being said, I find a number a number of flaws in the current form of the paper (some of them pointed by R3 as well), with the main issue being that of lack experimental rigor. Given the restricted set of problems the authors consider, I think the experiments on identifying pathologies of GNNs on this setup could have gone more in depth. Let me be specific. 1) The bad performance is attributed to message-passing. However, this feels anecdotal at the moment and authors do not provide firm conclusions about that. The only evidence they provide is that performance becomes better with more message-passing iterations they allow. This is a hint though to dive deeper rather than a firm conclusion. For example do we know if the finding about sensitivity to message-passing is due to the small size of the network or the training procedure? 2) To add on that, there is virtually no information on the paper about the specifics of the experimental setup, so the reader cannot be convinced that the negative results do not arise from a bad experimental configuration (e.g., small size of network). 3) Moreover, the negative results here, as the authors point, seem to contradict previous work, providing negative results against GNNs. Again, this is a valuable contribution if that is indeed the case, but again the paper does not provide enough evidence. In lieu of a convincing set of experiments, the paper could provide a proof (as also asked by R3). 
However with no proof and not strong empirical evidence that this result does not feel ready to get published at ICLR. Overall, I think this paper with a bit more rigor could be a very good submission for a later conference. However, as it stands I cannot recommend acceptance. """
1571,"""Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers""","['neural network pruning', 'sparse learning', 'network compression', 'architecture search']","""We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly nd the optimal network parameters and sparse network structure in a unied optimization process with trainable pruning thresholds. These thresholds can have ne-grained layer-wise adjustments dynamically via backpropagation. We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same training epochs as dense models. Dynamic Sparse Training achieves prior art performance compared with other sparse training algorithms on various network architectures. Additionally, we have several surprising observations that provide strong evidence to the effectiveness and efciency of our algorithm. These observations reveal the underlying problems of traditional three-stage pruning algorithms and present the potential guidance provided by our algorithm to the design of more compact network architectures.""","""The paper lies on the borderline. An accept is suggested based on majority reviews and authors' response."""
1572,"""Few-shot Text Classification with Distributional Signatures""","['text classification', 'meta learning', 'few shot learning']","""In this paper, we explore meta-learning for few-shot text classification. Meta-learning has shown strong performance in computer vision, where low-level patterns are transferable across learning tasks. However, directly applying this approach to text is challenging--lexical features highly informative for one task may be insignificant for another. Thus, rather than learning solely from words, our model also leverages their distributional signatures, which encode pertinent word occurrence patterns. Our model is trained within a meta-learning framework to map these signatures into attention scores, which are then used to weight the lexical representations of words. We demonstrate that our model consistently outperforms prototypical networks learned on lexical knowledge (Snell et al., 2017) in both few-shot text classification and relation classification by a significant margin across six benchmark datasets (20.0% on average in 1-shot classification).""","""This paper proposes a meta-learning approach for few-shot text classification. The main idea is to use an attention mechanism over the distributional signatures of the inputs to weight word importance. Experiments on text classification datasets show that the proposed method improves over baselines in 1-shot and 5-shot settings. The paper addresses an important problem of learning from a few labeled examples. The proposed approach makes sense and the results clearly show the strength of the proposed approach. R1 had some questions regarding the proposed method and experimental details. I believe this have been addressed by the authors in their rebuttal. R2 suggested that the authors clarified their experimental setup with respect to prior work and improved the clarity of their paper. The authors have made some adjustments based on this feedback, including adding new sections in the appendix. R3 had concerns regarding the contribution of the approach and whether it trades variance for bias. The authors have addressed most of these concerns and R3 has updated their review accordingly. I think all the reviewers gave valuable feedbacks that have been incorporated by the authors to improve their paper. While the overall scores remain low, I believe that they would have been increased had R1 and R2 reassessed the revised submission. I recommend to accept this paper. """
1573,"""Inductive Matrix Completion Based on Graph Neural Networks""","['matrix completion', 'graph neural network']","""We propose an inductive matrix completion model without using side information. By factorizing the (rating) matrix into the product of low-dimensional latent embeddings of rows (users) and columns (items), a majority of existing matrix completion methods are transductive, since the learned embeddings cannot generalize to unseen rows/columns or to new matrices. To make matrix completion inductive, most previous works use content (side information), such as user's age or movie's genre, to make predictions. However, high-quality content is not always available, and can be hard to extract. Under the extreme setting where not any side information is available other than the matrix to complete, can we still learn an inductive matrix completion model? In this paper, we propose an Inductive Graph-based Matrix Completion (IGMC) model to address this problem. IGMC trains a graph neural network (GNN) based purely on 1-hop subgraphs around (user, item) pairs generated from the rating matrix and maps these subgraphs to their corresponding ratings. It achieves highly competitive performance with state-of-the-art transductive baselines. In addition, IGMC is inductive -- it can generalize to users/items unseen during the training (given that their interactions exist), and can even transfer to new tasks. Our transfer learning experiments show that a model trained out of the MovieLens dataset can be directly used to predict Douban movie ratings with surprisingly good performance. Our work demonstrates that: 1) it is possible to train inductive matrix completion models without using side information while achieving similar or better performances than state-of-the-art transductive methods; 2) local graph patterns around a (user, item) pair are effective predictors of the rating this user gives to the item; and 3) Long-range dependencies might not be necessary for modeling recommender systems.""","""This paper proposes a novel technique for matrix completion, using graphical neighborhood structure to side-step the need for any side-information. Post-rebuttal, the reviewers converged on a unanimous decision to accept. The authors are encouraged to review to address reviewer comments."""
1574,"""Generative Adversarial Nets for Multiple Text Corpora""","['GAN', 'NLP', 'embeddings']","""Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.""","""The general consensus amongst the reviewers is that this paper is not quite ready for publication. The reviewers raised several issues with your paper, which I hope will help you as you work towards finding a home for this work."""
1575,"""Accelerated Variance Reduced Stochastic Extragradient Method for Sparse Machine Learning Problems""","['non-smooth optimization', 'SVRG', 'proximal operator', 'extragradient descent', 'momentum acceleration']","""Recently, many stochastic gradient descent algorithms with variance reduction have been proposed. Moreover, their proximal variants such as Prox-SVRG can effectively solve non-smooth problems, which makes that they are widely applied in many machine learning problems. However, the introduction of proximal operator will result in the error of the optimal value. In order to address this issue, we introduce the idea of extragradient and propose a novel accelerated variance reduced stochastic extragradient descent (AVR-SExtraGD) algorithm, which inherits the advantages of Prox-SVRG and momentum acceleration techniques. Moreover, our theoretical analysis shows that AVR-SExtraGD enjoys the best-known convergence rates and oracle complexities of stochastic first-order algorithms such as Katyusha for both strongly convex and non-strongly convex problems. Finally, our experimental results show that for ERM problems and robust face recognition via sparse representation, our AVR-SExtraGD can yield the improved performance compared with Prox-SVRG and Katyusha. The asynchronous variant of AVR-SExtraGD outperforms KroMagnon and ASAGA, which are the asynchronous variants of SVRG and SAGA, respectively.""","""This paper proposes a stochastic variance reduced extragradient algorithm. The reviewers had a number of concerns which I feel have been adequately addressed by the authors. That being said, the field of optimizers is crowded and I could not be convinced that the proposed method would be used. In particular, (almost) hyperparameter-free methods are usually preferred (see Adam), which is not the case here. To be honest, this work is borderline and could have gone either way but was rated lower than other borderline submissions."""
1576,"""Finite Depth and Width Corrections to the Neural Tangent Kernel""","['Neural Tangent Kernel', 'Finite Width Corrections', 'Random ReLU Net', 'Wide Networks', 'Deep Networks']","""We prove the precise scaling, at finite depth and width, for the mean and variance of the neural tangent kernel (NTK) in a randomly initialized ReLU network. The standard deviation is exponential in the ratio of network depth to width. Thus, even in the limit of infinite overparameterization, the NTK is not deterministic if depth and width simultaneously tend to infinity. Moreover, we prove that for such deep and wide networks, the NTK has a non-trivial evolution during training by showing that the mean of its first SGD update is also exponential in the ratio of network depth to width. This is sharp contrast to the regime where depth is fixed and network width is very large. Our results suggest that, unlike relatively shallow and wide networks, deep and wide ReLU networks are capable of learning data-dependent features even in the so-called lazy training regime. ""","""This paper aims to study the mean and variance of the neural tangent kernel (NTK) in a randomly initialized ReLU network. The purpose is to understand the regime where the width and depth go to infinity together with a fixed ratio. The paper does not have a lot of numerical experiments to test the mathematical conclusions. In the discussion the reviewers concurred that the paper is interesting and has nice results but raised important points regarding the fact that only the diagonal elements are studied. This I think is the major limitation of this paper. Another issue raised was lack of experimental work validating the theory. Despite the limitations discussed above, overall I think this is an interesting and important area as it sheds light on how to move beyond the NTK regime. I also think studying this limit is very important to better understanding of neural network training. I recommend acceptance to ICLR."""
1577,"""Reinforcement Learning with Structured Hierarchical Grammar Representations of Actions""","['Hierarchical Reinforcement Learning', 'Action Representations', 'Macro-Actions', 'Action Grammars']","""From a young age humans learn to use grammatical principles to hierarchically combine words into sentences. Action grammars is the parallel idea; that there is an underlying set of rules (a ""grammar"") that govern how we hierarchically combine actions to form new, more complex actions. We introduce the Action Grammar Reinforcement Learning (AG-RL) framework which leverages the concept of action grammars to consistently improve the sample efficiency of Reinforcement Learning agents. AG-RL works by using a grammar inference algorithm to infer the action grammar"" of an agent midway through training, leading to a higher-level action representation. The agent's action space is then augmented with macro-actions identified by the grammar. We apply this framework to Double Deep Q-Learning (AG-DDQN) and a discrete action version of Soft Actor-Critic (AG-SAC) and find that it improves performance in 8 out of 8 tested Atari games (median +31%, max +668%) and 19 out of 20 tested Atari games (median +96%, maximum +3,756%) respectively without substantive hyperparameter tuning. We also show that AG-SAC beats the model-free state-of-the-art for sample efficiency in 17 out of the 20 tested Atari games (median +62%, maximum +13,140%), again without substantive hyperparameter tuning.""","""The topic of macro-actions/hierarchical RL is an important one and the perspective this paper takes on this topic by drawing parallels with action grammars is intriguing. However, some more work is needed to properly evaluate the significance. In particular, a better evaluation of the strengths and weaknesses of the method would improve this paper a lot."""
1578,"""What graph neural networks cannot learn: depth vs width""","['graph neural networks', 'capacity', 'impossibility results', 'lower bounds', 'expressive power']","""This paper studies the expressive power of graph neural networks falling within the message-passing framework (GNNmp). Two results are presented. First, GNNmp are shown to be Turing universal under sufficient conditions on their depth, width, node attributes, and layer expressiveness. Second, it is discovered that GNNmp can lose a significant portion of their power when their depth and width is restricted. The proposed impossibility statements stem from a new technique that enables the repurposing of seminal results from distributed computing and leads to lower bounds for an array of decision, optimization, and estimation problems involving graphs. Strikingly, several of these problems are deemed impossible unless the product of a GNNmp's depth and width exceeds a polynomial of the graph size; this dependence remains significant even for tasks that appear simple or when considering approximation.""","""This paper provides a theoretical background for the expressive power of graph convolutional networks. The results are obviously useful, and the discussion went in the positive way. All reviewers recommend accepting, and I am with them."""
1579,"""Towards Fast Adaptation of Neural Architectures with Meta Learning""","['Fast adaptation', 'Meta learning', 'NAS']","""Recently, Neural Architecture Search (NAS) has been successfully applied to multiple artificial intelligence areas and shows better performance compared with hand-designed networks. However, the existing NAS methods only target a specific task. Most of them usually do well in searching an architecture for single task but are troublesome for multiple datasets or multiple tasks. Generally, the architecture for a new task is either searched from scratch, which is neither efficient nor flexible enough for practical application scenarios, or borrowed from the ones searched on other tasks, which might be not optimal. In order to tackle the transferability of NAS and conduct fast adaptation of neural architectures, we propose a novel Transferable Neural Architecture Search method based on meta-learning in this paper, which is termed as T-NAS. T-NAS learns a meta-architecture that is able to adapt to a new task quickly through a few gradient steps, which makes the transferred architecture suitable for the specific task. Extensive experiments show that T-NAS achieves state-of-the-art performance in few-shot learning and comparable performance in supervised learning but with 50x less searching cost, which demonstrates the effectiveness of our method.""","""This paper introduces T-NAS, a neural architecture search (NAS) method that can quickly adapt architectures to new datasets based on gradient-based meta-learning. It is a combination of the NAS method DARTS and the meta-learning method MAML. All reviewers had some questions and minor criticisms that the authors replied to, and in the private discussion of reviewers and AC all reviewers were happy with the authors' answers. There was unanimous agreement that this is a solid poster. Therefore, I recommend acceptance as a poster."""
1580,"""Hope For The Best But Prepare For The Worst: Cautious Adaptation In RL Agents""","['safety', 'risk', 'uncertainty', 'adaptation']","""We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure? This problem setting occurs frequently in real-world reinforcement learning scenarios such as a vehicle adapting to drive in a new city, or a robotic drone adapting a policy trained only in simulation. While learning without catastrophic failures is exceptionally difficult, prior experience can allow us to learn models that make this much easier. These models might not directly transfer to new settings, but can enable cautious adaptation that is substantially safer than na\""{i}ve adaptation as well as learning from scratch. Building on this intuition, we propose risk-averse domain adaptation (RADA). RADA works in two steps: it first trains probabilistic model-based RL agents in a population of source domains to gain experience and capture epistemic uncertainty about the environment dynamics. Then, when dropped into a new environment, it employs a pessimistic exploration policy, selecting actions that have the best worst-case performance as forecasted by the probabilistic model. We show that this simple maximin policy accelerates domain adaptation in a safety-critical driving environment with varying vehicle sizes. We compare our approach against other approaches for adapting to new environments, including meta-reinforcement learning.""","""The work this paper presents is interesting, but it is not quite ready yet for publication at ICLR. Specifically, the motivation of particular choices could be better, such as summing over quantiles, as indicated by Reviewer 1. The inherent trade-off between safety and speed of adaptation and how this relates to the proposed method could also use a clearer exposition."""
1581,"""Adversarial Training and Provable Defenses: Bridging the Gap""","['adversarial examples', 'adversarial training', 'provable defense', 'convex relaxations', 'deep learning']","""We present COLT, a new method to train neural networks based on a novel combination of adversarial training and provable defenses. The key idea is to model neural network training as a procedure which includes both, the verifier and the adversary. In every iteration, the verifier aims to certify the network using convex relaxation while the adversary tries to find inputs inside that convex relaxation which cause verification to fail. We experimentally show that this training method, named convex layerwise adversarial training (COLT), is promising and achieves the best of both worlds -- it produces a state-of-the-art neural network with certified robustness of 60.5% and accuracy of 78.4% on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation. This significantly improves over the best concurrent results of 54.0% certified robustness and 71.5% accuracy. ""","""The reviewers develop a novel technique for training neural networks that are provably robust to adversarial attacks, by combining provable defenses using convex relaxations with latent adversarial attacks that lie in the gap between the convex relaxation and the true realizable set of activations at a layer of the network. The authors show that the resulting procedure is computationally efficient and able to train neural networks to attain SOTA provable robustness to adversarial attacks. The paper is well written and clearly explains an interesting idea, backed by thorough experiments. The reviewers were in consensus on acceptance and relatively minor concerns were clearly addressed in the rebuttal phase. Hence, I strongly recommend acceptance."""
1582,"""Frequency Pooling: Shift-Equivalent and Anti-Aliasing Down Sampling""","['pooling', 'anti-aliasing', 'shift-equivalent', 'frequency']","""Convolutional layer utilizes the shift-equivalent prior of images which makes it a great success for image processing. However, commonly used down sampling methods in convolutional neural networks (CNNs), such as max-pooling, average-pooling, and strided-convolution, are not shift-equivalent. This destroys the shift-equivalent property of CNNs and degrades their performance. In this paper, we propose a novel pooling method which is \emph{strict shift equivalent and anti-aliasing} in theory. This is achieved by (inverse) Discrete Fourier Transform and we call our method frequency pooling. Experiments on image classifications show that frequency pooling improves accuracy and robustness w.r.t shifts of CNNs. ""","""This submission has been assessed by three reviewers and scored 3/6/1. The reviewers also have not increased their scores after the rebuttal. Two reviewers pointed to poor experimental results that do not fully support what is claimed in contributions and conclusions. Theoretical support for the reconstruction criterion was considered weak. Finally, the paer is pointed to be a special case of (Zhang 2019). While the paper has some merits, all reviewers had a large number of unresolved criticism. Thus, this paper cannot be accepted by ICLR2020. """
1583,"""A new perspective in understanding of Adam-Type algorithms and beyond""","['Machine Learning', 'Algorithm', 'Adam', 'First-Order Method']","""First-order adaptive optimization algorithms such as Adam play an important role in modern deep learning due to their super fast convergence speed in solving large scale optimization problems. However, Adam's non-convergence behavior and regrettable generalization ability make it fall into a love-hate relationship to deep learning community. Previous studies on Adam and its variants (refer as Adam-Type algorithms) mainly rely on theoretical regret bound analysis, which overlook the natural characteristic reside in such algorithms and limit our thinking. In this paper, we aim at seeking a different interpretation of Adam-Type algorithms so that we can intuitively comprehend and improve them. The way we chose is based on a traditional online convex optimization algorithm scheme known as mirror descent method. By bridging Adam and mirror descent, we receive a clear map of the functionality of each part in Adam. In addition, this new angle brings us a new insight on identifying the non-convergence issue of Adam. Moreover, we provide new variant of Adam-Type algorithm, namely AdamAL which can naturally mitigate the non-convergence issue of Adam and improve its performance. We further conduct experiments on various popular deep learning tasks and models, and the results are quite promising.""","""In this paper, the authors draw upon online convex optimization in order to derive a different interpretation of Adam-Type algorithms, allowing them to identify the functionality of each part of Adam. Based on these observations, the authors derive a new Adam-Type algorithm, AdamAL and test it in 2 computer vision datasets using 3 CNN architectures. The main concern shared by all reviewers is the lack of novelty but also rigor both on the experimental and theoretical justification provided by the authors. After having read carefully the reviews and main points of the paper, I will side with the reviewers, thus not recommending acceptance of this paper. """
1584,"""Improving Dirichlet Prior Network for Out-of-Distribution Example Detection""","['predictive uncertainty', 'distributional uncertainty', 'Dirichlet distribution', 'out-of-distribution detection', 'deep learning']","""Determining the source of uncertainties in the predictions of AI systems are important. It allows the users to act in an informative manner to improve the safety of such systems, applied to the real-world sensitive applications. Predictive uncertainties can originate from the uncertainty in model parameters, data uncertainty or due to distributional mismatch between training and test examples. While recently, significant progress has been made to improve the predictive uncertainty estimation of deep learning models, most of these approaches either conflate the distributional uncertainty with model uncertainty or data uncertainty. In contrast, the Dirichlet Prior Network (DPN) can model distributional uncertainty distinctly by parameterizing a prior Dirichlet over the predictive categorical distributions. However, their complex loss function by explicitly incorporating KL divergence between Dirichlet distributions often makes the error surface ill-suited to optimize for challenging datasets with multiple classes. In this paper, we present an improved DPN framework by proposing a novel loss function using the standard cross-entropy loss along with a regularization term to control the sharpness of the output Dirichlet distributions from the network. Our proposed loss function aims to improve the training efficiency of the DPN framework for challenging classification tasks with large number of classes. In our experiments using synthetic and real datasets, we demonstrate that our DPN models can distinguish the distributional uncertainty from other uncertainty types. Our proposed approach significantly improves DPN frameworks and outperform the existing OOD detectors on CIFAR-10 and CIFAR-100 dataset while also being able to recognize distributional uncertainty distinctly.""","""In this work the authors build on the Dirichlet prior network of Malinin & Gales, replacing the loss function and adding a regularization term which improve training in the setting with a significant number of classes. Improving uncertainty for deep learning is a challenging but very important problem. The reviewers of this paper gave two weak rejects (one is of low confidence) and one weak accept. They found the paper well written, easy to follow and well motivated but somewhat incremental and not entirely empirically justified. None of the reviewers were willing to strongly champion the paper for acceptance. Unfortunately as such the paper falls below the bar for acceptance. It appears that the authors significantly added to the experiments in the discussion phase and hopefully that will make the paper much stronger for a future submission."""
1585,"""MIM: Mutual Information Machine""","['Mutual Information', 'Representation Learning', 'Generative Models', 'Probability Density Estimator']",""" We introduce the Mutual Information Machine (MIM), an autoencoder framework for learning joint distributions over observations and latent states. The model formulation reflects two key design principles: 1) symmetry, to encourage the encoder and decoder to learn different factorizations of the same underlying distribution; and 2) mutual information, to encourage the learning of useful representations for downstream tasks. The objective comprises the Jensen-Shannon divergence between the encoding and decoding joint distributions, plus a mutual information regularizer. We show that this can be bounded by a tractable cross-entropy loss between the true model and a parameterized approximation, and relate this to maximum likelihood estimation and variational autoencoders. Experiments show that MIM is capable of learning a latent representation with high mutual information, and good unsupervised clustering, while providing NLL comparable to VAE (with a sufficiently expressive architecture).""","""The paper proposes an autoencoder framework for learning joint distributions over observations and latent states. The reviewers expressed concerns regarding the motivation for this work, the presentation with respect to prior work, and unconvincing experiments. In its current form the paper is not ready for acceptance to ICLR-2020."""
1586,"""Global graph curvature""","['graph curvature', 'graph embedding', 'hyperbolic space', 'distortion', 'Ollivier curvature', 'Forman curvature']","""Recently, non-Euclidean spaces became popular for embedding structured data. However, determining suitable geometry and, in particular, curvature for a given dataset is still an open problem. In this paper, we define a notion of global graph curvature, specifically catered to the problem of embedding graphs, and analyze the problem of estimating this curvature using only graph-based characteristics (without actual graph embedding). We show that optimal curvature essentially depends on dimensionality of the embedding space and loss function one aims to minimize via embedding. We review the existing notions of local curvature (e.g., Ollivier-Ricci curvature) and analyze their properties theoretically and empirically. In particular, we show that such curvatures are often unable to properly estimate the global one. Hence, we propose a new estimator of global graph curvature specifically designed for zero-one loss function.""","""This paper studies the problem of embedding graphs into continuous spaces. The authors focus on determining the correct dimension and curvature to minimize distortion or a threshold loss of the embedding. The authors consider a variety of existing notions of curvature for graphs, introduce a notion of global curvature for the entire graph, and how to efficiently compute it. Reviewers were positive about the problem under study, but agreed that the current manuscript somewhat lacks a clear contribution. They also pointed out that the goal of using a global notion of curvature should be better motivated. For these reasons, the AC recommends rejection at this time. """
1587,"""Policy path programming""","['markov decision process', 'planning', 'hierarchical', 'reinforcement learning']","""We develop a normative theory of hierarchical model-based policy optimization for Markov decision processes resulting in a full-depth, full-width policy iteration algorithm. This method performs policy updates which integrate reward information over all states at all horizons simultaneously thus sequentially maximizing the expected reward obtained per algorithmic iteration. Effectively, policy path programming ascends the expected cumulative reward gradient in the space of policies defined over all state-space paths. An exact formula is derived which finitely parametrizes these path gradients in terms of action preferences. Policy path gradients can be directly computed using an internal model thus obviating the need to sample paths in order to optimize in depth. They are quadratic in successor representation entries and afford natural generalizations to higher-order gradient techniques. In simulations, it is shown that intuitive hierarchical reasoning is emergent within the associated policy optimization dynamics.""","""The reviewers were not convinced about the significance of this work. There is no empirical or theoretical result justifying why this method has advantages over the existing methods. The reviewers also raised concerns related to the scalability of the proposal. Since none of the reviewers were enthusiastic about the paper, including the expert ones, I cannot recommend acceptance of this work."""
1588,"""Learning to Group: A Bottom-Up Framework for 3D Part Discovery in Unseen Categories""","['Shape Segmentation', 'Zero-Shot Learning', 'Learning Representations']","""We address the problem of learning to discover 3D parts for objects in unseen categories. Being able to learn the geometry prior of parts and transfer this prior to unseen categories pose fundamental challenges on data-driven shape segmentation approaches. Formulated as a contextual bandit problem, we propose a learning-based iterative grouping framework which learns a grouping policy to progressively merge small part proposals into bigger ones in a bottom-up fashion. At the core of our approach is to restrict the local context for extracting part-level features, which encourages the generalizability to novel categories. On a recently proposed large-scale fine-grained 3D part dataset, PartNet, we demonstrate that our method can transfer knowledge of parts learned from 3 training categories to 21 unseen testing categories without seeing any annotated samples. Quantitative comparisons against four strong shape segmentation baselines show that we achieve the state-of-the-art performance.""","""This paper presents and evaluates a technique for unsupervised object part discovery in 3d -- i.e. grouping points of a point cloud into coherent parts for an object that has not been seen before. The paper received 3 reviews from experts working in this area. R1 recommended Weak Accept, and identified some specific technical questions for the authors to address in the response (which the authors provided and R1 seemed satisfied). R2 recommends Weak Reject, and indicates an overall positive view of the paper but felt the experimental results were somewhat weak and posed several specific questions to the reviewers. The authors' response convincingly addressed these questions. R3 recommends Accept, but suggests some additional qualitative examples and ablation studies. The author response again addresses these. Overall, the reviews indicate that this is a good paper with some specific questions and concerns that can be addressed; the AC thus recommends a (Weak) Accept based on the reviews and author responses."""
1589,"""Dynamical System Embedding for Efficient Intrinsically Motivated Artificial Agents""","['intrinsic motivation', 'empowerment', 'latent representation', 'encoder']","""Mutual Information between agent Actions and environment States (MIAS) quantifies the influence of agent on its environment. Recently, it was found that intrinsic motivation in artificial agents emerges from the maximization of MIAS. For example, empowerment is an information-theoretic approach to intrinsic motivation, which has been shown to solve a broad range of standard RL benchmark problems. The estimation of empowerment for arbitrary dynamics is a challenging problem because it relies on the estimation of MIAS. Existing approaches rely on sampling, which have formal limitations, requiring exponentially many samples. In this work, we develop a novel approach for the estimation of empowerment in unknown arbitrary dynamics from visual stimulus only, without sampling for the estimation of MIAS. The core idea is to represent the relation between action sequences and future states by a stochastic dynamical system in latent space, which admits an efficient estimation of MIAS by the ``Water-Filling"" algorithm from information theory. We construct this embedding with deep neural networks trained on a novel objective function and demonstrate our approach by numerical simulations of non-linear continuous-time dynamical systems. We show that the designed embedding preserves information-theoretic properties of the original dynamics, and enables us to solve the standard AI benchmark problems.""","""The paper proposes a novel method for embedding sequences of states and actions into a latent representation that enables efficient estimation of empowerment for an RL system. They use empowerment as intrinsic reward for safe exploration. While the reviewers agree that this paper has promise, they also agree that it is not quite ready for publication in its current state. In particular, the paper is lacking a theoretical justification for the proposed approach, the definition of empowerment used by the authors raised questions, and the manuscript would benefit from more clear and detailed description of the method. For these reasons I recommend rejection."""
1590,"""FasterSeg: Searching for Faster Real-time Semantic Segmentation""","['neural architecture search', 'real-time', 'segmentation']","""We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods. Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models. To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, that effectively overcomes our observed phenomenons that the searched networks are prone to ""collapsing"" to low-latency yet poor-accuracy models. Moreover, we seamlessly extend FasterSeg to a new collaborative search (co-searching) framework, simultaneously searching for a teacher and a student network in the same single run. The teacher-student distillation further boosts the student models accuracy. Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg. For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy.""","""This paper presents neural architecture search for semantic segmentation, with search space that integrates multi-resolution branches. The method also uses a regularization to overcome the issue of learned networks collapsing to low-latency but poor accuracy models. Another interesting contribution is a collaborative search procedure to simultaneously search for student and teacher networks in a single run. All reviewers agree that the proposed method is well-motivated and shows promising empirical results. Author response satisfactorily addressed most of the points raised by the reviewers. I recommend acceptance. """
1591,"""Model-free Learning Control of Nonlinear Stochastic Systems with Stability Guarantee""","['Reinforcement learning', 'nonlinear stochastic system', 'Lyapunov']","""Reinforcement learning (RL) offers a principled way to achieve the optimal cumulative performance index in discrete-time nonlinear stochastic systems, which are modeled as Markov decision processes. Its integration with deep learning techniques has promoted the field of deep RL with an impressive performance in complicated continuous control tasks. However, from a control-theoretic perspective, the first and most important property of a system to be guaranteed is stability. Unfortunately, stability is rarely assured in RL and remains an open question. In this paper, we propose a stability guaranteed RL framework which simultaneously learns a Lyapunov function along with the controller or policy, both of which are parameterized by deep neural networks, by borrowing the concept of Lyapunov function from control theory. Our framework can not only offer comparable or superior control performance over state-of-the-art RL algorithms, but also construct a Lyapunov function to validate the closed-loop stability. In the simulated experiments, our approach is evaluated on several well-known examples including classic CartPole balancing, 3-dimensional robot control and control of synthetic biology gene regulatory networks. Compared with RL algorithms without stability guarantee, our approach can enable the system to recover to the operating point when interfered by uncertainties such as unseen disturbances and system parametric variations to a certain extent. ""","""The authors propose a method to guarantee the stability of a learnt continuous controller by optimizing the objective through a Lyapunov critic. The method is demonstrated on low dimensional continuous control problems such as cart pole. The reviewers were mixed in their opinion of the paper, especially after the authors' rebuttal. The concerns center around some of the authors' claims regarding theoretical results, in particular that stability guarantees can be asserted for a model-free controller. This claim seems to be incorrect especially on novel data where stability cannot be guaranteed, thus indicating that 'robust controller' might be a better description. There are also concerns about the novelty and the contributions of the paper. Overall, the method is promising but the claims need to be carefully written. The recommendation is to reject the paper at this time."""
1592,"""Collaborative Generated Hashing for Market Analysis and Fast Cold-start Recommendation""","['Recommender system', 'generated model', 'market analysis', 'hash', 'cold start']","""Cold-start and efficiency issues of the Top-k recommendation are critical to large-scale recommender systems. Previous hybrid recommendation methods are effective to deal with the cold-start issues by extracting real latent factors of cold-start items(users) from side information, but they still suffer low efficiency in online recommendation caused by the expensive similarity search in real latent space. This paper presents a collaborative generated hashing (CGH) to improve the efficiency by denoting users and items as binary codes, which applies to various settings: cold-start users, cold-start items and warm-start ones. Specifically, CGH is designed to learn hash functions of users and items through the Minimum Description Length (MDL) principle; thus, it can deal with various recommendation settings. In addition, CGH initiates a new marketing strategy through mining potential users by a generative step. To reconstruct effective users, the MDL principle is used to learn compact and informative binary codes from the content data. Extensive experiments on two public datasets show the advantages for recommendations in various settings over competing baselines and analyze the feasibility of the application in marketing.""","""The presented work has worse accuracy than existing (and not all the baselines are given correctly) and does not provide the running time comparison. All reviewers recommend rejection, and I am with them."""
1593,"""Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling""","['imitation learning', 'model-based imitation learning', 'model-based RL', 'behavior cloning', 'covariate shift']","""Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently. However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned policy. We introduce a notion of conservatively extrapolated value functions, which provably lead to policies with self-correction. We design an algorithm Value Iteration with Negative Sampling (VINS) that practically learns such value functions with conservative extrapolation. We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks. We also propose the algorithm of using VINS to initialize a reinforcement learning algorithm, which is shown to outperform prior works in sample efficiency.""","""The paper introduces Value Iteration with Negative Sampling (VINS) algorithm as a method to accelerate RL using expert demonstrations. VINS learns an initial value function that has a smaller value at states not encounter during the demonstrations. The reviewers raised several issues regarding the assumptions, theoretical results, and experiments. The method seems to be most natural for robotic control problems. Nonetheless, it seems that the rebuttal addressed most of the concerns, and two of the reviewers increased their scores accordingly. Since we have three Weak Accepts, I believe this paper can be accepted at the conference."""
1594,"""CLN2INV: Learning Loop Invariants with Continuous Logic Networks""","['loop invariants', 'deep learning', 'logic learning']","""Program verification offers a framework for ensuring program correctness and therefore systematically eliminating different classes of bugs. Inferring loop invariants is one of the main challenges behind automated verification of real-world programs which often contain many loops. In this paper, we present the Continuous Logic Network (CLN), a novel neural architecture for automatically learning loop invariants directly from program execution traces. Unlike existing neural networks, CLNs can learn precise and explicit representations of formulas in Satisfiability Modulo Theories (SMT) for loop invariants from program execution traces. We develop a new sound and complete semantic mapping for assigning SMT formulas to continuous truth values that allows CLNs to be trained efficiently. We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms existing approaches on the popular Code2Inv dataset. CLN2INV is the first tool to solve all 124 theoretically solvable problems in the Code2Inv dataset. Moreover, CLN2INV takes only 1.1 second on average for each problem, which is 40 times faster than existing approaches. We further demonstrate that CLN2INV can even learn 12 significantly more complex loop invariants than the ones required for the Code2Inv dataset.""","""This paper implements a novel architecture for inferring loop invariants in verification (though the paper bridges to compilers). The idea is novel and the paper is well executed. It is not the usual topic for ICLR, but not presents an important application of deep learning done well, and it has interesting implications for program synthesis. Therefore, I recommend acceptance."""
1595,"""Improved Sample Complexities for Deep Neural Networks and Robust Classification via an All-Layer Margin""","['deep learning theory', 'generalization bounds', 'adversarially robust generalization', 'data-dependent generalization bounds']","""For linear classifiers, the relationship between (normalized) output margin and generalization is captured in a clear and simple bound a large output margin implies good generalization. Unfortunately, for deep models, this relationship is less clear: existing analyses of the output margin give complicated bounds which sometimes depend exponentially on depth. In this work, we propose to instead analyze a new notion of margin, which we call the all-layer margin. Our analysis reveals that the all-layer margin has a clear and direct relationship with generalization for deep models. This enables the following concrete applications of the all-layer margin: 1) by analyzing the all-layer margin, we obtain tighter generalization bounds for neural nets which depend on Jacobian and hidden layer norms and remove the exponential dependency on depth 2) our neural net results easily translate to the adversarially robust setting, giving the first direct analysis of robust test error for deep networks, and 3) we present a theoretically inspired training algorithm for increasing the all-layer margin. Our algorithm improves both clean and adversarially robust test performance over strong baselines in practice.""","""This works presents a new and interesting notion of margin for deep neural networks (that incorporates representation at all layers). It then develops generalization bounds based on the introduced margin. The reviewers pointed some concerns, including some notation issues, complexity in case of residual networks, removal of exponential dependence on depth, and dependence on a hard to compute quantity - \kapp^{adv}. Some of these concerns were addressed by the authors. At the end, most of the reviewers find the notion of all-layer margin introduced in this paper a very novel and promising idea for characterizing generalization in deep networks. Agreeing with reviewers, I recommend accept. However, I request the authors to accommodate remaining comments /concerns raised by R1 in the final version of your paper. In particular, in your response to R1 you mentioned for one case you saw improvement even with dropout, but that is not mentioned in the revision; Please include related details in the draft. """
1596,"""SRDGAN: learning the noise prior for Super Resolution with Dual Generative Adversarial Networks""",['Super Resolution GAN Denoise'],"""Single Image Super Resolution (SISR) is the task of producing a high resolution (HR) image from a given low-resolution (LR) image. It is a well researched prob- lem with extensive commercial applications like digital camera, video compres- sion, medical imaging, etc. Most recent super resolution works focus on the fea- ture learning architecture, like Chao Dong (2016); Dong et al. (2016); Wang et al. (2018b); Ledig et al. (2017). However, these works suffer from the following chal- lenges: (1) The low-resolution (LR) training images are artificially synthesized us- ing HR images with bicubic downsampling, which have much more information than real demosaic-upscaled images. The mismatch between training and realistic mobile data heavily blocks the effect on practical SR problem. (2) These methods cannot effectively handle the blind distortions during super resolution in practical applications. In this work, an end-to-end novel framework, including high-to-low network and low-to-high network, is proposed to solve the above problems with dual Generative Adversarial Networks (GAN). First, the above mismatch prob- lems are well explored with the high-to-low network, where clear high-resolution image and the corresponding realistic low-resolution image pairs can be gener- ated. With high-to-low network, a large-scale General Mobile Super Resolution Dataset, GMSR, is proposed, which can be utilized for training or as a bench- mark for super resolution methods. Second, an effective low-to-high network (super resolution network) is proposed in the framework. Benefiting from the GMSR dataset and novel training strategies, the proposed super resolution model can effectively handle detail recovery and denoising at the same time.""","""All reviewers agree that the authors have done a great job identifying weaknesses with the current SOTA in super-resolution. However, there is also agreement that the proposed approach may be too simple to accurately capture a range of real camera distortions, and more comparisons to the SOTA are needed. While this paper certainly has merits and opens the door for strong work in the future, there is not enough support to accept the paper in its current form."""
1597,"""LEARNING DIFFICULT PERCEPTUAL TASKS WITH HODGKIN-HUXLEY NETWORKS""","['conductance-weighted averaging', 'neural modeling', 'normalization methods']","""This paper demonstrates that a computational neural network model using ion channel-based conductances to transmit information can solve standard computer vision datasets at near state-of-the-art performance. Although not fully biologically accurate, this model incorporates fundamental biophysical principles underlying the control of membrane potential and the processing of information by Ohmic ion channels. The key computational step employs Conductance-Weighted Averaging (CWA) in place of the traditional affine transformation, representing a fundamentally different computational principle. Importantly, CWA based networks are self-normalizing and range-limited. We also demonstrate for the first time that a network with excitatory and inhibitory neurons and nonnegative synapse strengths can successfully solve computer vision problems. Although CWA models do not yet surpass the current state-of-the-art in deep learning, the results are competitive on CIFAR-10. There remain avenues for improving these networks, e.g. by more closely modeling ion channel function and connectivity patterns of excitatory and inhibitory neurons found in the brain. ""","""The paper studies non-spiking Hudgkin-Huxley models and shows that under few simplifying assumptions the model can be trained using conventional backpropagation to yield accuracies almost comparable to state-of-the-art neural networks. Overall, the reviewers found the paper well-written, and the idea somewhat interesting, but criticized the experimental evaluation and potential low impact and interest to the community. While the method itself is sound, the overall assessment of the paper is somewhat below what's expected from papers accepted to ICLR, and Im thus recommending rejection."""
1598,"""PROVABLY BENEFITS OF DEEP HIERARCHICAL RL""","['hierarchical model', 'reinforcement learning', 'low regret', 'online learning', 'tabular reinforcement learning']","""Modern complex sequential decision-making problem often both low-level policy and high-level planning. Deep hierarchical reinforcement learning (Deep HRL) admits multi-layer abstractions which naturally model the policy in a hierarchical manner, and it is believed that deep HRL can reduce the sample complexity compared to the standard RL frameworks. We initiate the study of rigorously characterizing the complexity of Deep HRL. We present a model-based optimistic algorithm which demonstrates that the complexity of learning a near-optimal policy for deep HRL scales with the sum of number of states at each abstraction layer whereas standard RL scales with the product of number of states at each abstraction layer. Our algorithm achieves this goal by using the fact that distinct high-level states have similar low-level structures, which allows an efficient information exploitation and thus experiences from different high-level state-action pairs can be generalized to unseen state-actions. Overall, our result shows an exponential improvement using Deep HRL comparing to standard RL framework.""","""This paper pursues an ambitious goal to provide a theoretical analysis HRL in terms of regret bounds. However, the exposition of the ideas has severe clarity issues and the assumptions about HMDPs used are overly simplistic to have an impact in RL research. Finally, there is agreement between the reviewers and AC that the novelty of the proposed ideas is a weak factor and that the paper needs substantial revision."""
1599,"""Domain-invariant Learning using Adaptive Filter Decomposition""",[],"""Domain shifts are frequently encountered in real-world scenarios. In this paper, we consider the problem of domain-invariant deep learning by explicitly modeling domain shifts with only a small amount of domain-specific parameters in a Convolutional Neural Network (CNN). By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of basis elements, we show for the first time, both empirically and theoretically, that domain shifts can be effectively handled by decomposing a regular convolutional layer into a domain-specific basis layer and a domain-shared basis coefficient layer, while both remain convolutional. An input channel will now first convolve spatially only with each respective domain-specific basis to ``absorb"" domain variations, and then output channels are linearly combined using common basis coefficients trained to promote shared semantics across domains. We use toy examples, rigorous analysis, and real-world examples to show the framework's effectiveness in cross-domain performance and domain adaptation. With the proposed architecture, we need only a small set of basis elements to model each additional domain, which brings a negligible amount of additional parameters, typically a few hundred.""","""The paper proposes a domain-adaptive filter decomposition method via separating domain-specific and cross-domain features, towards learning invariant representations for unsupervised domain adaptation. Overall, this well-written paper is well motivated with a better technique for learning invariant representations using convolutional filters. Nonetheless, reviewers still have major concerns: 1) the novelty of the paper may be marginal given the significant line of recent work on learning domain-invariant representations; 2) when the label distributions differ, learning invariant representations can only lead to worse target generalizations; 3) the provided theory has an unclear connection to the presented filter decomposition method. The paper can be strengthened by further discussions on how to mitigate the aforementioned negative results. Hence I recommend rejection."""
1600,"""Measuring the Reliability of Reinforcement Learning Algorithms""","['reinforcement learning', 'metrics', 'statistics', 'reliability']","""Lack of reliability is a well-known issue for reinforcement learning (RL) algorithms. This problem has gained increasing attention in recent years, and efforts to improve it have grown substantially. To aid RL researchers and production users with the evaluation and improvement of reliability, we propose a set of metrics that quantitatively measure different aspects of reliability. In this work, we focus on variability and risk, both during training and after learning (on a fixed policy). We designed these metrics to be general-purpose, and we also designed complementary statistical tests to enable rigorous comparisons on these metrics. In this paper, we first describe the desired properties of the metrics and their design, the aspects of reliability that they measure, and their applicability to different scenarios. We then describe the statistical tests and make additional practical recommendations for reporting results. The metrics and accompanying statistical tools have been made available as an open-source library. We apply our metrics to a set of common RL algorithms and environments, compare them, and analyze the results.""","""Main content: This paper provides a unified way to provide robust statistics in evaluating the reliability of RL algorithms, especially deep RL algorithms. Though the metrics are not particularly novel, the investigation should be useful to the broader community as it compares seven specific evaluation metrics, including 'Dispersion across Time (DT): IQR across Time', 'Short-term Risk across Time (SRT): CVaR on Differences', 'Long-term Risk across Time (LRT): CVaR on Drawdown', 'Dispersion across Runs (DR): IQR across Runs', 'Risk across Runs (RR): CVaR across Runs', 'Dispersion across Fixed-Policy Rollouts (DF): IQR across Rollouts' and 'Risk across Fixed-Policy Rollouts (RF): CVaR across Rollouts'. The paper further proposed ranking and also confidence intervals based on bootstrapped samples, and compared against continuous control and discrete actions algorithms on Atari and OpenAI Gym. -- Discussion: The reviews clearly agree on accepting the paper, with a weak accept coming from a reviewer who does not know much about this subarea. Comments are mostly just directed at clarifications and completeness of description, which the authors have addressed. -- Recommendation and justification: This paper should be accepted due to its useful contributions toward doing a better job of measuring performance of RL."""
1601,"""Robust Local Features for Improving the Generalization of Adversarial Training""","['adversarial robustness', 'adversarial training', 'adversarial example', 'deep learning']","""Adversarial training has been demonstrated as one of the most effective methods for training robust models to defend against adversarial examples. However, adversarially trained models often lack adversarially robust generalization on unseen testing data. Recent works show that adversarially trained models are more biased towards global structure features. Instead, in this work, we would like to investigate the relationship between the generalization of adversarial training and the robust local features, as the robust local features generalize well for unseen shape variation. To learn the robust local features, we develop a Random Block Shuffle (RBS) transformation to break up the global structure features on normal adversarial examples. We continue to propose a new approach called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features by adversarial training on the RBS-transformed adversarial examples, and then transfers the robust local features into the training of normal adversarial examples. To demonstrate the generality of our argument, we implement RLFAT in currently state-of-the-art adversarial training frameworks. Extensive experiments on STL-10, CIFAR-10 and CIFAR-100 show that RLFAT significantly improves both the adversarially robust generalization and the standard generalization of adversarial training. Additionally, we demonstrate that our models capture more local features of the object on the images, aligning better with human perception.""","""Earlier work suggests that adversarial examples exploit local features and that more robust models rely on global features. The authors propose to exploit this insight by performing data augmentation in adversarial training, by cutting and reshuffling image block. They demonstrate the idea empirically and witness interesting gains. I think the technique is an interesting contribution, but empirically and as a tool. """
1602,"""Sign Bits Are All You Need for Black-Box Attacks""","['Black-box adversarial attack models', 'Deep Nets', 'Adversarial Examples', 'Black-Box Optimization', 'Zeroth-Order Optimization']","""We present a novel black-box adversarial attack algorithm with state-of-the-art model evasion rates for query efficiency under pseudo-formula and pseudo-formula metrics. It exploits a \textit{sign-based}, rather than magnitude-based, gradient estimation approach that shifts the gradient estimation from continuous to binary black-box optimization. It adaptively constructs queries to estimate the gradient, one query relying upon the previous, rather than re-estimating the gradient each step with random query construction. Its reliance on sign bits yields a smaller memory footprint and it requires neither hyperparameter tuning or dimensionality reduction. Further, its theoretical performance is guaranteed and it can characterize adversarial subspaces better than white-box gradient-aligned subspaces. On two public black-box attack challenges and a model robustly trained against transfer attacks, the algorithm's evasion rates surpass all submitted attacks. For a suite of published models, the algorithm is pseudo-formula less failure-prone while spending pseudo-formula fewer queries versus the best combination of state of art algorithms. For example, it evades a standard MNIST model using just pseudo-formula queries on average. Similar performance is observed on a standard IMAGENET model with an average of pseudo-formula queries.""","""This paper presents a novel black-box adversarial attack algorithm, which exploits a sign-based rather than magnitude-based, gradient estimator for black-box optimization. It also adaptively constructs queries to estimate the gradient. The proposed approach outperforms many state-of-the-art black-box attack methods in terms of query complexity. There is a unanimous agreement to accept this paper."""
1603,"""A New Multi-input Model with the Attention Mechanism for Text Classification""","['Natural Language Processing', 'Text Classification', 'Densent', 'Multi-input Model', 'Attention Mechanism']","""Recently, deep learning has made extraordinary achievements in text classification. However, most of present models, especially convolutional neural network (CNN), do not extract long-range associations, global representations, and hierarchical features well due to their relatively shallow and simple structures. This causes a negative effect on text classification. Moreover, we find that there are many express methods of texts. It is appropriate to design the multi-input model to improve the classification effect. But most of models of text classification only use words or characters and do not use the multi-input model. Inspired by the above points and Densenet (Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 47004708, 2017.), we propose a new text classification model, which uses words, characters, and labels as input. The model, which is a deep CNN with a novel attention mechanism, can effectively leverage the input information and solve the above issues of the shallow model. We conduct experiments on six large text classification datasets. Our model achieves the state of the art results on all datasets compared to multiple baseline models.""","""This paper proposes a CNN-based text classification model that uses words, characters, and labels as its input. It also presents an attention block to replace the pooling operation that is typically used in a CNN. The proposed method is evaluated on six benchmark classification datasets, achieving reasonably good results. While the proposed method performs reasonably well compared to baselines in the papers, all reviewers pointed out that there is no discussion or comparison with existing SotA based on pretrained models (e.g., BERT, XLNet), which would strengthen the main claim of the paper. All three reviewers also suggested that the writing of the paper could be improved. The authors did not respond to these reviews, so there was little discussion needed to arrive at a consensus. I agree with all reviewers and recommend to reject the paper."""
1604,"""Neural Phrase-to-Phrase Machine Translation""",[],"""We present Neural Phrase-to-Phrase Machine Translation (\nppmt), a phrase-based translation model that uses a novel phrase-attention mechanism to discover relevant input (source) segments to generate output (target) phrases. We propose an efficient dynamic programming algorithm to marginalize over all possible segments at training time and use a greedy algorithm or beam search for decoding. We also show how to incorporate a memory module derived from an external phrase dictionary to \nppmt{} to improve decoding. %that allows %the model to be trained faster %\nppmt is significantly faster %than existing neural phrase-based %machine translation method by \cite{huang2018towards}. Experiment results demonstrate that \nppmt{} outperforms the best neural phrase-based translation model \citep{huang2018towards} both in terms of model performance and speed, and is comparable to a state-of-the-art Transformer-based machine translation system \citep{vaswani2017attention}.""","""This paper describes how they extend a previous phrase-based neural machine translation model to incorporate external dictionaries. The reviewers mention the small scale of the experiments, and the lack of clarity in the writing, and missing discussion on computational complexity. Even though the method seems to have the potential to impact the field, the paper is currently not strong enough for publication. The authors have not engaged in the discussion at all. """
1605,"""Stochastic Mirror Descent on Overparameterized Nonlinear Models""","['deep learning', 'optimization', 'overparameterized', 'stochastic gradient descent', 'mirror descent']","""Most modern learning problems are highly overparameterized, meaning that the model has many more parameters than the number of training data points, and as a result, the training loss may have infinitely many global minima (in fact, a manifold of parameter vectors that perfectly interpolates the training data). Therefore, it is important to understand which interpolating solutions we converge to, how they depend on the initialization point and the learning algorithm, and whether they lead to different generalization performances. In this paper, we study these questions for the family of stochastic mirror descent (SMD) algorithms, of which the popular stochastic gradient descent (SGD) is a special case. Recently it has been shown that, for overparameterized linear models, SMD converges to the global minimum that is closest (in terms of the Bregman divergence of the mirror used) to the initialization point, a phenomenon referred to as implicit regularization. Our contributions in this paper are both theoretical and experimental. On the theory side, we show that in the overparameterized nonlinear setting, if the initialization is close enough to the manifold of global optima, SMD with sufficiently small step size converges to a global minimum that is approximately the closest global minimum in Bregman divergence, thus attaining approximate implicit regularization. For highly overparametrized models, this closeness comes for free: the manifold of global optima is so high dimensional that with high probability an arbitrarily chosen initialization will be close to the manifold. On the experimental side, our extensive experiments on the MNIST and CIFAR-10 datasets, using various initializations, various mirror descents, and various Bregman divergences, consistently confirms that this phenomenon indeed happens in deep learning: SMD converges to the closest global optimum to the initialization point in the Bregman divergence of the mirror used. Our experiments further indicate that there is a clear difference in the generalization performance of the solutions obtained from different SMD algorithms. Experimenting on the CIFAR-10 dataset with different regularizers, l1 to encourage sparsity, l2 (yielding SGD) to encourage small Euclidean norm, and l10 to discourage large components in the parameter vector, consistently and definitively shows that, for small initialization vectors, l10-SMD has better generalization performance than SGD, which in turn has better generalization performance than l1-SMD. This surprising, and perhaps counter-intuitive, result strongly suggests the importance of a comprehensive study of the role of regularization, and the choice of the best regularizer, to improve the generalization performance of deep networks.""","""This paper takes results related to the convergence and implicit regularization of stochastic mirror descent, as previously applied within overparameterized linear models, and extends them to the nonlinear case. Among other things, conditions are derived for guaranteeing convergence to a global minimizer that is (nearly) closest to the initialization with respect to a divergence that depends upon the mirror potential. Overall the paper is well-written and likely at least somewhat accessible even for non-experts in this field. 
That being said, two reviewers voted to reject while one chose accept; however, during the rebuttal period the accept reviewer expressed a somewhat borderline sentiment. As for the reviewers that voted to reject, a common criticism was the perceived similarity with reference (Azizan and Hassibi, 2019), as well as unsettled concerns about the reasonableness of the assumptions involved (e.g., Assumption 1). With respect to the former, among other similarities the proof technique from both papers relies heavily on Lemma 6. It was then felt that this undercut the novelty somewhat. Beyond this though, even the accept reviewer raised an unsettled issue regarding the ease of finding an initialization point close to the manifold that nonetheless satisfies the conditions of Assumption 1. In other words, as networks become more complex such that points are closer to the manifold of optimal solutions, further non-convexity could be introduced such that the non-negativity of the stated divergence becomes more difficult to achieve. While the author response to this point is reasonable, it feels a bit like thoughtful speculation forged in the crunch time of a short rebuttal period, and possibly subject to change upon further reflection. In this regard a less time-constrained revision could be beneficial (including updates to address the other points mentioned above), and I am confident that this work can be positively received at another venue in the near future."""
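For readers unfamiliar with mirror descent, the update analyzed in the record above takes a gradient step in the dual space defined by the mirror potential and then maps back. A numpy sketch with the q-norm potential psi(w) = ||w||_q^q / q, the family behind the paper's l1/l2/l10 experiments, follows; the step size and initialization here are arbitrary illustrative choices.

```python
import numpy as np

def smd_step(w, grad, lr=0.01, q=3.0):
    """One stochastic mirror descent step with potential psi(w) = ||w||_q^q / q,
    whose mirror map is grad psi(w)_i = sign(w_i) * |w_i|**(q - 1).
    Setting q = 2 makes the mirror map the identity and recovers plain SGD."""
    dual = np.sign(w) * np.abs(w) ** (q - 1.0)       # map weights to the dual space
    dual = dual - lr * grad                          # gradient step in the dual
    return np.sign(dual) * np.abs(dual) ** (1.0 / (q - 1.0))  # inverse mirror map

w = np.random.randn(5) * 0.1
g = np.random.randn(5)
w = smd_step(w, g, q=10.0)   # q = 10 discourages large components, as in the paper
```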
1606,"""Toward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control""","['deep learning', 'reinforcement learning', 'robustness', 'adversarial examples']","""Deep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks. Prior works mostly focus on model-free adversarial attacks and agents with discrete actions. In this work, we study the problem of continuous control agents in deep RL with adversarial attacks and propose the first two-step algorithm based on learned model dynamics. Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free based attacks baselines in degrading agent performance as well as driving agents to unsafe states. ""","""This paper considers adversarial attacks in continuous action model-based deep reinforcement learning. An optimisation-based approach is presented, and evaluated on Mujoco tasks. There were two main concerns from the reviewers. The first was that the approach requires strong assumptions, but in the rebuttal some relaxations were demonstrated (e.g., not attacking every step). Additionally, there were issues raised with the choice of baselines, but in the discussion the reviewers did not agree on any other reasonable baselines to use. This is a novel and interesting contribution nonetheless, which could open the field to much additional discussion, and so should be accepted."""
1607,"""Deep Graph Matching Consensus""","['graph matching', 'graph neural networks', 'neighborhood consensus', 'deep learning']","""This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs. First, we use localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes. Secondly, we employ synchronous message passing networks to iteratively re-rank the soft correspondences to reach a matching consensus in local neighborhoods between graphs. We show, theoretically and empirically, that our message passing scheme computes a well-founded measure of consensus for corresponding neighborhoods, which is then used to guide the iterative re-ranking process. Our purely local and sparsity-aware architecture scales well to large, real-world inputs while still being able to recover global correspondences consistently. We demonstrate the practical effectiveness of our method on real-world tasks from the fields of computer vision and entity alignment between knowledge graphs, on which we improve upon the current state-of-the-art.""","""The paper proposed an end-to-end network architecture for graph matching problems, where first a GNN is applied to compute the initial soft correspondence, and then a message passing network is applied to attempt to resolve structural mismatch. The reviewers agree that the second component (message passing) is novel, and after the rebuttal period, additional experiments were provided by the authors to demonstrate the effectiveness of this. Overall this is an interesting network solution for graph-matching, and would be a worthwhile addition to the literature."""
1608,"""Plug and Play Language Models: A Simple Approach to Controlled Text Generation""","['controlled text generation', 'generative models', 'conditional generative models', 'language modeling', 'transformer']","""Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper.""","""This paper proposes a simple plug-and-play language model approach to the problem of controlled language generation. The problem is important and timely, and the approach is simple yet effective. Reviewers had some discussions whether 1) there is enough novelty, 2) evaluation task really shows effectiveness, and 3) this paper will inspire future research directions. After discussions of the above points, reviewers are leaning more positive, and I reflect their positive sentiment by recommending it to be accepted. I look forward to seeing this work presented at ICLR."""
1609,"""CROSS-DOMAIN CASCADED DEEP TRANSLATION""","['computer vision', 'image translation', 'generative adversarial networks']","""In recent years we have witnessed tremendous progress in unpaired image-to-image translation methods, propelled by the emergence of DNNs and adversarial training strategies. However, most existing methods focus on transfer of style and appearance, rather than on shape translation. The latter task is challenging, due to its intricate non-local nature, which calls for additional supervision. We mitigate this by descending the deep layers of a pre-trained network, where the deep features contain more semantics, and applying the translation between these deep features. Specifically, we leverage VGG, which is a classification network, pre-trained with large-scale semantic supervision. Our translation is performed in a cascaded, deep-to-shallow, fashion, along the deep feature hierarchy: we first translate between the deepest layers that encode the higher-level semantic content of the image, proceeding to translate the shallower layers, conditioned on the deeper ones. We show that our method is able to translate between different domains, which exhibit significantly different shapes. We evaluate our method both qualitatively and quantitatively and compare it to state-of-the-art image-to-image translation methods. Our code and trained models will be made available.""","""The paper addresses image translation by extending prior models, e.g. CycleGAN, to domain pairs that have significantly different shape variations. The main technical idea is to apply the translation directly on the deep feature maps (instead of on the pixel level). While acknowledging that the proposed model is potentially useful, the reviewers raised several important concerns: (1) ill-posed formulation of the problem and what is desirable, (2) using fine-tuned/pre-trained VGG features, (3) computational cost of the proposed approach, i.e. training a cascade of pairs of translators (one pair per layer). AC can confirm that all three reviewers have read the author responses. AC suggests, in its current state the manuscript is not ready for a publication. We hope the reviews are useful for improving and revising the paper. """
1610,"""Weighted Empirical Risk Minimization: Transfer Learning based on Importance Sampling""","['statistical learning theory', 'importance sampling', 'positive unlabeled (PU) learning', 'selection bias']","""We consider statistical learning problems, when the distribution pseudo-formula of the training observations \ldots,\; Z'_n differs from the distribution pseudo-formula involved in the risk one seeks to minimize (referred to as the \textit{test distribution}) but is still defined on the same measurable space as pseudo-formula and dominates it. In the unrealistic case where the likelihood ratio pseudo-formula is known, one may straightforwardly extends the Empirical Risk Minimization (ERM) approach to this specific \textit{transfer learning} setup using the same idea as that behind Importance Sampling, by minimizing a weighted version of the empirical risk functional computed from the 'biased' training data pseudo-formula with weights pseudo-formula . Although the \textit{importance function} pseudo-formula is generally unknown in practice, we show that, in various situations frequently encountered in practice, it takes a simple form and can be directly estimated from the pseudo-formula 's and some auxiliary information on the statistical population pseudo-formula . By means of linearization techniques, we then prove that the generalization capacity of the approach aforementioned is preserved when plugging the resulting estimates of the pseudo-formula 's into the weighted empirical risk. Beyond these theoretical guarantees, numerical results provide strong empirical evidence of the relevance of the approach promoted in this article.""","""This paper aims to address transfer learning by importance weighted ERM that estimates a density ratio from the given sample and some auxiliary information on the population. Several learning bounds were proven to promote the use of importance weighted ERM. Reviewers and AC feel that the novelty of this paper is modest given the rich relevant literature and the practical use of this paper may be limited. The discussion with related theoretical work such as generalization bound of PU learning can be expanded significantly. The presentation can be largely improved, especially in the experiment part. The rebuttal is somewhat subjective and unconvincing to address the concerns. Hence I recommend rejection."""
1611,"""Improving Sample Efficiency in Model-Free Reinforcement Learning from Images""","['reinforcement learning', 'model-free', 'off-policy', 'image-based reinforcement learning', 'continuous control']","""Training an agent to solve control tasks directly from high-dimensional images with model-free reinforcement learning (RL) has proven difficult. The agent needs to learn a latent representation together with a control policy to perform the task. Fitting a high-capacity encoder using a scarce reward signal is not only extremely sample inefficient, but also prone to suboptimal convergence. Two ways to improve sample efficiency are to learn a good feature representation and use off-policy algorithms. We dissect various approaches of learning good latent features, and conclude that the image reconstruction loss is the essential ingredient that enables efficient and stable representation learning in image-based RL. Following these findings, we devise an off-policy actor-critic algorithm with an auxiliary decoder that trains end-to-end and matches state-of-the-art performance across both model-free and model-based algorithms on many challenging control tasks. We release our code to encourage future research on image-based RL.""","""The paper investigates how sample efficiency of image based model-free RL can be improved by including an image reconstruction loss as an auxiliary task and applies it to soft actor-critic. The method is demonstrated to yield a substantial improvement compared to SAC learned directly from pixels, and comparable performance to other prior works, such as SLAC and PlaNet, but with a simpler learning setup. The reviewers generally appreciate the clarity of presentation and good experimental evaluation. However, all reviewers raise concerns regarding limited novelty, as auxiliary losses for RL have been studied before, and the contribution is mainly in the design choices of the implementation. In this view, and given that the results are on a par with SOTA, the contribution of this paper seems too incremental for publishing in this venue, and Im recommending rejection. """
1612,"""Adversarially Robust Generalization Just Requires More Unlabeled Data""","['Adversarial Robustness', 'Semi-supervised Learning']","""Neural network robustness has recently been highlighted by the existence of adversarial examples. Many previous works show that the learned networks do not perform well on perturbed test data, and significantly more labeled data is required to achieve adversarially robust generalization. In this paper, we theoretically and empirically show that with just more unlabeled data, we can learn a model with better adversarially robust generalization. The key insight of our results is based on a risk decomposition theorem, in which the expected robust risk is separated into two parts: the stability part which measures the prediction stability in the presence of perturbations, and the accuracy part which evaluates the standard classification accuracy. As the stability part does not depend on any label information, we can optimize this part using unlabeled data. We further prove that for a specific Gaussian mixture problem, adversarially robust generalization can be almost as easy as the standard generalization in supervised learning if a sufficiently large amount of unlabeled data is provided. Inspired by the theoretical findings, we further show that a practical adversarial training algorithm that leverages unlabeled data can improve adversarial robust generalization on MNIST and Cifar-10.""","""This work starts with a decomposition of the adversarial risk into two terms: the first is the usual risk, while the second is a stability term, that captures the possible effect of an adversarial perturbation. The insight of this work is that this second term can be dealt with using unlabelled data, which is often in plentiful supply. Unfortunately, the same ideas was developed concurrently and independently by several groups of authors. The reviewer all agreed that this particular version was not ready for publication. In two cases, the authors compared the work unfavorably with concurrent independent work. I will note that the main bound somewhat ignores the issue of overfitting that the second term deals with via the Rademacher bound. Unless one assumes one has unlimited unlabeled data, could one not get an arbitrarily biased view of robustness from the sample. Seems like a gap to fill."""
1613,"""GENN: Predicting Correlated Drug-drug Interactions with Graph Energy Neural Networks""","['graph neural networks', 'energy model', 'structure prediction', 'drug-drug-interaction']","""Gaining more comprehensive knowledge about drug-drug interactions (DDIs) is one of the most important tasks in drug development and medical practice. Recently graph neural networks have achieved great success in this task by modeling drugs as nodes and drug-drug interactions as links and casting DDI predictions as link prediction problems. However, correlations between link labels (e.g., DDI types) were rarely considered in existing works. We propose the graph energy neural network (\mname) to explicitly model link type correlations. We formulate the DDI prediction task as a structure prediction problem and introduce a new energy-based model where the energy function is defined by graph neural networks. Experiments on two real-world DDI datasets demonstrated that \mname is superior to many baselines without consideration of link type correlations and achieved pseudo-formula and pseudo-formula PR-AUC improvement on the two datasets, respectively. We also present a case study in which \mname can better capture meaningful DDI correlations compared with baseline models.""","""This paper studies the use of a graph neural network for drug-to-drug interaction (DDI) prediction task (an instance of a link prediction task with drugs as vertices and interaction as edges). In particular, the authors apply structured prediction energy networks (SPEN) and model the dependency structure of the labels by minimising an energy function. The authors empirically validate the proposed approach against feedforward GNNs on two DDI prediction tasks. The reviewers feel that understanding drug-drug interactions is an important task and that the work is well motivated. However, the reviewers argued that the proposed methodology is not novel enough to merit publication at ICLR and that some conclusions are not supported by the empirical analysis. For the former, the benefits of the semi-supervised design need to be clearly and concisely presented. For the latter, providing a more convincing practical benefit would greatly improve the manuscript. As such, I will recommend the rejection of this paper at the current state."""
1614,"""Toward Understanding The Effect of Loss Function on The Performance of Knowledge Graph Embedding""","['Knowledge graph embedding', 'Translation based embedding', 'loss function', 'relation pattern']","""Knowledge graphs (KGs) represent world's facts in structured forms. KG completion exploits the existing facts in a KG to discover new ones. Translation-based embedding model (TransE) is a prominent formulation to do KG completion. Despite the efficiency of TransE in memory and time, it suffers from several limitations in encoding relation patterns such as symmetric, reflexive etc. To resolve this problem, most of the attempts have circled around the revision of the score function of TransE i.e., proposing a more complicated score function such as Trans(A, D, G, H, R, etc) to mitigate the limitations. In this paper, we tackle this problem from a different perspective. We show that existing theories corresponding to the limitations of TransE are inaccurate because they ignore the effect of loss function. Accordingly, we pose theoretical investigations of the main limitations of TransE in the light of loss function. To the best of our knowledge, this has not been investigated so far comprehensively. We show that by a proper selection of the loss function for training the TransE model, the main limitations of the model are mitigated. This is explained by setting upper-bound for the scores of positive samples, showing the region of truth (i.e., the region that a triple is considered positive by the model). Our theoretical proofs with experimental results fill the gap between the capability of translation-based class of embedding models and the loss function. The theories emphasis the importance of the selection of the loss functions for training the models. Our experimental evaluations on different loss functions used for training the models justify our theoretical proofs and confirm the importance of the loss functions on the performance. ""","""The paper analyses the effect of different loss functions for TransE and argues that certain limitations of TransE can be mitigated by choosing more appropriate loss functions. The submission then proposes TransComplEx to further improve results. This paper received four reviews, with three recommending rejection, and one recommending weak acceptance. A main concern was in the clarity of motivating the different models. Another was in the relatively low performance of RotatE compared with [1], which was raised by multiple reviewers. The authors provided extensive responses to the concerns raised by the reviewers. However, at least the implementation of RotatE remains of concern, with the response of the authors indicating ""Please note that we couldnt use exactly the same setting of RotatE due to limitations in our infrastructure."" On the balance, a majority of reviewers felt that the paper was not suitable for publication in its current form."""
1615,"""Modelling the influence of data structure on learning in neural networks""","['Neural Networks', 'Generative models', 'Synthetic data sets', 'Generalisation', 'Stochastic Gradient descent']","""The lack of crisp mathematical models that capture the structure of real-world data sets is a major obstacle to the detailed theoretical understanding of deep neural networks. Here, we first demonstrate the effect of structured data sets by experimentally comparing the dynamics and the performance of two-layer networks trained on two different data sets: (i) an unstructured synthetic data set containing random i.i.d. inputs, and (ii) a simple canonical data set such as MNIST images. Our analysis reveals two phenomena related to the dynamics of the networks and their ability to generalise that only appear when training on structured data sets. Second, we introduce a generative model for data sets, where high-dimensional inputs lie on a lower-dimensional manifold and have labels that depend only on their position within this manifold. We call it the *hidden manifold model* and we experimentally demonstrate that training networks on data sets drawn from this model reproduces both the phenomena seen during training on MNIST.""","""The paper examines the idea that real world data is highly structured / lies on a low-dimensional manifold. The authors show differences in neural network dynamics when trained on structured (MNIST) vs. unstructured datasets (random), and show that ""structure"" can be captured by their new ""hidden manifold"" generative model that explicitly considers some low-dimensional manifold. The reviewers perceived a lack of actionable insights following the paper, since in general these ideas are known, and for MNIST to be a limited dataset, despite finding the paper generally clear and correct. Following the discussion, I must recommend rejection at this time, but highly encourage the authors to take the insights developed in the paper a bit further and submit to another venue. E.g. trying to improve our algorithms by considering the inductive bias of structure of the hidden manifold, or developing a systematic and quantifiable notion of structure for many different datasets that correlate with difficulty of training would both be great contributions."""
1616,"""Restricting the Flow: Information Bottlenecks for Attribution""","['Attribution', 'Informational Bottleneck', 'Interpretable Machine Learning', 'Explainable AI']","""Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks. For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image. In this work, we adopt the information bottleneck concept for attribution. By adding noise to intermediate feature maps, we restrict the flow of information and can quantify (in bits) how much information image regions provide. We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our methods outperform all baselines in five out of six settings. The methods information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision. ""","""All three reviewers strongly recommend accepting this paper. It is clear, novel, and a significant contribution to the field. Please take their suggestions into account in a camera ready version. Thanks!"""
1617,"""Multitask Soft Option Learning""","['Hierarchical Reinforcement Learning', 'Reinforcement Learning', 'Control as Inference', 'Options', 'Multitask Learning']","""We present Multitask Soft Option Learning (MSOL), a hierarchical multi-task framework based on Planning-as-Inference. MSOL extends the concept of Options, using separate variational posteriors for each task, regularized by a shared prior. The learned soft-options are temporally extended, allowing a higher-level master policy to train faster on new tasks by making decisions with lower frequency. Additionally, MSOL allows fine-tuning of soft-options for new tasks without unlearning previously useful behavior, and avoids problems with local minima in multitask training. We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines in challenging multi-task environments.""","""Apologies for only receiving two reviews. R2 gave a WR and R3 gave an A. Given the lack of 3rd review and split nature of the scores, the AC has closely scrutinized the paper/reviews/comments/rebuttal. Thoughts: - Paper is on interesting topic. - AC agrees with R2's concern about the evaluation not using more complex environments like Mujoco. Without evaluation on a standard benchmark, it is difficult to know objectively if the approach works. - AC agrees with authors that the DISTRAL approach forms a strong baseline. - Nevertheless, the experiments aren't super compelling either. - AC has some concerns about scaling issues w.r.t. model size & #tasks. The paper is very borderline, but the AC sides with R2's concerns and unfortunately feels the paper cannot be accepted without a stronger evaluation. With this, it would make a compelling paper. """
1618,"""Accelerating Monte Carlo Bayesian Inference via Approximating Predictive Uncertainty over the Simplex""",[],"""Estimating the predictive uncertainty of a Bayesian learning model is critical in various decision-making problems, e.g., reinforcement learning, detecting adversarial attack, self-driving car. As the model posterior is almost always intractable, most efforts were made on finding an accurate approximation the true posterior. Even though a decent estimation of the model posterior is obtained, another approximation is required to compute the predictive distribution over the desired output. A common accurate solution is to use Monte Carlo (MC) integration. However, it needs to maintain a large number of samples, evaluate the model repeatedly and average multiple model outputs. In many real-world cases, this is computationally prohibitive. In this work, assuming that the exact posterior or a decent approximation is obtained, we propose a generic framework to approximate the output probability distribution induced by model posterior with a parameterized model and in an amortized fashion. The aim is to approximate the true uncertainty of a specific Bayesian model, meanwhile alleviating the heavy workload of MC integration at testing time. The proposed method is universally applicable to Bayesian classification models that allow for posterior sampling. Theoretically, we show that the idea of amortization incurs no additional costs on approximation performance. Empirical results validate the strong practical performance of our approach.""","""This paper proposes to speed up Bayesian deep learning at test time by training a student network to approximate the BNN's output distribution. The idea is certainly a reasonable thing to try, and the writing is mostly good (though as some reviewers point out, certain sections might not be necessary). The idea is fairly obvious, though, so the question is whether the experimental results are impressive enough by themselves to justify acceptance. The method is able to get close to the performance achieved by Monte Carlo estimators with much lower cost, although there is a nontrivial drop in accuracy. This is probably worth paying if it achieves 500x computation reduction as claimed in the paper, though the practical gains are probably much smaller since Monte Carlo methods are rarely used with 500 samples. Overall, this seems a bit below the bar for ICLR. """
1619,"""Objective Mismatch in Model-based Reinforcement Learning""","['Model-based Reinforcement learning', 'dynamics model', 'reinforcement learning']","""Model-based reinforcement learning (MBRL) has been shown to be a powerful framework for data-efficiently learning control of continuous tasks. Recent work in MBRL has mostly focused on using more advanced function approximators and planning schemes, leaving the general framework virtually unchanged since its conception. In this paper, we identify a fundamental issue of the standard MBRL framework -- what we call the objective mismatch issue. Objective mismatch arises when one objective is optimized in the hope that a second, often uncorrelated, metric will also be optimized. In the context of MBRL, we characterize the objective mismatch between training the forward dynamics model w.r.t. the likelihood of the one-step ahead prediction, and the overall goal of improving performance on a downstream control task. For example, this issue can emerge with the realization that dynamics models effective for a specific task do not necessarily need to be globally accurate, and vice versa globally accurate models might not be sufficiently accurate locally to obtain good control performance on a specific task. In our experiments, we study this objective mismatch issue and demonstrate that the likelihood of the one-step ahead prediction is not always correlated with downstream control performance. This observation highlights a critical flaw in the current MBRL framework which will require further research to be fully understood and addressed. We propose an initial method to mitigate the mismatch issue by re-weighting dynamics model training. Building on it, we conclude with a discussion about other potential directions of future research for addressing this issue.""","""As the reviewers point out, this paper has potentially interesting ideas but it is in too preliminary state for publication at ICLR. """
1620,"""Deep Relational Factorization Machines""",[],"""Factorization Machines (FMs) is an important supervised learning approach due to its unique ability to capture feature interactions when dealing with high-dimensional sparse data. However, FMs assume each sample is independently observed and hence incapable of exploiting the interactions among samples. On the contrary, Graph Neural Networks (GNNs) has become increasingly popular due to its strength at capturing the dependencies among samples. But unfortunately, it cannot efficiently handle high-dimensional sparse data, which is quite common in modern machine learning tasks. In this work, to leverage their complementary advantages and yet overcome their issues, we proposed a novel approach, namely Deep Relational Factorization Machines, which can capture both the feature interaction and the sample interaction. In particular, we disclosed the relationship between the feature interaction and the graph, which opens a brand new avenue to deal with high-dimensional features. Finally, we demonstrate the effectiveness of the proposed approach with experiments on several real-world datasets.""","""This paper proposes to combine FMs and GNNs. All reviewers voted reject, as the paper lacks experiments (eg ablation studies) and novelty. Writing can be significant improved - some information is missing. Authors did not respond to reviewers questions and concerns. For this reason, I recommend reject. """
1621,"""Detecting malicious PDF using CNN""","['Cybersecurity', 'Convolutional Neural Network', 'Malware']","""Malicious PDF files represent one of the biggest threats to computer security. To detect them, significant research has been done using handwritten signatures or machine learning based on manual feature extraction. Those approaches are both time-consuming, requires significant prior knowledge and the list of features has to be updated with each newly discovered vulnerability. In this work, we propose a novel algorithm that uses a Convolutional Neural Network (CNN) on the byte level of the file, without any handcrafted features. We show, using a data set of 130000 files, that our approach maintains a high detection rate (96%) of PDF malware and even detects new malicious files, still undetected by most antiviruses. Using automatically generated features from our CNN network, and applying a clustering algorithm, we also obtain high similarity between the antiviruses labels and the resulting clusters.""","""This submission addresses the problem of detecting malicious PDF files. The proposed solution trains existing CNN architectures on a collected dataset and verifies improved performance over available antivirus software. There were a number of concerns raised about this work. The main concern the reviewers had with this submission is lack of novelty. The issue is that the paper tackles a standard supervised classification problem which has been extensively explored in the literature and applies an off-the-shelf classification model. Though the particular application has seen less attention in the ICLR community, the problem setting and solution are well known. Thus, the contribution of the work is not sufficient for acceptance. """
1622,"""GraphQA: Protein Model Quality Assessment using Graph Convolutional Network""","['Protein Quality Assessment', 'Graph Networks', 'Representation Learning']","""Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure. Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible. Alternatively, protein folding can be modeled using computational methods, which however are not guaranteed to always produce optimal results. GraphQA is a graph-based method to estimate the quality of protein models, that possesses favorable properties such as representation learning, explicit modeling of both sequential and 3D structure, geometric invariance and computational efficiency. In this work, we demonstrate significant improvements of the state-of-the-art for both hand-engineered and representation-learning approaches, as well as carefully evaluating the individual contributions of GraphQA.""","""This paper introduces an approach for estimating the quality of protein models. The proposed method consists in using graph convolutional networks (GCNs) to learn a representation of protein models and predict both a local and a global quality score. Experiments show that the proposed approach performs better than methods based on 1D and 3D CNNs. Overall, this is a borderline paper. The improvement over state of the art for this specific application is noticeable. However, a major drawback is the lack of methodological novelty, the proposed solution being a direct application of GCNs. It does not bring new insights in representation learning. The contribution would therefore be of interest to a limited audience, in light of which I recommend to reject this paper."""
1623,"""Predictive Coding for Boosting Deep Reinforcement Learning with Sparse Rewards""","['reinforcement learning', 'representation learning', 'reward shaping', 'predictive coding']","""While recent progress in deep reinforcement learning has enabled robots to learn complex behaviors, tasks with long horizons and sparse rewards remain an ongoing challenge. In this work, we propose an effective reward shaping method through predictive coding to tackle sparse reward problems. By learning predictive representations offline and using these representations for reward shaping, we gain access to reward signals that understand the structure and dynamics of the environment. In particular, our method achieves better learning by providing reward signals that 1) understand environment dynamics 2) emphasize on features most useful for learning 3) resist noise in learned representations through reward accumulation. We demonstrate the usefulness of this approach in different domains ranging from robotic manipulation to navigation, and we show that reward signals produced through predictive coding are as effective for learning as hand-crafted rewards.""","""The paper proposes to use the representation learned via CPC to do reward shaping via clustering the embedding and providing a reward based on the distance from the goal. The reviewers point out some conceptual issues with the paper, the key one being that the method is contingent on a random policy being able to reach the goal, which is not true for difficult environments that the paper claims to be motivated by. One reviewer noted limited experiment runs and lack of comparisons with other reward shaping methods. I recommend rejection, but hope the authors find the feedback helpful and submit a future version elsewhere."""
1624,"""At Stability's Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks?""","['implicit bias', 'stability', 'neural networks', 'generalization gap', 'asynchronous SGD']","""Background: Recent developments have made it possible to accelerate neural networks training significantly using large batch sizes and data parallelism. Training in an asynchronous fashion, where delay occurs, can make training even more scalable. However, asynchronous training has its pitfalls, mainly a degradation in generalization, even after convergence of the algorithm. This gap remains not well understood, as theoretical analysis so far mainly focused on the convergence rate of asynchronous methods. Contributions: We examine asynchronous training from the perspective of dynamical stability. We find that the degree of delay interacts with the learning rate, to change the set of minima accessible by an asynchronous stochastic gradient descent algorithm. We derive closed-form rules on how the learning rate could be changed, while keeping the accessible set the same. Specifically, for high delay values, we find that the learning rate should be kept inversely proportional to the delay. We then extend this analysis to include momentum. We find momentum should be either turned off, or modified to improve training stability. We provide empirical experiments to validate our theoretical findings.""","""The paper considers the problem of training neural networks asynchronously, and the gap in generalization due to different local minima being accessible with different delays. The authors derive a theoretical model for the delayed gradients, which provide prescriptions for setting the learning rate and momentum. All reviewers agreed that this a nice paper with valuable theoretical and empirical contributions. """
1625,""" pseudo-formula Adversarial Robustness Certificates: a Randomized Smoothing Approach""",[],"""Robustness is an important property to guarantee the security of machine learning models. It has recently been demonstrated that strong robustness certificates can be obtained on ensemble classifiers generated by input randomization. However, tight robustness certificates are only known for symmetric norms including pseudo-formula and pseudo-formula , while for asymmetric norms like pseudo-formula , the existing techniques do not apply. By converting the likelihood ratio into a one-dimensional mixed random variable, we derive the first tight pseudo-formula robustness certificate under isotropic Laplace distributions. Empirically, the deep networks smoothed by Laplace distributions yield the state-of-the-art certified robustness in pseudo-formula norm on CIFAR-10 and ImageNet. ""","""After reading the author's response, all the reviewers agree that this paper is an incremental work. The presentation need to be polished before publish."""
1626,"""Generalized Natural Language Grounded Navigation via Environment-agnostic Multitask Learning""","['Natural Language Grounded Navigation', 'Multitask Learning', 'Agnostic Learning']","""Recent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog. However, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments. In order to close the gap between seen and unseen environments, we aim at learning a generalizable navigation model from two novel perspectives: (1) we introduce a multitask navigation model that can be seamlessly trained on both Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks, which benefits from richer natural language guidance and effectively transfers knowledge across tasks; (2) we propose to learn environment-agnostic representations for navigation policy that are invariant among environments, thus generalizing better on unseen environments. Extensive experiments show that our environment-agnostic multitask navigation model significantly reduces the performance gap between seen and unseen environments and outperforms the baselines on unseen environments by 16% (relative measure on success rate) on VLN and 120% (goal progress) on NDH, establishing the new state of the art for NDH task. ""","""The paper proposes a multitask navigation model that can be trained on both vision-language navigation (VLN) and navigation from dialog history (NDH) tasks. The authors provide experiments that demonstrate that their model can outperformance single-task baseline models. The paper received borderline scores with two weak accept and one weak reject. Overall, the reviewers found the paper to be well-written and easy to understand, with thorough experiments. The reviewers had minor concerns about the following: 1. The generalizability of the work. No results are reported on the test set, only on val. 2. The gains for val unseen are pretty small and there are other models (e.g. Ke et al, Tan et al) that have better results. Would the proposed environment-agnostic multitask learning be able to improve those models as well? Or is the gains limited to having a weak baseline? 3. It's unclear if the gains are due to the multitasking or just having more data available to train on. 4. There are some minor issues with the misspellings/typos. Some examples are given: Page 1: ""Manolis Savva* et al"" --> ""Savva et al"" Page 5: ""x_1, x2, ..., x_3"" --> Should the x_3 be something like x_k where k is the length of the utterance? The AC agrees with the reviewers that the paper is interesting and is mostly solid work. The AC also feels that there are some valid concerns about the generalizability of the work and that the paper would benefit from a more careful consideration of the issues raised by the reviewers. The authors are encouraged to refine the work and resubmit."""
1627,"""Data Augmentation in Training CNNs: Injecting Noise to Images""","['deep learning', 'data augmentation', 'convolutional neural networks', 'noise', 'image processing', 'SSIM']","""Noise injection is a fundamental tool for data augmentation, and yet there is no widely accepted procedure to incorporate it with learning frameworks. This study analyzes the effects of adding or applying different noise models of varying magnitudes to Convolutional Neural Network (CNN) architectures. Noise models that are distributed with different density functions are given common magnitude levels via Structural Similarity (SSIM) metric in order to create an appropriate ground for comparison. The basic results are conforming with the most of the common notions in machine learning, and also introduces some novel heuristics and recommendations on noise injection. The new approaches will provide better understanding on optimal learning procedures for image classification.""","""This paper studies the effect of various data augmentation methods on image classification tasks. The authors propose the structural similarity as a measure of the magnitude of the various types of data augmentation noise they consider and argue that it is outperforms PSNR as a measure of the intensity of the noise. The authors performed an empirical analysis showing that speckle noise leads to improved CNN models on two subsets of ImageNet. While there is merit in thoroughly analysing data augmentation schemes for training CNNs, the reviewers argued that the main claims of the work were not substantiated and the raised issues were not addressed in the rebuttal. I will hence recommend rejection of this paper. """
1628,"""The divergences minimized by non-saturating GAN training""",['GAN'],"""Interpreting generative adversarial network (GAN) training as approximate divergence minimization has been theoretically insightful, has spurred discussion, and has lead to theoretically and practically interesting extensions such as f-GANs and Wasserstein GANs. For both classic GANs and f-GANs, there is an original variant of training and a ""non-saturating"" variant which uses an alternative form of generator gradient. The original variant is theoretically easier to study, but for GANs the alternative variant performs better in practice. The non-saturating scheme is often regarded as a simple modification to deal with optimization issues, but we show that in fact the non-saturating scheme for GANs is effectively optimizing a reverse KL-like f-divergence. We also develop a number of theoretical tools to help compare and classify f-divergences. We hope these results may help to clarify some of the theoretical discussion surrounding the divergence minimization view of GAN training.""","""As the reviewers point out, the core contribution might be potentially important but the current execution of the paper makes it difficult to gauge this importance. In the light of this, this paper does not seem ready for appearance in a conference like ICLR."""
1629,"""AutoLR: A Method for Automatic Tuning of Learning Rate""","['Automatic Learning Rate', 'Deep Learning', 'Generalization', 'Stochastic Optimization']","""One very important hyperparameter for training deep neural networks is the learning rate of the optimizer. The choice of learning rate schedule determines the computational cost of getting close to a minima, how close you actually get to the minima, and most importantly the kind of local minima (wide/narrow) attained. The kind of minima attained has a significant impact on the generalization accuracy of the network. Current systems employ hand tuned learning rate schedules, which are painstakingly tuned for each network and dataset. Given that the state space of schedules is huge, finding a satisfactory learning rate schedule can be very time consuming. In this paper, we present AutoLR, a method for auto-tuning the learning rate as training proceeds. Our method works with any optimizer, and we demonstrate results on SGD, Momentum, and Adam optimizers. We extensively evaluate AutoLR on multiple datasets, models, and across multiple optimizers. We compare favorably against state of the art learning rate schedules for the given dataset and models, including for ImageNet on Resnet-50, Cifar-10 on Resnet-18, and SQuAD fine-tuning on BERT. For example, AutoLR achieves an EM score of 81.2 on SQuAD v1.1 with BERT_BASE compared to 80.8 reported in (Devlin et al. (2018)) by just auto-tuning the learning rate schedule. To the best of our knowledge, this is the first automatic learning rate tuning scheme to achieve state of the art generalization accuracy on these datasets with the given models. ""","""The authors propose a method for automatic tuning of learning rates. The reviewers liked the idea but felt that there are much more extensive experiments to be done especially better baselines. Also, clarifying what aspect is automated is important, because no method can be truly automatic: they all have some hyperparameters. """
1630,"""Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness""","['Ensemble attack', 'adversarial training', 'diversity promotion']","""The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations. Nonetheless, min-max optimization beyond the purpose of AT has not been rigorously explored in the research of adversarial attack and defense. In particular, given a set of risk sources (domains), minimizing the maximal loss induced from the domain set can be reformulated as a general min-max problem that is different from AT. Examples of this general formulation include attacking model ensembles, devising universal perturbation under multiple inputs or data transformations, and generalized AT over different types of attack models. We show that these problems can be solved under a unified and theoretically principled min-max optimization framework. We also show that the self-adjusted domain weights learned from our method provides a means to explain the difficulty level of attack and defense over multiple domains. Extensive experiments show that our approach leads to substantial performance improvement over the conventional averaging strategy.""","""This submission studies an interesting problem. However, as some of the reviewers point out, the novelty of the proposed contributions is fairly limited."""
1631,"""Visual Hide and Seek""","['Embodied Learning', 'Self-supervised Learning', 'Reinforcement Learning']","""We train embodied agents to play Visual Hide and Seek where a prey must navigate in a simulated environment in order to avoid capture from a predator. We place a variety of obstacles in the environment for the prey to hide behind, and we only give the agents partial observations of their environment using an egocentric perspective. Although we train the model to play this game from scratch without any prior knowledge of its visual world, experiments and visualizations show that a representation of other agents automatically emerges in the learned representation. Furthermore, we quantitatively analyze how agent weaknesses, such as slower speed, effect the learned policy. Our results suggest that, although agent weaknesses make the learning problem more challenging, they also cause useful features to emerge in the representation.""","""This paper proposes a technique for training embodied agents to play Visual Hide and Seek where a prey must navigate in a simulated environment in order to avoid capture from a predator. The model is trained to play this game from scratch without any prior knowledge of its visual world, and experiments and visualizations show that a representation of other agents automatically emerges in the learned representation. Results suggest that, although agent weaknesses make the learning problem more challenging, they also cause useful features to emerge in the representation. While reviewers found the paper explores an interesting direction, concerns were raised that many claims are unjustified. For example, in the discussion phase a reviewer asked how can one infer ""hider learns to first turn away from the seeker then run away"" from a single transition frequency? Or, the rebuttal mentions ""The agent with visibility reward does not get the chance to learn features of self-visibility because of the limited speed hence the model received samples with significantly less variation of its self-visibility, which makes learning to discriminate self-visibility difficult"". What is the justification for this? There could be more details in the paper and I'd also like to know if these findings were reached purely by looking at the histograms or by combining visual analysis with the histograms. I suggest authors address these concerns and provide quantitative results for all of the claims in an improved iteration of this paper. """
1632,"""Watch the Unobserved: A Simple Approach to Parallelizing Monte Carlo Tree Search""","['parallel Monte Carlo Tree Search (MCTS)', 'Upper Confidence bound for Trees (UCT)', 'Reinforcement Learning (RL)']","""Monte Carlo Tree Search (MCTS) algorithms have achieved great success on many challenging benchmarks (e.g., Computer Go). However, they generally require a large number of rollouts, making their applications costly. Furthermore, it is also extremely challenging to parallelize MCTS due to its inherent sequential nature: each rollout heavily relies on the statistics (e.g., node visitation counts) estimated from previous simulations to achieve an effective exploration-exploitation tradeoff. In spite of these difficulties, we develop an algorithm, WU-UCT, to effectively parallelize MCTS, which achieves linear speedup and exhibits only limited performance loss with an increasing number of workers. The key idea in WU-UCT is a set of statistics that we introduce to track the number of on-going yet incomplete simulation queries (named as unobserved samples). These statistics are used to modify the UCT tree policy in the selection steps in a principled manner to retain effective exploration-exploitation tradeoff when we parallelize the most time-consuming expansion and simulation steps. Experiments on a proprietary benchmark and the Atari Game benchmark demonstrate the linear speedup and the superior performance of WU-UCT comparing to existing techniques.""","""The paper investigates parallelizing MCTS. The authors propose a simple method based on only updating the exploration bonus in (P)-UCT by taking into account the number of currently ongoing / unfinished simulations. The approach is extensively tested on a variety of environments, notably including ATARI games. This is a good paper. The approach is simple, well motivated and effective. The experimental results are convincing and the authors made a great effort to further improve the paper during the rebuttal period. I recommend an oral presentation of this work, as MCTS has become a core method in RL and planning, and therefore I expect a lot of interest in the community for this work. """
1633,"""Self-Supervised State-Control through Intrinsic Mutual Information Rewards""","['Intrinsic Reward', 'Deep Reinforcement Learning', 'Skill Discovery', 'Mutual Information', 'Self-Supervised Learning', 'Unsupervised Learning']","""Learning to discover useful skills without a manually-designed reward function would have many applications, yet is still a challenge for reinforcement learning. In this paper, we propose Mutual Information-based State-Control (MISC), a new self-supervised Reinforcement Learning approach for learning to control states of interest without any external reward function. We formulate the intrinsic objective as rewarding the skills that maximize the mutual information between the context states and the states of interest. For example, in robotic manipulation tasks, the context states are the robot states and the states of interest are the states of an object. We evaluate our approach for different simulated robotic manipulation tasks from OpenAI Gym. We show that our method is able to learn to manipulate the object, such as pushing and picking up, purely based on the intrinsic mutual information rewards. Furthermore, the pre-trained policy and mutual information discriminator can be used to accelerate learning to achieve high task rewards. Our results show that the mutual information between the context states and the states of interest can be an effective ingredient for overcoming challenges in robotic manipulation tasks with sparse rewards. A video showing experimental results is available at pseudo-url""","""The paper considers a setting where the state of a (robotics) environment can be divided roughly into ""context states"" (such as variables under the robot's direct control) and ""states of interest"" (such as the state variables of an object to be manipulated), and learn skills by maximizing a lower bound on the mutual information between these two components of the state. Experimental results compare to DDPG/SAC, and show that the learned discriminator is somewhat transferable between environments. Reviewers found the assumptions necessary on the degree of domain knowledge to be quite strong and domain-specific, and that even after revision, the authors were understating the degree to which this was necessary. The paper did improve based on reviewer feedback, and while R3 was more convinced by the follow-up experiments (though remarked that requiring environment variations to obtain new skills was a ""significant step backward from things like [Diversity is All You Need]""), the other reviewers remained unconvinced regarding domain knowledge and in particular how it interacts with the scalability of the proposed method to complex environments/robots. Given the reviewers' concerns regarding applicability and scalability, I recommend rejection in its present form. A future revision may be able to more convincingly demonstrate that limitations based on domain knowledge are less significant than they appear."""
1634,"""Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML""","['deep learning analysis', 'representation learning', 'meta-learning', 'few-shot learning']","""An important research direction in machine learning has centered around developing meta-learning algorithms to tackle few-shot learning. An especially successful algorithm has been Model Agnostic Meta-Learning (MAML), a method that consists of two optimization loops, with the outer loop finding a meta-initialization, from which the inner loop can efficiently learn new tasks. Despite MAML's popularity, a fundamental open question remains -- is the effectiveness of MAML due to the meta-initialization being primed for rapid learning (large, efficient changes in the representations) or due to feature reuse, with the meta initialization already containing high quality features? We investigate this question, via ablation studies and analysis of the latent representations, finding that feature reuse is the dominant factor. This leads to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML where we remove the inner loop for all but the (task-specific) head of the underlying neural network. ANIL matches MAML's performance on benchmark few-shot image classification and RL and offers computational improvements over MAML. We further study the precise contributions of the head and body of the network, showing that performance on the test tasks is entirely determined by the quality of the learned features, and we can remove even the head of the network (the NIL algorithm). We conclude with a discussion of the rapid learning vs feature reuse question for meta-learning algorithms more broadly.""","""Paper received mixed reviews: WR (R1), A (R2 and R3). AC has read reviews/rebuttal and examined paper. AC agrees that R1's concerns are misplaced and feels the paper should be accepted. """
1635,"""Auto Completion of User Interface Layout Design Using Transformer-Based Tree Decoders""","['Transformer', 'decoder', 'user interface', 'layout design']","""It has been of increasing interest in the field to develop automatic machineries to facilitate the design process. In this paper, we focus on assisting graphical user interface (UI) layout design, a crucial task in app development. Given a partial layout, which a designer has entered, our model learns to complete the layout by predicting the remaining UI elements with a correct position and dimension as well as the hierarchical structures. Such automation will significantly ease the effort of UI designers and developers. While we focus on interface layout prediction, our model can be generally applicable for other layout prediction problems that involve tree structures and 2-dimensional placements. Particularly, we design two versions of Transformer-based tree decoders: Pointer and Recursive Transformer, and experiment with these models on a public dataset. We also propose several metrics for measuring the accuracy of tree prediction and ground these metrics in the domain of user experience. These contribute a new task and methods to deep learning research.""","""The paper introduces an interesting application of GNNs, but the reviewers find that the contribution is too limited and the motivation is too weak."""
1636,"""``""Best-of-Many-Samples"" Distribution Matching""","['Distribution Matching', 'Generative Adversarial Networks', 'Variational Autoencoders']","""Generative Adversarial Networks (GANs) can achieve state-of-the-art sample quality in generative modelling tasks but suffer from the mode collapse problem. Variational Autoencoders (VAE) on the other hand explicitly maximize a reconstruction-based data log-likelihood forcing it to cover all modes, but suffer from poorer sample quality. Recent works have proposed hybrid VAE-GAN frameworks which integrate a GAN-based synthetic likelihood to the VAE objective to address both the mode collapse and sample quality issues, with limited success. This is because the VAE objective forces a trade-off between the data log-likelihood and divergence to the latent prior. The synthetic likelihood ratio term also shows instability during training. We propose a novel objective with a ``""Best-of-Many-Samples"" reconstruction cost and a stable direct estimate of the synthetic likelihood. This enables our hybrid VAE-GAN framework to achieve high data log-likelihood and low divergence to the latent prior at the same time and shows significant improvement over both hybrid VAE-GANS and plain GANs in mode coverage and quality.""","""This paper proposed an improvement on VAE-GAN which draws multiple samples from the reparameterized latent distribution for each inferred q(z|x), and only backpropagates reconstruction error for the resulting G(z) which has the lowest reconstruction. While the idea is interesting, the novelty is not high compared with existing similar works, and the improvement is not significant."""
1637,"""Improving Confident-Classifiers For Out-of-distribution Detection""","['Out-of-distribution detection', 'Manifold', 'Nullspace', 'Variational Auto-encoder', 'GAN', 'Confident-classifier']","""Discriminatively trained neural classifiers can be trusted, only when the input data comes from the training distribution (in-distribution). Therefore, detecting out-of-distribution (OOD) samples is very important to avoid classification errors. In the context of OOD detection for image classification, one of the recent approaches proposes training a classifier called confident-classifier by minimizing the standard cross-entropy loss on in-distribution samples and minimizing the KLdivergence between the predictive distribution of OOD samples in the low-densityboundary of in-distribution and the uniform distribution (maximizing the entropy of the outputs). Thus, the samples could be detected as OOD if they have low confidence or high entropy. In this paper, we analyze this setting both theoretically and experimentally. We also propose a novel algorithm to generate theboundary OOD samples to train a classifier with an explicit reject class for OOD samples. We compare our approach against several recent classifier-based OOD detectors including the confident-classifiers on MNIST and Fashion-MNISTdatasets. Overall the proposed approach consistently performs better than others across most of the experiments.""","""The paper improves the previous method for detecting out-of-distribution (OOD) samples. Some theoretical analysis/motivation is interesting as pointed out by a reviewer. I think the paper is well written in overall and has some potential. However, as all reviewers pointed out, I think experimental results are quite below the borderline to be accepted (considering the ICLR audience), i.e., the authors should consider non-MNIST-like and more realistic datasets. This indicates the limitation on the scalability of the proposed method. Hence, I recommend rejection."""
1638,"""On the Evaluation of Conditional GANs""","['FJD', 'Frechet Joint Distance', 'GAN', 'cGAN', 'generative adversarial network', 'conditional', 'evaluation', 'metric', 'FID', 'Frechet Inception Distance']","""Conditional Generative Adversarial Networks (cGANs) are finding increasingly widespread use in many application domains. Despite outstanding progress, quantitative evaluation of such models often involves multiple distinct metrics to assess different desirable properties, such as image quality, conditional consistency, and intra-conditioning diversity. In this setting, model benchmarking becomes a challenge, as each metric may indicate a different ""best"" model. In this paper, we propose the Frechet Joint Distance (FJD), which is defined as the Frechet distance between joint distributions of images and conditioning, allowing it to implicitly capture the aforementioned properties in a single metric. We conduct proof-of-concept experiments on a controllable synthetic dataset, which consistently highlight the benefits of FJD when compared to currently established metrics. Moreover, we use the newly introduced metric to compare existing cGAN-based models for a variety of conditioning modalities (e.g. class labels, object masks, bounding boxes, images, and text captions). We show that FJD can be used as a promising single metric for model benchmarking.""","""The paper presents an extension of FID for conditional generation settings. While it's an important problem to address, the reviewers were concerned about the novelty and advantage of the proposed method over the existing methods. The evaluation is reported on toy datasets, and the significance is limited."""
1639,"""Lookahead: A Far-sighted Alternative of Magnitude-based Pruning""",['network magnitude-based pruning'],"""Magnitude-based pruning is one of the simplest methods for pruning neural networks. Despite its simplicity, magnitude-based pruning and its variants demonstrated remarkable performances for pruning modern architectures. Based on the observation that magnitude-based pruning indeed minimizes the Frobenius distortion of a linear operator corresponding to a single layer, we develop a simple pruning method, coined lookahead pruning, by extending the single layer optimization to a multi-layer optimization. Our experimental results demonstrate that the proposed method consistently outperforms magnitude-based pruning on various networks, including VGG and ResNet, particularly in the high-sparsity regime. See pseudo-url for codes.""","""This paper introduces a pruning criterion which is similar to magnitude-based pruning, but which accounts for the interactions between layers. The reviewers have gone through the paper carefully, and after back-and-forth with the authors, they are all satisfied with the paper and support acceptance."""
1640,"""Online and stochastic optimization beyond Lipschitz continuity: A Riemannian approach""","['Online optimization', 'stochastic optimization', 'Poisson inverse problems']","""Motivated by applications to machine learning and imaging science, we study a class of online and stochastic optimization problems with loss functions that are not Lipschitz continuous; in particular, the loss functions encountered by the optimizer could exhibit gradient singularities or be singular themselves. Drawing on tools and techniques from Riemannian geometry, we examine a RiemannLipschitz (RL) continuity condition which is tailored to the singularity landscape of the problems loss functions. In this way, we are able to tackle cases beyond the Lipschitz framework provided by a global norm, and we derive optimal regret bounds and last iterate convergence results through the use of regularized learning methods (such as online mirror descent). These results are subsequently validated in a class of stochastic Poisson inverse problems that arise in imaging science.""","""This is a mostly theoretical paper concerning online and stochastic optimization for convex loss functions that are not Lipschitz continuous. The authors propose a method for replacing the Lipschitz continuity condition with a more general Riemann-Lipschitz continuity condition, under which they are able to provide regret bounds for the online mirror descent algorithm, as well as extending to the stochastic setting. They follow up by evaluating their algorithm on Poisson inverse problems. The reviewers all agree that this is a well-written paper that makes a clear contribution. To the best of our knowledge, the theory and derivations are correct, and the authors were highly responsive to reviewers (minor) comments. Im therefore happy to recommend acceptance."""
1641,"""Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness""","['mode connectivity', 'adversarial robustness', 'backdoor attack', 'error-injection attack', 'evasion attacks', 'loss landscapes']","""Mode connectivity provides novel geometric insights on analyzing loss landscapes and enables building high-accuracy pathways between well-trained neural networks. In this work, we propose to employ mode connectivity in loss landscapes to study the adversarial robustness of deep neural networks, and provide novel methods for improving this robustness. Our experiments cover various types of adversarial attacks applied to different network architectures and datasets. When network models are tampered with backdoor or error-injection attacks, our results demonstrate that the path connection learned using limited amount of bonafide data can effectively mitigate adversarial effects while maintaining the original accuracy on clean data. Therefore, mode connectivity provides users with the power to repair backdoored or error-injected models. We also use mode connectivity to investigate the loss landscapes of regular and robust models against evasion attacks. Experiments show that there exists a barrier in adversarial robustness loss on the path connecting regular and adversarially-trained models. A high correlation is observed between the adversarial robustness loss and the largest eigenvalue of the input Hessian matrix, for which theoretical justifications are provided. Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.""","""This paper investigates improving robustness to adversarial examples by using mode connectivity in the loss function. The paper received three reviews by experts working in related areas. In a strongly positive review, R1 recommends Accept, but gives some specific technical questions. The authors submitted a response to these questions; in post-review comments, R1 was satisfied and maintained the highly positive review. R2 recommended Weak Reject and also asked specific technical questions, including some additional details on experiments, statistical significance, etc. The author response also convincingly responded to these concerns. R3 recommended Weak Accept but suggested improving the writing, which authors have done in their revision. Given that R1 and R3 are highly positive and R2's concerns were addressed in the response and revision, we now recommend (weak) Accept."""
1642,"""The Curious Case of Neural Text Degeneration""","['generation', 'text', 'NLG', 'NLP', 'natural language', 'natural language generation', 'language model', 'neural', 'neural language model']","""Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration output text that is bland, incoherent, or gets stuck in repetitive loops. To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models than previous decoding strategies. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass. To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is currently the best available decoding strategy for generating long-form text that is both high-quality as measured by human evaluation and as diverse as human-written text.""","""This paper presents nucleus sampling, a sampling method that truncates the tail of a probability distribution and samples from a dynamic nucleus containing the majority of the probability mass. Likelihood and human evaluations show that the proposed method is a better alternative to a standard sampling method and top-k sampling. This is a well-written paper and I think the proposed sampling method will be useful in language modeling. All reviewers agree that the paper addresses an important problem. Two reviewers have concerns regarding the technical contribution of the paper (i.e., nucleus sampling is a straightforward extension of top-k sampling), and whether it is enough for publications at a venue such as ICLR. R2 suggests to have a better theoretical framework for nucleus sampling. I think these are valid concerns. However, given the potential widespread application of the proposed method and the strong empirical results, I recommend to accept the paper. Also, a minor comment, I think there is something wrong with your style file (e.g., the bottom margin appears too large compared to other submissions)."""
1643,"""Fast Task Inference with Variational Intrinsic Successor Features""","['Reinforcement Learning', 'Variational Intrinsic Control', 'Successor Features']","""It has been established that diverse behaviors spanning the controllable subspace of a Markov decision process can be trained by rewarding a policy for being distinguishable from other policies. However, one limitation of this formulation is the difficulty to generalize beyond the finite set of behaviors being explicitly learned, as may be needed in subsequent tasks. Successor features provide an appealing solution to this generalization problem, but require defining the reward function as linear in some grounded feature space. In this paper, we show that these two techniques can be combined, and that each method solves the other's primary limitation. To do so we introduce Variational Intrinsic Successor FeatuRes (VISR), a novel algorithm which learns controllable features that can be leveraged to provide enhanced generalization and fast task inference through the successor features framework. We empirically validate VISR on the full Atari suite, in a novel setup wherein the rewards are only exposed briefly after a long unsupervised phase. Achieving human-level performance on 12 games and beating all baselines, we believe VISR represents a step towards agents that rapidly learn from limited feedback.""","""This work uses a variational autoencoder-based approach to combine the benefits of recent methods that learn policies with behavioral diversity with the advantages of successor representations, addressing the generalization and slow inference problems of competing methods such as DIAYN. After discussion of the author rebuttal, the reviewers all agreed on the significant contribution of the paper and that concerns about clarity were sufficiently addressed. Thus, I recommend this paper for acceptance."""
1644,"""Why Learning of Large-Scale Neural Networks Behaves Like Convex Optimization""","['function space', 'canonical space', 'neural networks', 'stochastic gradient descent', 'disparity matrix']","""In this paper, we present some theoretical work to explain why simple gradient descent methods are so successful in solving non-convex optimization problems in learning large-scale neural networks (NN). After introducing a mathematical tool called canonical space, we have proved that the objective functions in learning NNs are convex in the canonical model space. We further elucidate that the gradients between the original NN model space and the canonical space are related by a pointwise linear transformation, which is represented by the so-called disparity matrix. Furthermore, we have proved that gradient descent methods surely converge to a global minimum of zero loss provided that the disparity matrices maintain full rank. If this full-rank condition holds, the learning of NNs behaves in the same way as normal convex optimization. At last, we have shown that the chance to have singular disparity matrices is extremely slim in large NNs. In particular, when over-parameterized NNs are randomly initialized, the gradient decent algorithms converge to a global minimum of zero loss in probability. ""","""This paper studies the problem of optimization for neural networks, by comparing the optimization problem in parameter space with the corresponding problem in function space. It argues that overparametrised models leads to a convex problem formulation leading to global optimality. All reviewers agreed that this paper lacks mathematical rigor and novelty relative to the current works on overparametrised neural networks. Its arguments need to be substantially reworked before it can be considered for publication, and as a consequence the AC recommends rejection. """
1645,"""Deep neuroethology of a virtual rodent""","['computational neuroscience', 'motor control', 'deep RL']","""Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems. However, this interaction between fields is less developed in the study of motor control. In this work, we develop a virtual rodent as a platform for the grounded study of motor activity in artificial models of embodied control. We then use this platform to study motor activity across contexts by training a model to solve four complex tasks. Using methods familiar to neuroscientists, we describe the behavioral representations and algorithms employed by different layers of the network using a neuroethological approach to characterize motor activity relative to the rodent's behavior and goals. We find that the model uses two classes of representations which respectively encode the task-specific behavioral strategies and task-invariant behavioral kinematics. These representations are reflected in the sequential activity and population dynamics of neural subpopulations. Overall, the virtual rodent facilitates grounded collaborations between deep reinforcement learning and motor neuroscience.""","""This paper is somewhat unorthodox in what it sets out to do: use neuroscience methods to understand a trained deep network controlling an embodied agent. This is exciting, but the actual training of the virtual rodent and the performance it exhibits is also impressive in its own right. All reviewers liked the papers. The question that recurred among all reviewers was what was actually learned in this analysis. The authors responded to this convincingly by listing a number of interesting findings. I think this paper represents an interesting new direction that many will be interested in."""
1646,"""GPNET: MONOCULAR 3D VEHICLE DETECTION BASED ON LIGHTWEIGHT WHEEL GROUNDING POINT DETECTION NETWORK""","['applications in vision', 'audio', 'speech', 'natural language processing', 'robotics']","""We present a method to infer 3D location and orientation of vehicles on a single image. To tackle this problem, we optimize the mapping relation between the vehicles wheel grounding point on the image and the real location of the wheel in the 3D real world coordinate. Here we also integrate three task priors, including a ground plane constraint and vehicle wheel grounding point position, as well as a small projection error from the image to the ground plane. And a robust light network for grounding point detection in autopilot is proposed based on the vehicle and wheel detection result. In the light grounding point detection network, the DSNT key point regression method is used for balancing the speed of convergence and the accuracy of position, which has been proved more robust and accurate compared with the other key point detection methods. With more, the size of grounding point detection network is less than 1 MB, which can be executed quickly on the embedded environment. The code will be available soon.""","""This paper aims to estimate the 3D location and orientation of vehicle from a 2D image. Instead of using a CNN-based 3D detection pipeline, the authors propose to detect the vehicles wheel grounding points and then using the ground plane constraint for the estimation. All three reviewers provided unanimous rating of rejection. Many concerns are raised by the reviewers, including poor generalization to new situations, small improvement over prior work, low presentation quality, the lack of detailed description of the experiments, etc. The authors did not respond to the reviewers comments. The AC agrees with the reviewers comments, and recommend rejection."""
1647,"""Fast Machine Learning with Byzantine Workers and Servers""","['Distributed machine learning', 'Byzantine resilience', 'Fault tolerance']","""Machine Learning (ML) solutions are nowadays distributed and are prone to various types of component failures, which can be encompassed in so-called Byzantine behavior. This paper introduces LiuBei, a Byzantine-resilient ML algorithm that does not trust any individual component in the network (neither workers nor servers), nor does it induce additional communication rounds (on average), compared to standard non-Byzantine resilient algorithms. LiuBei builds upon gradient aggregation rules (GARs) to tolerate a minority of Byzantine workers. Besides, LiuBei replicates the parameter server on multiple machines instead of trusting it. We introduce a novel filtering mechanism that enables workers to filter out replies from Byzantine server replicas without requiring communication with all servers. Such a filtering mechanism is based on network synchrony, Lipschitz continuity of the loss function, and the GAR used to aggregate workers gradients. We also introduce a protocol, scatter/gather, to bound drifts between models on correct servers with a small number of communication messages. We theoretically prove that LiuBei achieves Byzantine resilience to both servers and workers and guarantees convergence. We build LiuBei using TensorFlow, and we show that LiuBei tolerates Byzantine behavior with an accuracy loss of around 5% and around 24% convergence overhead compared to vanilla TensorFlow. We moreover show that the throughput gain of LiuBei compared to another stateoftheart Byzantineresilient ML algorithm (that assumes network asynchrony) is 70%.""","""This paper is concerned with learning in the context of so-called Byzantine failures. This is relevant for for example distributed computation of gradients of mini-batches and parameter updates. The paper introduces the concept and Byzantine servers and gives theoretical and practical results for algorithm for this setting. The reviewers had a hard time evaluating this paper and the AC was unable to find an expert reviewer. Still, the feedback from the reviewers painted a clear picture that the paper did not do enough to communicate the novel concepts used in the paper. Rejection is recommended with a strong encouragement to use the feedback to improve the paper for the next conference."""
1648,"""On the Tunability of Optimizers in Deep Learning""","['Optimization', 'Benchmarking', 'Hyperparameter optimization']","""There is no consensus yet on the question whether adaptive gradient methods like Adam are easier to use than non-adaptive optimization methods like SGD. In this work, we fill in the important, yet ambiguous concept of ease-of-use by defining an optimizers tunability: How easy is it to find good hyperparameter configurations using automatic random hyperparameter search? We propose a practical and universal quantitative measure for optimizer tunability that can form the basis for a fair optimizer benchmark. Evaluating a variety of optimizers on an extensive set of standard datasets and architectures, we find that Adam is the most tunable for the majority of problems, especially with a low budget for hyperparameter tuning.""","""The paper proposed a new metric to define the quality of optimizers as a weighted average of the scores reached after a certain number of hyperparameters have been tested. While reviewers (and myself) understood the need to better be able to compare optimizers, they failed to be convinced by the proposed solutions. In particular (setting aside several complaints of the reviewers with which I disagree), by defining a very versatile metric, this paper lacks a strong conclusion as the ranking of optimizers would clearly depend on the instantiation of that metric. Although that is to be expected, by the very behaviour of these optimizers, it makes it unclear what the added value of the metric is. As one reviewer pointed out, all the points made could have been similarly made with other, more common plots. Ultimately, it wasn't clear to me what the paper was trying to achieve beyond defining a mathematical formula encompassing all ""standard"" evaluation metric, which I unfortunately see of limited value."""
1649,"""Alternating Recurrent Dialog Model with Large-Scale Pre-Trained Language Models""","['NLP', 'Pre-training', 'GPT-2', 'Text Generation', 'Dialog Generation']","""Existing dialog system models require extensive human annotations and are difficult to generalize to different tasks. The recent success of large pre-trained language models such as BERT and GPT-2 have suggested the effectiveness of incorporating language priors in down-stream NLP tasks. However, how much pre-trained language models can help dialog response generation is still under exploration. In this paper, we propose a simple, general, and effective framework: Alternating Recurrent Dialog Model (ARDM). ARDM models each speaker separately and takes advantage of the large pre-trained language model. It requires no supervision from human annotations such as belief states or dialog acts to achieve effective conversations. ARDM outperforms or is on par with state-of-the-art methods on two popular task-oriented dialog datasets: CamRest676 and MultiWOZ. Moreover, we can generalize ARDM to more challenging, non-collaborative tasks such as persuasion. In persuasion tasks, ARDM is capable of generating human-like responses to persuade people to donate to a charity.""","""This paper proposes an alternating dialog model based on transformers and GPT-2, that model each conversation side separately and aim to eliminate human supervision. Results on two dialog corpora are either better than or comparable to state-of-the-art. Two of the reviewers raise concerns about the novel contributions of the paper, and did not change their scores after authors' rebuttal. Furthermore, one reviewer raises concerns about the lack of detailed experiments aiming to explain where the improvements come from. Hence, I suggest rejecting the paper."""
1650,"""Teacher-Student Compression with Generative Adversarial Networks""",[],"""More accurate machine learning models often demand more computation and memory at test time, making them difficult to deploy on CPU- or memory-constrained devices. Teacher-student compression (TSC), also known as distillation, alleviates this burden by training a less expensive student model to mimic the expensive teacher model while maintaining most of the original accuracy. However, when fresh data is unavailable for the compression task, the teacher's training data is typically reused, leading to suboptimal compression. In this work, we propose to augment the compression dataset with synthetic data from a generative adversarial network (GAN) designed to approximate the training data distribution. Our GAN-assisted TSC (GAN-TSC) significantly improves student accuracy for expensive models such as large random forests and deep neural networks on both tabular and image datasets. Building on these results, we propose a comprehensive metricthe TSC Scoreto evaluate the quality of synthetic datasets based on their induced TSC performance. The TSC Score captures both data diversity and class affinity, and we illustrate its benefits over the popular Inception Score in the context of image classification.""","""This paper uses GAN for data augmentation to improve the performance of knowledge distillation. Reviewers and AC commonly think the paper suffers from limited novelty and insufficient experimental supports/details. Hence, I recommend rejection."""
1651,"""Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators""","['theory for deep learning', 'convolutional network', 'deep image prior', 'deep decoder', 'dynamics of gradient descent', 'overparameterization']","""Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. A major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this architectural bias towards natural images is that one can remove noise and corruptions from a natural image without using any training data, by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the corrupted image. While this over-parameterized network can fit the corrupted image perfectly, surprisingly after a few iterations of gradient descent it generates an almost uncorrupted image. This intriguing phenomenon enables state-of-the-art CNN-based denoising and regularization of other inverse problems. In this paper, we attribute this effect to a particular architectural choice of convolutional networks, namely convolutions with fixed interpolating filters. We then formally characterize the dynamics of fitting a two-layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. Our proof relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion. ""","""This paper studies the question of why a network trained to reproduce a single image often de-noises the image early in training. This an interesting question and, post discussion, all three reviewers agree that it will be of general interest to the community and is worth publishing. Therefore I recommend it be accepted. """
1652,"""LambdaNet: Probabilistic Type Inference using Graph Neural Networks""","['Type inference', 'Graph neural network', 'Programming languages', 'Pointer network']","""As gradual typing becomes increasingly popular in languages like Python and TypeScript, there is a growing need to infer type annotations automatically. While type annotations help with tasks like code completion and static error catching, these annotations cannot be fully inferred by compilers and are tedious to annotate by hand. This paper proposes a probabilistic type inference scheme for TypeScript based on a graph neural network. Our approach first uses lightweight source code analysis to generate a program abstraction called a type dependency graph, which links type variables with logical constraints as well as name and usage information. Given this program abstraction, we then use a graph neural network to propagate information between related type variables and eventually make type predictions. Our neural architecture can predict both standard types, like number or string, as well as user-defined types that have not been encountered during training. Our experimental results show that our approach outperforms prior work in this space by 14% (absolute) on library types, while having the ability to make type predictions that are out of scope for existing techniques. ""","""This paper proposes an approach to type inference in dynamically typed languages using graph neural networks. The reviewers (and the area chair) love this novel and useful application of GNNs to a practical problem, the presentation, the results. Clear accept."""
1653,"""On Symmetry and Initialization for Neural Networks""","['Neural Network Theory', 'Symmetry']","""This work provides an additional step in the theoretical understanding of neural networks. We consider neural networks with one hidden layer and show that when learning symmetric functions, one can choose initial conditions so that standard SGD training efficiently produces generalization guarantees. We empirically verify this and show that this does not hold when the initial conditions are chosen at random. The proof of convergence investigates the interaction between the two layers of the network. Our results highlight the importance of using symmetry in the design of neural networks.""","""The two main concerns raised by reviewers is that whether the results are significant, and a potential issue in the proof. While the rebuttal clarified some steps in the proof, the main concerns about the significance remain. The authors are encouraged to make this significance more clear. Note that one reviewer argued theoretical papers are not suitable for ICLR. This is false, as a theoretical understanding of neural networks remains a key research area that is of wide interest to the community. Consequently, this review was not considered in the final evaluation."""
1654,"""Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks""","['meta-learning', 'few-shot learning', 'Bayesian neural network', 'variational inference', 'learning to learn', 'imbalanced and out-of-distribution tasks for few-shot learning']","""While tasks could come with varying the number of instances and classes in realistic settings, the existing meta-learning approaches for few-shot classification assume that number of instances per task and class is fixed. Due to such restriction, they learn to equally utilize the meta-knowledge across all the tasks, even when the number of instances per task and class largely varies. Moreover, they do not consider distributional difference in unseen tasks, on which the meta-knowledge may have less usefulness depending on the task relatedness. To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning within each task. Through the learning of the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or task-specific learning. We formulate this objective into a Bayesian inference framework and tackle it using variational inference. We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on two realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches. Further ablation study confirms the effectiveness of each balancing component and the Bayesian learning framework. ""","""The reviewers generally agreed that the paper presents a compelling method that addresses an important problem. This paper should clearly be accepted, and I would suggest for it to be considered for an oral presentation. I would encourage the authors to take into account the reviewers' suggestions (many of which were already addressed in the rebuttal period) and my own suggestion. The main suggestion I would have in regard to improving the paper is to position it a bit more carefully in regard to prior work on Bayesian meta-learning. This is an active research field, with quite a number of papers. There are two that are especially close to the VI method that the authors are proposing: Gordon et al. and Finn et al. (2018). For example, the graphical model in Figure 2 looks nearly identical to the ones presented in these two prior papers, as does the variational inference procedure. There is nothing wrong with that, but it would be appropriate for the authors to discuss this prior work a bit more diligently -- currently the relationship to these prior works is not at all apparent from their discussion in the related work section. A more appropriate way to present this would be to begin Section 3.2 by stating that this framework follows prior work -- there is nothing wrong with building on prior work, and the significant and important contribution of this paper is no way diminished by being up-front about which parts are inspired by previous papers."""
1655,"""A Simple Dynamic Learning Rate Tuning Algorithm For Automated Training of DNNs""","['adaptive LR tuning algorithm', 'generalization']","""Training neural networks on image datasets generally require extensive experimentation to find the optimal learning rate regime. Especially, for the cases of adversarial training or for training a newly synthesized model, one would not know the best learning rate regime beforehand. We propose an automated algorithm for determining the learning rate trajectory, that works across datasets and models for both natural and adversarial training, without requiring any dataset/model specific tuning. It is a stand-alone, parameterless, adaptive approach with no computational overhead. We theoretically discuss the algorithm's convergence behavior. We empirically validate our algorithm extensively. Our results show that our proposed approach \emph{consistently} achieves top-level accuracy compared to SOTA baselines in the literature in natural training, as well as in adversarial training.""","""This paper proposes an automatic tuning procedure for the learning rate of SGD. Reviewers were in agreement over several of the shortcomings of the paper, in particular its heuristic nature. They also took the time to provide several ways of improving the work which I suggest the authors follow should they decide to resubmit it to a later conference."""
1656,"""Robust Subspace Recovery Layer for Unsupervised Anomaly Detection""","['robust subspace recovery', 'unsupervised anomaly detection', 'outliers', 'latent space', 'autoencoder']","""We propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and removes outliers that lie away from this subspace. It is used within an autoencoder. The encoder maps the data into a latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a ``manifold"" close to the original inliers. Inliers and outliers are distinguished according to the distances between the original and mapped positions (small for inliers and large for outliers). Extensive numerical experiments with both image and document datasets demonstrate state-of-the-art precision and recall. ""","""Three reviewers have assessed this paper and they have scored it as 6/6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission."""
1657,"""Deep Multi-View Learning via Task-Optimal CCA""","['multi-view', 'components analysis', 'CCA', 'representation learning', 'deep learning']","""Canonical Correlation Analysis (CCA) is widely used for multimodal data analysis and, more recently, for discriminative tasks such as multi-view learning; however, it makes no use of class labels. Recent CCA methods have started to address this weakness but are limited in that they do not simultaneously optimize the CCA projection for discrimination and the CCA projection itself, or they are linear only. We address these deficiencies by simultaneously optimizing a CCA-based and a task objective in an end-to-end manner. Together, these two objectives learn a non-linear CCA projection to a shared latent space that is highly correlated and discriminative. Our method shows a significant improvement over previous state-of-the-art (including deep supervised approaches) for cross-view classification (8.5% increase), regularization with a second view during training when only one view is available at test time (2.2-3.2%), and semi-supervised learning (15%) on real data.""","""The main contribution of this paper is the training of a supervised model jointly with deep CCA for improving the representations learned in a setting where the training data is multi-view. The claimed technical contribution is modifications to deep CCA to enable it to play nicely with the minibatch gradient-based training used for the supervised loss. Pros: This is an important problem with many applications. Cons: The novelty is minimal. Some previous work has done joint training of supervised models with CCA, and some has addressed training deep CCA in a stochastic setting. The reviewers (and I) are unconvinced that the differences from previous work are sufficient, and the paper does not carefully compare with the previous work. The contribution to the tasks may be quite significant, however, so the paper may fit in well in an application-oriented conference/journal."""
1658,"""When Covariate-shifted Data Augmentation Increases Test Error And How to Fix It""","['data augmentation', 'adversarial training', 'interpolation', 'overparameterized']","""Empirically, data augmentation sometimes improves and sometimes hurts test error, even when only adding points with labels from the true conditional distribution that the hypothesis class is expressive enough to fit. In this paper, we provide precise conditions under which data augmentation hurts test accuracy for minimum norm estimators in linear regression. To mitigate the failure modes of augmentation, we introduce X-regularization, which uses unlabeled data to regularize the parameters towards the non-augmented estimate. We prove that our new estimator never hurts test error and exhibits significant improvements over adversarial data augmentation on CIFAR-10.""","""This paper describes situations whereby data augmentation (particularly drawn from a true distribution) can lead to increased generalization error even when the model being optimized is appropriately formulated. The authors propose ""X-regularization"" which requires that models trained on standard and augmented data produce similar predictions on unlabeled data. The paper includes a few experiments on a toy staircase regression problem as well as some ResNet experiments on CIFAR-10. This paper received 2 recommendations for rejection, and one weak accept recommendation. After the rebuttal phase, the author who recommended weak acceptance indicated their willingness to let the paper be rejected in light of the other reviews. The reviewer highlighted: ""I think the authors could still to better to relate their theory to practice, and expand on the discussion/presentation of X-regularization."" The main open issue is that the theoretical contributions of the paper are not sufficiently linked to the proposed algorithm."""
1659,"""Group-Connected Multilayer Perceptron Networks""",[],"""Despite the success of deep learning in domains such as image, voice, and graphs, there has been little progress in deep representation learning for domains without a known structure between features. For instance, a tabular dataset of different demographic and clinical factors where the feature interactions are not given as a prior. In this paper, we propose Group-Connected Multilayer Perceptron (GMLP) networks to enable deep representation learning in these domains. GMLP is based on the idea of learning expressive feature combinations (groups) and exploiting them to reduce the network complexity by defining local group-wise operations. During the training phase, GMLP learns a sparse feature grouping matrix using temperature annealing softmax with an added entropy loss term to encourage the sparsity. Furthermore, an architecture is suggested which resembles binary trees, where group-wise operations are followed by pooling operations to combine information; reducing the number of groups as the network grows in depth. To evaluate the proposed method, we conducted experiments on five different real-world datasets covering various application areas. Additionally, we provide visualizations on MNIST and synthesized data. According to the results, GMLP is able to successfully learn and exploit expressive feature combinations and achieve state-of-the-art classification performance on different datasets.""","""The authors propose Group Connected Multilayer Perceptron Networks which allow expressive feature combinations to learn meaningful deep representations. They experiment with different datasets and show that the proposed method gives improved performance. The authors have done a commendable job of replying to the queries of the reviewers and addresses many of their concerns. However, the main concern still remains: The improvements are not very significant on most datasets except the MNIST dataset. I understand the author's argument that other papers have also reported small improvements on these datasets and hence it is ok to report small improvements. However, the reviewers and the AC did not find this argument very convincing. Given that this is not a theoretical paper and that the novelty is not very high (as pointed out by R1) strong empirical results are accepted. Hence, at this point, I recommend that the paper cannot be accepted. """
1660,"""Granger Causal Structure Reconstruction from Heterogeneous Multivariate Time Series""","['causal inference', 'Granger causality', 'time series', 'inductive', 'LSTM', 'attention']","""Granger causal structure reconstruction is an emerging topic that can uncover causal relationship behind multivariate time series data. In many real-world systems, it is common to encounter a large amount of multivariate time series data collected from heterogeneous individuals with sharing commonalities, however there are ongoing concerns regarding its applicability in such large scale complex scenarios, presenting both challenges and opportunities for Granger causal reconstruction. To bridge this gap, we propose a Granger cAusal StructurE Reconstruction (GASER) framework for inductive Granger causality learning and common causal structure detection on heterogeneous multivariate time series. In particular, we address the problem through a novel attention mechanism, called prototypical Granger causal attention. Extensive experiments, as well as an online A/B test on an E-commercial advertising platform, demonstrate the superior performances of GASER.""","""This paper proposes a solution to learn Granger temporal-causal network for multivariate time series by adding attention named prototypical Granger causal attention in LSTM. The work aims to address an important problem. The proposed solution seems effective empirically. However, two major issues have not been fully addressed in the current version: (1) the connection between Granger causality and the attention mechanism is not fully justified; (2) the complex design overkills the whole concept of Granger causality (since its popularity is due to the simplicity). The paper would be a strong publication in the future if the two issues can be addressed in a satisfactory way. """
1661,"""Learning to Defense by Learning to Attack""",[],"""Adversarial training provides a principled approach for training robust neural networks. From an optimization perspective, the adversarial training is essentially solving a minimax robust optimization problem. The outer minimization is trying to learn a robust classifier, while the inner maximization is trying to generate adversarial samples. Unfortunately, such a minimax problem is very difficult to solve due to the lack of convex-concave structure. This work proposes a new adversarial training method based on a generic learning-to-learn (L2L) framework. Specifically, instead of applying the existing hand-designed algorithms for the inner problem, we learn an optimizer, which is parametrized as a convolutional neural network. At the same time, a robust classifier is learned to defense the adversarial attack generated by the learned optimizer. Our experiments over CIFAR-10 and CIFAR-100 datasets demonstrate that the L2L outperforms existing adversarial training methods in both classification accuracy and computational efficiency. Moreover, our L2L framework can be extended to the generative adversarial imitation learning and stabilize the training.""","""This paper considers solving the minimax formulation of adversarial training, where it proposes a new method based on a generic learning-to-learn (L2L) framework. Particularly, instead of applying the existing hand-designed algorithms for the inner problem, it learns an optimizer parametrized as a convolutional neural network. A robust classifier is learned to defense the adversarial attack generated by the learned optimizer. The idea is using L2L is sensible. However, main concerns on empirical studies remain after rebuttal. """
1662,"""The problem with DDPG: understanding failures in deterministic environments with sparse rewards""","['ddpg', 'reinforcement learning', 'deep learning', 'policy gradient']","""In environments with continuous state and action spaces, state-of-the-art actor-critic reinforcement learning algorithms can solve very complex problems, yet can also fail in environments that seem trivial, but the reason for such failures is still poorly understood. In this paper, we contribute a formal explanation of these failures in the particular case of sparse reward and deterministic environments. First, using a very elementary control problem, we illustrate that the learning process can get stuck into a fixed point corresponding to a poor solution. Then, generalizing from the studied example, we provide a detailed analysis of the underlying mechanisms which results in a new understanding of one of the convergence regimes of these algorithms. The resulting perspective casts a new light on already existing solutions to the issues we have highlighted, and suggests other potential approaches.""","""This paper provides an extensive investigation of the robustness of Deep Deterministic Policy Gradient algorithm. Papers providing extensive and qualitative empirical studies, illustrative benchmark domains, identification of problems with existing methods, and new insights can be immensely valuable, and this paper is certainly in this direction, if not quite there yet. The vast majority of this paper investigates one deep learning algorithm in designed domain. There is some theory but it's relegated to the appendix. There are a few issues with this approach: (1) there is no concrete evidence that this is a general issue beyond the provided example (more on that below). (2) Even in the designed domain the problem is extremely rare. (3) The study and perhaps even the issue is only shown for one particular architecture (with a whole host of unspecified meta-parameter details). Why not just use SAC it works? DDPG has other issues, why is it of interest to study and fix this particular architecture? The motivation that it is the first and most popular algorithm is not well developed enough to be convincing. (4) There is really no reasoning to suggest that the particular 1D is representative or interesting in general. The authors including Mujoco results to address #1. But the error bars overlap, its completely unclear if the baseline was tuned at all---this is very problematic as the domains were variants created by the authors. If DDPG was not tuned for the variant then the plots are not representative. In general, there are basically no implementation details (how parameters were tested, how experiments were conducted)or general methodological details given in the paper. Given the evidence provided in this paper its difficult to claim this is a general and important issue. I encourage the authors to look at John Langfords hard exploration tasks, and broaden their view of this work general learning mechanisms. """
1663,"""DO-AutoEncoder: Learning and Intervening Bivariate Causal Mechanisms in Images""","['Causality discovery', 'AutoEncoder', 'Deep representation learning', 'Do-calculus']","""Some fundamental limitations of deep learning have been exposed such as lacking generalizability and being vunerable to adversarial attack. Instead, researchers realize that causation is much more stable than association relationship in data. In this paper, we propose a new framework called do-calculus AutoEncoder(DO-AE) for deep representation learning that fully capture bivariate causal relationship in the images which allows us to intervene in images generation process. DO-AE consists of two key ingredients: causal relationship mining in images and intervention-enabling deep causal structured representation learning. The goal here is to learn deep representations that correspond to the concepts in the physical world as well as their causal structure. To verify the proposed method, we create a dataset named PHY2D, which contains abstract graphic description in accordance with the laws of physics. Our experiments demonstrate our method is able to correctly identify the bivariate causal relationship between concepts in images and the representation learned enables a do-calculus manipulation to images, which generates artificial images that might possibly break the physical law depending on where we intervene the causal system.""","""The idea of integrating causality into an auto-encoder is interesting and very timely. While the reviewers find this paper to contain some interesting ideas, the technical contributions and mathematical rigor, scope of the method, and the presentation of results would need to be significantly improved in order for this work to reach the quality bar of ICLR."""
1664,"""Self-Induced Curriculum Learning in Neural Machine Translation""","['curriculum learning', 'neural machine translation', 'self-supervised learning']","""Self-supervised neural machine translation (SS-NMT) learns how to extract/select suitable training data from comparable (rather than parallel) corpora and how to translate, in a way that the two tasks support each other in a virtuous circle. SS-NMT has been shown to be competitive with state-of-the-art unsupervised NMT. In this study we provide an in-depth analysis of the sampling choices the SS-NMT model takes during training. We show that, without it having been told to do so, the model selects samples of increasing (i) complexity and (ii) task-relevance in combination with (iii) a denoising curriculum. We observe that the dynamics of the mutual-supervision of both system internal representation types is vital for the extraction and hence translation performance. We show that in terms of the human Gunning-Fog Readability index (GF), SS-NMT starts by extracting and learning from Wikipedia data suitable for high school (GF=10--11) and quickly moves towards content suitable for first year undergraduate students (GF=13).""","""This paper presents a method for curriculum learning based on extracting parallel sentences from comparable corpora (wikipedia), and continuously retraining the model based on these examples. Two reviewers pointed out that the initial version of the paper lacked references and baselines from methods of mining parallel sentences from comparable corpora such as Wikipedia. The authors have responded at length and included some of the requested baseline results. This changed one reviewer's score but has not tipped the balance strongly enough for considering this for publication. """
1665,"""Cover Filtration and Stable Paths in the Mapper""","['cover and nerve', 'Jaccard distance', 'stable paths in filtration', 'Mapper', 'recommender systems', 'explainable machine learning']","""The contributions of this paper are two-fold. We define a new filtration called the cover filtration built from a single cover based on a generalized Steinhaus distance, which is a generalization of Jaccard distance. We then develop a language and theory for stable paths within this filtration, inspired by ideas of persistent homology. This framework can be used to develop several new learning representations in applications where an obvious metric may not be defined but a cover is readily available. We demonstrate the utility of our framework as applied to recommendation systems and explainable machine learning. We demonstrate a new perspective for modeling recommendation system data sets that does not require manufacturing a bespoke metric. As a direct application, we find that the stable paths identified by our framework in a movies data set represent a sequence of movies constituting a gentle transition and ordering from one genre to another. For explainable machine learning, we apply the Mapper for model induction, providing explanations in the form of paths between subpopulations. Our framework provides an alternative way of building a filtration from a single mapper that is then used to explore stable paths. As a direct illustration, we build a mapper from a supervised machine learning model trained on the FashionMNIST data set. We show that the stable paths in the cover filtration provide improved explanations of relationships between subpopulations of images. ""","""The paper proposes a filtration based on the covers of data sets and demonstrates its effectiveness in recommendation systems and explainable machine learning. The paper is theory focused, and the discussion was mainly centered around one very detailed and thorough review. The main concerns raised in the reviews and reiterated at the end of the rebuttal cycle was lack of clarity, relatively incremental contribution, and limited experimental evaluation. Due to my limited knowledge of this particular field, I base my recommendation mostly on R1's assessment and recommend rejecting this submission."""
1666,"""FleXOR: Trainable Fractional Quantization""","['Quantization', 'Model Compression', 'Trainable Compression', 'XOR', 'Encryption']","""Parameter quantization is a popular model compression technique due to its regular form and high compression ratio. In particular, quantization based on binary codes is gaining attention because each quantized bit can be directly utilized for computations without dequantization using look-up tables. Previous attempts, however, only allow for integer numbers of quantization bits, which ends up restricting the search space for compression ratio and accuracy. Moreover, quantization bits are usually obtained by minimizing quantization loss in a local manner that does not directly correspond to minimizing the loss function. In this paper, we propose an encryption algorithm/architecture to compress quantized weights in order to achieve fractional numbers of bits per weight and new compression configurations further optimize accuracy/compression trade-offs. Decryption is implemented using XOR gates added into the neural network model and described as pseudo-formula , which enable gradient calculations superior to the straight-through gradient method. We perform experiments using MNIST, CIFAR-10, and ImageNet to show that inserting XOR gates learns quantization/encrypted bit decisions through training and obtains high accuracy even for fractional sub 1-bit weights.""","""This work studies parameter quantization using binary codes and proposes an encryption algorithm/architecture to compress quantized weights and achieve fractional numbers of bits per weight, and to perform decryption using XOR gates. The authors conduct experiments on datasets including ImageNet to evaluate their scheme. Much of the concern from reviewers relates to baseline comparison and details around that. Specifically, R1 believes that the submission could have a bigger impact if authors could conduct more thorough experiments, e.g. compressing more widely-used and challenging architecture of ResNet-50, or trying tasks such as image detection (Mask R-CNN). The authors' responded to that and mentioned their choice of the current experimental setting is to facilitate comparison with previous works (baselines), which use similar experimental settings. Nevertheless, the baseline methods could have been attempted by the authors on broader tasks, or more widely-used architectures could have been investigated by authors on the baseline methods. As a result, R1 was not convinced. To ensure the paper receives the attention it deserves, I recommend considering a more thorough evaluation of the proposed method against baseline methods."""
1667,"""Poincar Wasserstein Autoencoder""","['Variational inference', 'hyperbolic geometry', 'hierarchical latent space', 'representation learning']","""This work presents the Poincar Wasserstein Autoencoder, a reformulation of the recently proposed Wasserstein autoencoder framework on a non-Euclidean manifold, the Poincar ball model of the hyperbolic space H n . By assuming the latent space to be hyperbolic, we can use its intrinsic hierarchy to impose structure on the learned latent space representations. We show that for datasets with latent hierarchies, we can recover the structure in a low-dimensional latent space. We also demonstrate the model in the visual domain to analyze some of its properties and show competitive results on a graph link prediction task.""","""The paper received 3, 3, 6. All reviewers agree that the method is technically interesting. The main concern shared by the reviewers are the experiments which are somewhat underwhelming. The AC believes that this is a solid technical paper that needs a little bit more work. The authors are encouraged to strengthen their evaluation and resubmit to a future conference."""
1668,"""Graph Neural Networks Exponentially Lose Expressive Power for Node Classification""","['Graph Neural Network', 'Deep Learning', 'Expressive Power']","""Graph Neural Networks (graph NNs) are a promising deep learning approach for analyzing graph-structured data. However, it is known that they do not improve (or sometimes worsen) their predictive performance as we pile up many layers and add non-lineality. To tackle this problem, we investigate the expressive power of graph NNs via their asymptotic behaviors as the layer size tends to infinity. Our strategy is to generalize the forward propagation of a Graph Convolutional Network (GCN), which is a popular graph NN variant, as a specific dynamical system. In the case of a GCN, we show that when its weights satisfy the conditions determined by the spectra of the (augmented) normalized Laplacian, its output exponentially approaches the set of signals that carry information of the connected components and node degrees only for distinguishing nodes. Our theory enables us to relate the expressive power of GCNs with the topological information of the underlying graphs inherent in the graph spectra. To demonstrate this, we characterize the asymptotic behavior of GCNs on the Erd\H{o}s -- R\'{e}nyi graph. We show that when the Erd\H{o}s -- R\'{e}nyi graph is sufficiently dense and large, a broad range of GCNs on it suffers from the ``information loss"" in the limit of infinite layers with high probability. Based on the theory, we provide a principled guideline for weight normalization of graph NNs. We experimentally confirm that the proposed weight scaling enhances the predictive performance of GCNs in real data. Code is available at pseudo-url.""","""The paper provides a theoretical analysis of graph neural networks, as the number of layers goes to infinity. For the graph convolutional network, they relate the expressive power of the network with the graph spectra. In particular for Erdos-Renyi graphs, they show that very deep graphs lose information, and propose a new weight normalization scheme based on this insight. The authors responded well to reviewer comments. It is nice to see that the open review nature has also resulted in a new connection. Unfortunately one of the reviewers did not engage further in the discussion with respect to the author rebuttals. Overall, the paper provides a nice theoretical analysis of a widely used graph neural network architecture, and characterises its behaviour on a popular class of graphs. The fact that the theory provides a new approach for weight normalization is a bonus."""
1669,"""Adversarial Paritial Multi-label Learning""",[],"""Partial multi-label learning (PML), which tackles the problem of learning multi-label prediction models from instances with overcomplete noisy annotations, has recently started gaining attention from the research community. In this paper, we propose a novel adversarial learning model, PML-GAN, under a generalized encoder-decoder framework for partial multi-label learning. The PML-GAN model uses a disambiguation network to identify noisy labels and uses a multi-label prediction network to map the training instances to the disambiguated label vectors, while deploying a generative adversarial network as an inverse mapping from label vectors to data samples in the input feature space. The learning of the overall model corresponds to a minimax adversarial game, which enhances the correspondence of input features with the output labels. Extensive experiments are conducted on multiple datasets, while the proposed model demonstrates the state-of-the-art performance for partial multi-label learning.""","""The paper considers a problem of clearly practical importance: multi-label classification where the ground truth label sets are noisy, specifically they are known (or at least assumed) to be a superset of the true ground truth labels. Learning a classifier in this setting require simultaneous identification of irrelevant labels. The proposed solution is a 4-part neural architecture, wherein a multi-label classifier is composed with a disambiguation or ""cleanup"" network, which is used as conditioning input to a conditional GAN which learns an inverse mapping, trained via an adversarial loss and also a least squares reconstruction loss (""generation loss""). Reviews were split 2 to 1 in favour of rejection, and the discussion phase did not resolve this split, as two reviewers did not revisit their assessments. R2 and R3 were concerned about the overall novelty and degeneracy of the inverse mapping problem. R1 increased their score after the rebuttal phase as they felt their concerns were addressed in comments (regarding issues surrounding the related work, the possibility of trivial solutions, and intuition for why the adversarial objective helps), but these were not addressed in the text as no updates were made. I agree with the authors that PML is an important problem (one that receives perhaps less attention than it should from our community), and their empirical validation seems to support that their method outperforms (marginally, in many cases) methods from the literature. While the ablation study offers preliminary evidence that the inverse mapping is responsible for some of the gains, there are a lot of moving parts here and the authors haven't done a great job of motivating why this should help, or investigating why it in fact does. Based on the scores and my own reading of the paper, I'd recommend rejection at this time. My own comments for the authors: I'd urge efforts to clarify the motivation for learning the inverse mapping, in particular adversarially (rather than just with the generation loss) in the text of the paper as you have in your rebuttals, and to improve the notation (the use of both D-tilde and D is confusing, and the omega notation seems unnecessary). I'm also not entirely clear whether the generator is stochastic or not, as the notation doesn't mention a randomly sampled latent variable (the traditional ""z"" here is a conditioning vector). Either way, the answer should be made more explicit."""
1670,"""Learning Cluster Structured Sparsity by Reweighting""","['Sparse Recovery', 'Sparse Representation', 'Structured Sparsity']","""Recently, the paradigm of unfolding iterative algorithms into finite-length feed-forward neural networks has achieved a great success in the area of sparse recovery. Benefit from available training data, the learned networks have achieved state-of-the-art performance in respect of both speed and accuracy. However, the structure behind sparsity, imposing constraint on the support of sparse signals, is often an essential prior knowledge but seldom considered in the existing networks. In this paper, we aim at bridging this gap. Specifically, exploiting the iterative reweighted pseudo-formula minimization (IRL1) algorithm, we propose to learn the cluster structured sparsity (CSS) by rewegihting adaptively. In particular, we first unfold the Reweighted Iterative Shrinkage Algorithm (RwISTA) into an end-to-end trainable deep architecture termed as RW-LISTA. Then instead of the element-wise reweighting, the global and local reweighting manner are proposed for the cluster structured sparse learning. Numerical experiments further show the superiority of our algorithm against both classical algorithms and learning-based networks on different tasks. ""","""The paper is proposed a rejection based on majority reviews."""
1671,"""Explanation by Progressive Exaggeration""","['Explain', 'deep learning', 'black box', 'GAN', 'counterfactual']","""As machine learning methods see greater adoption and implementation in high stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical. Classical approaches that assess feature importance (eg saliency maps) do not explain how and why a particular region of an image is relevant to the prediction. We propose a method that explains the outcome of a classification black-box by gradually exaggerating the semantic effect of a given class. Given a query input to a classifier, our method produces a progressive set of plausible variations of that query, which gradually change the posterior probability from its original class to its negation. These counter-factually generated samples preserve features unrelated to the classification decision, such that a user can employ our method as a ``tuning knob'' to traverse a data manifold while crossing the decision boundary. Our method is model agnostic and only requires the output value and gradient of the predictor with respect to its input.""","""This paper presents an idea for interpolating between two points in the decision-space of a black-box classifier in the image-space, while producing plausible images along the interpolation path. The presentation is clear and the experiments support the premise of the model. While the proposed technique can be used to help understanding how a classifier works, I have strong reservations in calling the generated samples ""explanations"". In particular, there is no reason for the true explanation of how the classifier works to lie in the manifold of plausible images. This constraint is more of a feature to please humans rather than to explain the geometry of the decision boundary. I believe this paper will be well-received and I suggested acceptance, but I believe it will be of limited usefulness for robust understanding of the decision boundary of classifiers."""
1672,"""A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning""","['continual learning', 'task-free', 'task-agnostic']","""Despite the growing interest in continual learning, most of its contemporary works have been studied in a rather restricted setting where tasks are clearly distinguishable, and task boundaries are known during training. However, if our goal is to develop an algorithm that learns as humans do, this setting is far from realistic, and it is essential to develop a methodology that works in a task-free manner. Meanwhile, among several branches of continual learning, expansion-based methods have the advantage of eliminating catastrophic forgetting by allocating new resources to learn new data. In this work, we propose an expansion-based approach for task-free continual learning. Our model, named Continual Neural Dirichlet Process Mixture (CN-DPM), consists of a set of neural network experts that are in charge of a subset of the data. CN-DPM expands the number of experts in a principled way under the Bayesian nonparametric framework. With extensive experiments, we show that our model successfully performs task-free continual learning for both discriminative and generative tasks such as image classification and image generation.""","""This paper proposes an expansion-based approach for task-free continual learning, using a Bayesian nonparametric framework (a Dirichlet process mixture model). It was well-reviewed, with reviewers agreeing that the paper is well-written, the experiments are thorough, and the results are impressive. Another positive is that the code has been released, meaning its likely to be reproducible. The main concern shared among reviewers is the limited novelty of the approach, which I also share. Reviewers all mentioned that the approach itself isnt novel, but they like the contribution of applying it to task-free continual learning. This wasnt mentioned, but Im concerned about the overlap between this approach and CURL (Rao et al 2019) published in NeurIPS 2019, which also deals with task-free continual learning using a generative, nonparametric approach. Could the authors comment on this in their final version? In sum, it seems that this paper is well-done, with reproducible experiments and impressive results, but limited novelty. Given that reviewers are all satisfied with this, Im willing to recommend acceptance. """
1673,"""Hierarchical Bayes Autoencoders""",[],"""Autoencoders are powerful generative models for complex data, such as images. However, standard models like the variational autoencoder (VAE) typically have unimodal Gaussian decoders, which cannot effectively represent the possible semantic variations in the space of images. To address this problem, we present a new probabilistic generative model called the \emph{Hierarchical Bayes Autoencoder (HBAE)}. The HBAE contains a multimodal decoder in the form of an energy-based model (EBM), instead of the commonly adopted unimodal Gaussian distribution. The HBAE can be trained using variational inference, similar to a VAE, to recover latent codes conditioned on inputs. For the decoder, we use an adversarial approximation where a conditional generator is trained to match the EBM distribution. During inference time, the HBAE consists of two sampling steps: first a latent code for the input is sampled, and then this code is passed to the conditional generator to output a stochastic reconstruction. The HBAE is also capable of modeling sets, by inferring a latent code for a set of examples, and sampling set members through the multimodal decoder. In both single image and set cases, the decoder generates plausible variations consistent with the input data, and generates realistic unconditional samples. To the best our knowledge, Set-HBAE is the first model that is able to generate complex image sets.""","""This paper introduces a probabilistic generative model which mixes a variational autoencoder (VAE) with an energy based model (EBM). As mentioned by all reviewers (i) the motivation of the model is not well justified (ii) experimental results are not convincing enough. In addition (iii) handling sets is not specific to the proposed approach, and thus claims regarding sets should be revised. """
1674,"""Training binary neural networks with real-to-binary convolutions""",['binary networks'],"""This paper shows how to train binary networks to within a few percent points (~3-5%) of the full precision counterpart. We first show how to build a strong baseline, which already achieves state-of-the-art accuracy, by combining recently proposed advances and carefully adjusting the optimization procedure. Secondly, we show that by attempting to minimize the discrepancy between the output of the binary and the corresponding real-valued convolution, additional significant accuracy gains can be obtained. We materialize this idea in two complementary ways: (1) with a loss function, during training, by matching the spatial attention maps computed at the output of the binary and real-valued convolutions, and (2) in a data-driven manner, by using the real-valued activations, available during inference prior to the binarization process, for re-scaling the activations right after the binary convolution. Finally, we show that, when putting all of our improvements together, the proposed model beats the current state of the art by more than 5% top-1 accuracy on ImageNet and reduces the gap to its real-valued counterpart to less than 3% and 5% top-1 accuracy on CIFAR-100 and ImageNet respectively when using a ResNet-18 architecture. Code available at pseudo-url""","""This paper proposes methodology to train binary neural networks. The reviewers and authors engaged in a constructive discussion. All the reviewers like the contributions of the paper. Acceptance is therefore recommended."""
1675,"""NADS: Neural Architecture Distribution Search for Uncertainty Awareness""","['Neural Architecture Search', 'Bayesian ensembling', 'out-of-distribution detection', 'uncertainty quantification', 'density estimation']","""Machine learning systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a different distribution from the one used for training. With their growing use in critical applications, it becomes important to develop systems that are able to accurately quantify its predictive uncertainty and screen out these anomalous inputs. However, unlike standard learning tasks, there is currently no well established guiding principle for designing architectures that can accurately quantify uncertainty. Moreover, commonly used OoD detection approaches are prone to errors and even sometimes assign higher likelihoods to OoD samples. To address these problems, we first seek to identify guiding principles for designing uncertainty-aware architectures, by proposing Neural Architecture Distribution Search (NADS). Unlike standard neural architecture search methods which seek for a single best performing architecture, NADS searches for a distribution of architectures that perform well on a given task, allowing us to identify building blocks common among all uncertainty aware architectures. With this formulation, we are able to optimize a stochastic outlier detection objective and construct an ensemble of models to perform OoD detection. We perform multiple OoD detection experiments and observe that our NADS performs favorably compared to state-of-the-art OoD detection methods.""","""This paper introduces a neural architecture search method that is geared towards yielding good uncertainty estimates for out-of-distribution (OOD) samples. The reviewers found that the OOD prediction results are strong, but criticized various points, including the presentation of the OOD results, novelty as a NAS paper, missing citations to some recent papers, and a lack of baselines with simpler ensembles. The authors improved the presentation of their OOD results and provided new experiments, which causes one reviewer to increase his/her score from a weak reject to an accept. The other reviewers appreciated the rebuttal, but preferred not to change their scores from a weak reject and a reject, mostly due to lack of novelty as a NAS paper. I also read the paper, and my personal opinion is that it would definitely be very novel to have a good neural architecture search for handling uncertainty in deep learning; it is by no means the case that ""NAS for X"" is not interesting just because there are now a few papers for ""NAS for Y"". As long as X is relevant (which uncertainty in deep learning definitely is), and NAS finds a new state-of-the-art, I think this is great. For such an ""application"" paper of the NAS methodology, I do not find it necessary to introduce a novel NAS method, but just applying an existing one would be fine. The problem is more that the paper claims to introduce a new method, but that that method is too similar to existing ones, without a comparison; actually just using an existing NAS method would therefore make the contribution and the emphasis on the application domain clearer. 
I have one small question to the authors about a part that I did not understand: to optimize WAIC (Eq 1), why is it not optimal to just set the parameterization \phi such that the variance is minimized, i.e., return a delta distribution p_\phi that always returns the same architecture (one with a strong prediction)? Surely, that's not what the authors want, but wouldn't that minimize WAIC? I hope the authors will clarify this in a future version. In the private discussion of reviewers and AC, the most positive reviewer emphasized that the OOD results are strong, but admitted that the mixed sentiment is understandable since people who do not follow OOD detection could miss the importance and context of the results, and that the paper could definitely improve its messaging. The other reviewers' scores remained at 1 and 3, but the reviewers indicated that they would be positive about a future version of the paper that fixed the identified issues. My recommendation is to reject the paper and encourage the authors to continue this work and resubmit an improved version to a future venue."""
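For readers unfamiliar with the WAIC objective discussed above, a common ensemble estimate (as used in OoD detection work) scores each input by the mean log-likelihood across sampled models minus its variance. The sketch below is that generic estimate, not necessarily the paper's exact Eq. 1.

```python
import numpy as np

def waic_score(log_liks):
    """WAIC-style uncertainty score from an ensemble of models (e.g.,
    architectures sampled from a learned distribution): mean log-likelihood
    minus the variance across ensemble members. `log_liks` has shape
    (n_models, n_inputs)."""
    log_liks = np.asarray(log_liks)
    return log_liks.mean(axis=0) - log_liks.var(axis=0)

# Inputs with a low score (low mean likelihood and/or high disagreement
# between ensemble members) are flagged as out-of-distribution.
```

Note how the variance term penalizes inputs on which the ensemble disagrees, which also explains why collapsing the architecture distribution to a single model is undesirable in practice, echoing the question raised in the meta-review.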
1676,"""Conservative Uncertainty Estimation By Fitting Prior Networks""","['uncertainty quantification', 'deep learning', 'Gaussian process', 'epistemic uncertainty', 'random network', 'prior', 'Bayesian inference']","""Obtaining high-quality uncertainty estimates is essential for many applications of deep neural networks. In this paper, we theoretically justify a scheme for estimating uncertainties, based on sampling from a prior distribution. Crucially, the uncertainty estimates are shown to be conservative in the sense that they never underestimate a posterior uncertainty obtained by a hypothetical Bayesian algorithm. We also show concentration, implying that the uncertainty estimates converge to zero as we get more data. Uncertainty estimates obtained from random priors can be adapted to any deep network architecture and trained using standard supervised learning pipelines. We provide experimental evaluation of random priors on calibration and out-of-distribution detection on typical computer vision tasks, demonstrating that they outperform deep ensembles in practice.""","""The paper provides theoretical justification for a previously proposed method for uncertainty estimation based on sampling from a prior distribution (Osband et al., Burda et al.). The reviewers initially raised concerns about significance, clarity and experimental evaluation, but the author rebuttal addressed most of these concerns. In the end, all the reviewers agreed that the paper deserves to be accepted."""
1677,"""TrojanNet: Exposing the Danger of Trojan Horse Attack on Neural Networks""",['machine learning security'],"""The complexity of large-scale neural networks can lead to poor understanding of their internal details. We show that this opaqueness provides an opportunity for adversaries to embed unintended functionalities into the network in the form of Trojan horse attacks. Our novel framework hides the existence of a malicious network within a benign transport network. Our attack is flexible, easy to execute, and difficult to detect. We prove theoretically that the malicious network's detection is computationally infeasible and demonstrate empirically that the transport network does not compromise its disguise. Our attack exposes an important, previously unknown loophole that unveils a new direction in machine learning security.""","""This paper presents a very creative threat model for neural networks. The proposed attack requires systems-level intervention by the attacker, which prompts the reviewers to question how realistic the attack is, and whether it is well motivated by the authors. After conversing with the reviewers on this topic, they have not changed their minds about these issues. As an AC, I think the threat model is both interesting and potentially realistic in some scenarios, however I agree with the reviewers that the motivation for the threat model could be more powerful. For example the authors could focus more on realistic types of malicious behaviors that a developer could embed into a neural network. I also think there are lots of opportunities for a range of applications that exploit the type of ""two nets in one"" behavior that the authors study. Despite the interesting ideas in this paper, the post-rebuttal scores are not strong enough to accept it. I encourage the authors to address some of these presentation issues, and resubmit this interesting paper to another venue."""
1678,"""Regularizing activations in neural networks via distribution matching with the Wasserstein metric""","['regularization', 'Wasserstein metric', 'deep learning']","""Regularization and normalization have become indispensable components in training deep neural networks, resulting in faster training and improved generalization performance. We propose the projected error function regularization loss (PER) that encourages activations to follow the standard normal distribution. PER randomly projects activations onto one-dimensional space and computes the regularization loss in the projected space. PER is similar to the Pseudo-Huber loss in the projected space, thus taking advantage of both L2 and L1 regularization losses. Besides, PER can capture the interaction between hidden units by a projection vector drawn from the unit sphere. By doing so, PER minimizes the upper bound of the Wasserstein distance of order one between an empirical distribution of activations and the standard normal distribution. To the best of the authors' knowledge, this is the first work to regularize activations via distribution matching in the probability distribution space. We evaluate the proposed method on the image classification task and the word-level language modeling task.""","""This paper presents an interesting and novel idea that is likely to be of interest to the community. The most negative reviewer did not acknowledge the author response. The AC recommends acceptance."""
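The abstract describes PER as projecting activations onto random one-dimensional directions and penalizing their distance to a standard normal. The sketch below substitutes a sorting-based empirical Wasserstein-1 penalty for the paper's closed-form projected-error-function loss, so it illustrates the mechanism rather than reproducing PER itself; all names and constants are ours.

```python
import torch

def projected_w1_penalty(h: torch.Tensor) -> torch.Tensor:
    """Empirical 1-D W1 penalty between projected activations and N(0, 1).

    h: (batch, features) activations. A random unit vector projects the
    batch to one dimension; sorted projections (empirical quantiles) are
    matched against standard-normal quantiles.
    """
    v = torch.randn(h.shape[1])
    v = v / v.norm()                    # random direction on the unit sphere
    proj, _ = torch.sort(h @ v)         # empirical quantiles of the projection
    n = proj.shape[0]
    targets = torch.distributions.Normal(0.0, 1.0).icdf((torch.arange(n) + 0.5) / n)
    return (proj - targets).abs().mean()

print(projected_w1_penalty(torch.randn(128, 32) * 3 + 1))  # mis-scaled: large
print(projected_w1_penalty(torch.randn(128, 32)))          # standard: near zero
```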
1679,"""Do Deep Neural Networks for Segmentation Understand Insideness?""","['Image Segmentation', 'Deep Networks for Spatial Relationships', 'Visual Routines', 'Recurrent Neural Networks']","""Image segmentation aims at grouping pixels that belong to the same object or region. At the heart of image segmentation lies the problem of determining whether a pixel is inside or outside a region, which we denote as the ""insideness"" problem. Many Deep Neural Network (DNN) variants excel in segmentation benchmarks, but regarding insideness, they have not been well visualized or understood: What representations do DNNs use to address the long-range relationships of insideness? How do architectural choices affect the learning of these representations? In this paper, we take the reductionist approach by analyzing DNNs solving the insideness problem in isolation, i.e. determining the inside of closed (Jordan) curves. We demonstrate analytically that state-of-the-art feed-forward and recurrent architectures can implement solutions of the insideness problem for any given curve. Yet, only recurrent networks could learn these general solutions when the training enforced a specific ""routine"" capable of breaking down the long-range relationships. Our results highlight the need for new training strategies that decompose the learning into appropriate stages, and that lead to the general class of solutions necessary for DNNs to understand insideness.""","""This paper investigates a notion of recognizing insideness (i.e., whether a pixel is inside a closed curve/shape in the image) with deep networks. It's an interesting problem, and the authors provide analysis on the limitations of existing architectures (e.g., feedforward and recurrent networks) and present a trick to handle the long-range relationships. While the topic is interesting, the constructed datasets are quite artificial and it's unclear how this study can lead to practically useful results (e.g., improvement in semantic segmentation, etc.)."""
1680,"""Relation-based Generalized Zero-shot Classification with the Domain Discriminator on the shared representation""",[],"""Generalized zero-shot learning (GZSL) is the task of predicting a test image from seen or unseen classes using pre-defined class-attributes and images from the seen classes. Typical ZSL models assign the class corresponding to the most relevant attribute as the predicted label of the test image based on the learned relation between the attribute and the image. However, this relation-based approach presents a difficulty: many of the test images are predicted as biased to the seen domain, i.e., the ""domain bias problem"". Recently, many methods have addressed this difficulty using a synthesis-based approach that, however, requires generation of large amounts of high-quality unseen images after training and the additional training of a classifier given them. Therefore, for this study, we aim at alleviating this difficulty in the manner of the relation-based approach. First, we consider the requirements for good performance in a ZSL setting and introduce a new model based on a variational autoencoder that learns to embed attributes and images into a shared representation space which satisfies those requirements. Next, we assume that the domain bias problem in GZSL derives from a situation in which the embedding of the unseen domain overlaps that of the seen one. We introduce a discriminator that distinguishes domains in the shared space and learns jointly with the above embedding model to prevent this situation. After training, we can obtain prior knowledge from the discriminator of which domain is more likely to be embedded anywhere in the shared space. We propose a combination of this knowledge and relation-based classification on the embedded shared space as a mixture model to compensate the class prediction. Experimentally obtained results confirm that the proposed method significantly improves the domain bias problem in relation-based settings and achieves almost equal accuracy to that of high-cost synthesis-based methods.""","""This paper proposes a relation-based model that extends the VAE to explicitly alleviate the domain bias problem between seen and unseen classes in the setting of generalized zero-shot learning. Reviewers and AC think that the studied problem is interesting, the reported experimental results are strong, and the writing is clear, but the proposed model and the scientific reasoning for why it is valuable are somewhat limited. Thus the authors are encouraged to further improve in these directions. In particular: - The idea of using a variant of the widely-used domain discriminator to make seen and unseen classes distinguishable somewhat contradicts the basic principle of zero-shot learning. How to trade off the balance between seen and unseen classes has been an important problem in generalized ZSL. These problems need further elaboration. - The proposed model itself is not a real ""VAE"", making the value of an extensive derivation based on variational inference less prominent. - There is also the need to compare with the baselines mentioned by the reviewers. Overall, this is a borderline paper. Since the above concerns were not addressed convincingly in the rebuttal, I am leaning towards rejection."""
1681,"""Input Complexity and Out-of-distribution Detection with Likelihood-based Generative Models""","['OOD', 'generative models', 'likelihood']","""Likelihood-based generative models are a promising resource to detect out-of-distribution (OOD) inputs which could compromise the robustness or reliability of a machine learning system. However, likelihoods derived from such models have been shown to be problematic for detecting certain types of inputs that significantly differ from training data. In this paper, we posit that this problem is due to the excessive influence that input complexity has in generative models' likelihoods. We report a set of experiments supporting this hypothesis, and use an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be seen as a likelihood-ratio, akin to Bayesian model comparison. We find this score performs comparably to, or even better than, existing OOD detection approaches under a wide range of data sets, models, model sizes, and complexity estimates.""","""I have read the paper and the reviews carefully. Despite the numerical scores, I think this paper is above the bar for ICLR, and recommend acceptance. This paper addresses the now-well-known problem that generative models often assign higher likelihoods to out-of-distribution examples, rendering likelihoods useless for OOD detection. They diagnose this as resulting from differences in compressibility of the input, and propose to compensate for this by comparing the log-likelihood to the description length from a strong image compressor. They show this performs well against a variety of OOD detection methods. The idea is a natural one, and certainly should have been one of the first things tried in addressing this phenomenon. I'm a little surprised it hasn't been done before, but none of the reviewers or I are aware of a prior reference, so AFAIK it's novel. One reviewer believes the contribution is small; while it's simple, I think the field will benefit from a careful implementation and testing of this approach. Multiple reviewers raise the concern of whether generative models' bias towards low-complexity inputs is just a matter of needing better generative models. I don't think so: even arbitrarily good generative models will still be limited by the inherent compressibility of an input (e.g. as measured by Kolmogorov complexity). I'm also not concerned about the lack of an explicit threshold; if one has proposed a good score function, there are many ways one could choose a threshold, depending on the task."""
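To make the score concrete, here is a hedged sketch of the complexity-adjusted statistic described above: the model's negative log-likelihood in bits minus a generic compressor's code length, giving a likelihood-ratio-like quantity. The choice of zlib, the sign convention, and the made-up bits-per-dimension figure are our assumptions; the paper evaluates several compressors.

```python
import zlib
import numpy as np

def complexity_bits(image_u8: np.ndarray) -> float:
    """Complexity estimate in bits from a generic lossless compressor."""
    return 8.0 * len(zlib.compress(image_u8.tobytes(), 9))

def ood_score(nll_bits: float, image_u8: np.ndarray) -> float:
    """Model NLL (bits) minus compressor code length. In-distribution inputs
    should be coded better by the model than by the generic compressor, so
    larger values point towards OOD (sign convention is ours)."""
    return nll_bits - complexity_bits(image_u8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(ood_score(nll_bits=3.1 * img.size, image_u8=img))  # 3.1 bits/dim is made up
```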
1682,"""PROTOTYPE-ASSISTED ADVERSARIAL LEARNING FOR UNSUPERVISED DOMAIN ADAPTATION""","['Domain Adaptation', 'Transfer Learning', 'Adversarial Learning']","""This paper presents a generic framework to tackle the crucial class mismatch problem in unsupervised domain adaptation (UDA) for multi-class distributions. Previous adversarial learning methods condition domain alignment only on pseudo labels, but noisy and inaccurate pseudo labels may perturb the multi-class distribution embedded in probabilistic predictions, hence bringing insufficient alleviation to the latent mismatch problem. Compared with pseudo labels, class prototypes are more accurate and reliable since they summarize over all the instances and are able to represent the inherent semantic distribution shared across domains. Therefore, we propose a novel Prototype-Assisted Adversarial Learning (PAAL) scheme, which incorporates instance probabilistic predictions and class prototypes together to provide reliable indicators for adversarial domain alignment. With the PAAL scheme, we align both the instance feature representations and class prototype representations to alleviate the mismatch among semantically different classes. Also, we exploit the class prototypes as a proxy to minimize the within-class variance in the target domain to mitigate the mismatch among semantically similar classes. With these novelties, we constitute a Prototype-Assisted Conditional Domain Adaptation (PACDA) framework which well tackles the class mismatch problem. We demonstrate the good performance and generalization ability of the PAAL scheme and the PACDA framework on two UDA tasks, i.e., object recognition (Office-Home, ImageCLEF-DA, and Office) and synthetic-to-real semantic segmentation (GTA5→Cityscapes and Synthia→Cityscapes).""","""The paper focuses on adversarial domain adaptation, and proposes an approach inspired by DANN. The contribution lies in additional terms in the loss, aimed to i) align the source and target prototypes in each class (using pseudo labels for target examples); ii) minimize the variance of the latent representations for each class in the target domain. Reviews point out that the expected benefits of target prototypes might be ruined if the pseudo-labels are too noisy; they note that the specific problem needs to be more clearly formalized and they regret the lack of clarity of the text. The sensitivity w.r.t. the hyper-parameter values needs to be assessed more thoroughly. One also notes that SAFN is one of the baseline methods, but its best variant (with entropic regularization) is not considered, while its performance is on par with or greater than that of PACDA for ImageCLEF-DA; idem for AdapSeg (consider its multi-level variant) or AdvEnt with MinEnt. For these reasons, the paper seems premature for publication at ICLR 2020."""
1683,"""Understanding Attention Mechanisms""","['Attention', 'deep learning', 'sample complexity', 'self-attention']","""Attention mechanisms have advanced the state of the art in several machine learning tasks. Despite significant empirical gains, there is a lack of theoretical analyses on understanding their effectiveness. In this paper, we address this problem by studying the landscape of population and empirical loss functions of attention-based neural networks. Our results show that, under mild assumptions, every local minimum of a two-layer global attention model has low prediction error, and attention models require lower sample complexity than models not employing attention. We then extend our analyses to the popular self-attention model, proving that it delivers consistent predictions with a more expressive class of functions. Additionally, our theoretical results provide several guidelines for designing attention mechanisms. Our findings are validated with satisfactory experimental results on the MNIST and IMDB reviews datasets.""","""This paper aims to theoretically understand the benefit of attention mechanisms. The reviewers agreed that better understanding of attention mechanisms is an important direction. However, the paper studies a weaker form of attention which does not correspond well to the attention models used in the literature. The paper should better motivate why the theoretical results for this restricted model would carry over to more realistic mechanisms."""
1684,"""Mirror Descent View For Neural Network Quantization""","['mirror descent', 'network quantization', 'numerical stability']","""Quantizing large Neural Networks (NN) while maintaining the performance is highly desirable for resource-limited devices due to reduced memory and time complexity. NN quantization is usually formulated as a constrained optimization problem and optimized via a modified version of gradient descent. In this work, by interpreting the continuous parameters (unconstrained) as the dual of the quantized ones, we introduce a Mirror Descent (MD) framework (Bubeck (2015)) for NN quantization. Specifically, we provide conditions on the projections (i.e., mappings from continuous to quantized parameters) which enable us to derive valid mirror maps and in turn the respective MD updates. Furthermore, we discuss a numerically stable implementation of MD by storing an additional set of auxiliary dual variables (continuous). This update is strikingly analogous to the popular Straight Through Estimator (STE) based method, which is typically viewed as a trick to avoid the vanishing gradients issue, but here we show that it is an implementation method for MD for certain projections. Our experiments on standard classification datasets (CIFAR-10/100, TinyImageNet) with convolutional and residual architectures show that our MD variants obtain fully-quantized networks with accuracies very close to the floating-point networks.""","""The paper proposes to use the mirror descent algorithm for training binary networks. It is easy to read. However, novelty over ProxQuant is somewhat limited. The theoretical analysis is weak, in that there is no analysis of convergence, nor of how to choose the projection for constructing the mirror map. Experimental results could also be made more convincing by adding comparisons on bigger datasets and state-of-the-art networks, and an ablation study to demonstrate why mirror descent is better than proximal gradient descent in this application."""
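A toy sketch of the numerically stable implementation described in the abstract: continuous dual variables are stored alongside the quantized (primal) weights, and the straight-through trick routes gradients from the quantized forward pass back to the duals. The ternary threshold projection and the toy objective are illustrative assumptions, not the paper's derived mirror maps.

```python
import torch

def quantize_ternary(w: torch.Tensor, thresh: float = 0.05) -> torch.Tensor:
    # Projection from continuous duals to ternary primal weights.
    return torch.sign(w) * (w.abs() > thresh).float()

w_dual = torch.randn(10, requires_grad=True)   # auxiliary continuous variables
opt = torch.optim.SGD([w_dual], lr=0.1)

for _ in range(50):
    # Straight-through forward: the loss sees quantized weights, but the
    # gradient flows to the duals, mirroring the MD update for this projection.
    w_q = w_dual + (quantize_ternary(w_dual) - w_dual).detach()
    loss = ((w_q - 0.7) ** 2).mean()           # toy stand-in for a training loss
    opt.zero_grad(); loss.backward(); opt.step()

print(quantize_ternary(w_dual))
```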
1685,"""Double Neural Counterfactual Regret Minimization""","['Counterfactual Regret Minimization', 'Imperfect Information game', 'Neural Strategy', 'Deep Learning', 'Robust Sampling']","""Counterfactual regret minimization (CFR) is a fundamental and effective technique for solving Imperfect Information Games (IIG). However, the original CFR algorithm only works for discrete state and action spaces, and the resulting strategy is maintained as a tabular representation. Such tabular representation limits the method from being directly applied to large games. In this paper, we propose a double neural representation for the IIGs, where one neural network represents the cumulative regret, and the other represents the average strategy. Such neural representations allow us to avoid manual game abstraction and carry out end-to-end optimization. To make the learning efficient, we also developed several novel techniques including a robust sampling method and a mini-batch Monte Carlo Counterfactual Regret Minimization (MCCFR) method, which may be of independent interest. Empirically, on games tractable to tabular approaches, neural strategies trained with our algorithm converge comparably to their tabular counterparts, and significantly outperform those based on deep reinforcement learning. On extremely large games with billions of decision nodes, our approach achieved strong performance while using hundreds of times less memory than the tabular CFR. On head-to-head matches of heads-up no-limit Texas hold'em, our neural agent beat the strong agent ABS-CFR by pseudo-formula chips per game. It's a successful application of neural CFR in large games.""","""Double neural counterfactual regret minimization is an extension of neural counterfactual regret minimization that uses separate policy and regret networks (reminiscent of similar extensions of the basic RL formulation in reinforcement learning). Several new algorithmic modifications are added to improve the performance. The reviewers agree that this paper is novel, sound, and interesting. One of the reviewers had a set of questions that the authors responded to, seemingly satisfactorily. Given that this seems to be a high-quality paper with no obvious issues, it should be accepted."""
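For context on the tabular rule that the two networks above approximate (one for cumulative regret, one for the average strategy), here is the standard regret-matching update at the core of CFR; the example numbers are arbitrary.

```python
import numpy as np

def regret_matching(cum_regret: np.ndarray) -> np.ndarray:
    """Current strategy from cumulative counterfactual regrets: actions are
    played in proportion to positive regret, uniformly if none is positive."""
    pos = np.maximum(cum_regret, 0.0)
    total = pos.sum()
    if total > 0:
        return pos / total
    return np.full_like(cum_regret, 1.0 / len(cum_regret))

print(regret_matching(np.array([4.0, -2.0, 1.0])))  # -> [0.8, 0.0, 0.2]
```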
1686,"""Adjustable Real-time Style Transfer""","['Image Style Transfer', 'Deep Learning']","""Artistic style transfer is the problem of synthesizing an image with content similar to a given image and style similar to another. Although recent feed-forward neural networks can generate stylized images in real-time, these models produce a single stylization given a pair of style/content images, and the user doesn't have control over the synthesized output. Moreover, the style transfer depends on the hyper-parameters of the model with varying ""optimum"" for different input images. Therefore, if the stylized output is not appealing to the user, she/he has to try multiple models or retrain one with different hyper-parameters to get a favorite stylization. In this paper, we address these issues by proposing a novel method which allows adjustment of crucial hyper-parameters, after the training and in real-time, through a set of manually adjustable parameters. These parameters enable the user to modify the synthesized outputs from the same pair of style/content images, in search of a favorite stylized image. Our quantitative and qualitative experiments indicate how adjusting these parameters is comparable to retraining the model with different hyper-parameters. We also demonstrate how these parameters can be randomized to generate results which are diverse but still very similar in style and content.""","""This paper offers an innovative approach to adjusting style transfer parameters. The reviewers were consistent, and all recommend acceptance. I concur."""
1687,"""Learning Nearly Decomposable Value Functions Via Communication Minimization""","['Multi-agent reinforcement learning', 'Nearly decomposable value function', 'Minimized communication', 'Multi-agent systems']","""Reinforcement learning encounters major challenges in multi-agent settings, such as scalability and non-stationarity. Recently, value function factorization learning has emerged as a promising way to address these challenges in collaborative multi-agent systems. However, existing methods have been focusing on learning fully decentralized value functions, which are not efficient for tasks requiring communication. To address this limitation, this paper presents a novel framework for learning nearly decomposable Q-functions (NDQ) via communication minimization, with which agents act on their own most of the time but occasionally send messages to other agents for effective coordination. This framework hybridizes value function factorization learning and communication learning by introducing two information-theoretic regularizers. These regularizers maximize the mutual information between agents' action selection and communication messages while minimizing the entropy of messages between agents. We show how to optimize these regularizers in a way that is easily integrated with existing value function factorization methods such as QMIX. Finally, we demonstrate that, on the StarCraft unit micromanagement benchmark, our framework significantly outperforms baseline methods and allows us to cut off more than pseudo-formula of communication without sacrificing the performance. The videos of our experiments are available at pseudo-url.""","""The paper extends recent value function factorization methods for the case where limited agent communication is allowed. The work is interesting and well motivated. The reviewers brought up a number of mostly minor issues, such as unclear terms and missing implementation details. As far as I can see, the authors have addressed these issues successfully in their updated version. Hence, my recommendation is accept."""
1688,"""Disentangled Representation Learning with Sequential Residual Variational Autoencoder""","['Disentangled Representation Learning', 'Variational Autoencoder', 'Residual Learning']","""Recent advancements in unsupervised disentangled representation learning focus on extending the variational autoencoder (VAE) with an augmented objective function to balance the trade-off between disentanglement and reconstruction. We propose the Sequential Residual Variational Autoencoder (SR-VAE), which defines a ""residual learning"" mechanism as the training regime instead of an augmented objective function. Our proposed solution deploys two important ideas in a single framework: (1) learning from the residual between the input data and the accumulated reconstruction of sequentially added latent variables; (2) decomposing the reconstruction into decoder output and a residual term. This formulation encourages disentanglement in the latent space by inducing explicit dependency structure, and reduces the bottleneck of the VAE by adding the residual term to facilitate reconstruction. More importantly, SR-VAE eliminates hyperparameter tuning, a crucial step for the prior state-of-the-art performance using the objective function augmentation approach. We demonstrate both qualitatively and quantitatively that SR-VAE improves the state-of-the-art unsupervised disentangled representation learning on a variety of complex datasets.""","""This paper defines a ""residual learning"" mechanism as the training regime for the variational autoencoder. The method gradually activates individual latent variables to reconstruct residuals. There are two main concerns from the reviewers. First, residual learning is a common trick now, so the authors should provide insights on why residual learning works for the VAE. The other problem is computational complexity: the reviewers argue that it is not really fair to compare against a brute-force parameter search. The authors' rebuttal partially addresses these problems but does not meet the reviewers' standard. Based on the reviewers' comments, I choose to reject the paper."""
1689,"""NESTED LEARNING FOR MULTI-GRANULAR TASKS""",['Nested learning'],"""Standard deep neural networks (DNNs) used for classification are trained in an end-to-end fashion for very specific tasks - object recognition, face identification, character recognition, etc. This specificity often leads to overconfident models that generalize poorly to samples that are not from the original training distribution. Moreover, they do not allow leveraging information from heterogeneously annotated data, where, for example, labels may be provided with different levels of granularity. Finally, standard DNNs do not produce results with simultaneous different levels of confidence for different levels of detail; they are most commonly an all-or-nothing approach. To address these challenges, we introduce the problem of nested learning: how to obtain a hierarchical representation of the input such that a coarse label can be extracted first, and sequentially refine this representation to obtain successively refined predictions, all of them with the corresponding confidence. We explicitly enforce this behaviour by creating a sequence of nested information bottlenecks. Looking at the problem of nested learning from an information theory perspective, we design a network topology with two important properties. First, a sequence of low-dimensional (nested) feature embeddings is enforced. Then we show how the explicit combination of nested outputs can improve both robustness and finer predictions. Experimental results on CIFAR-10, MNIST, and FASHION-MNIST demonstrate that nested learning outperforms the same network trained in the standard end-to-end fashion. Since the network can be naturally trained with mixed data labeled at different levels of nested detail, we also study the most efficient way of annotating data when a fixed training budget is given and the cost of labels increases with the levels in the nested hierarchy.""","""This paper proposes a model architecture and training procedure for multiple nested label sets of varying granularities and shows improvements in efficiency over simple baselines in the number of fine-grained training labels needed to reach a given level of performance. Reviewers did not raise any serious concerns about the method that was presented, but they were also not convinced that it represented a sufficiently novel or impactful contribution to an open problem. Without any reviewer advocating for the paper, even after discussion, I have no choice but to recommend rejection. I'm open to the possibility that there is substantial technical value here, but I think this work would be well served by more extensive comparisons and a potentially revamped motivation to try to make the case for that value more directly."""
1690,"""Generalized Inner Loop Meta-Learning""",['meta-learning'],"""Many (but not all) approaches self-qualifying as ""meta-learning"" in deep learning and reinforcement learning fit a common pattern of approximating the solution to a nested optimization problem. In this paper, we give a formalization of this shared pattern, which we call GIMLI, prove its general requirements, and derive a general-purpose algorithm for implementing similar approaches. Based on this analysis and algorithm, we describe a library of our design, unnamedlib, which we share with the community to assist and enable future research into these kinds of meta-learning approaches. We end the paper by showcasing the practical applications of this framework and library through illustrative experiments and ablation studies which they facilitate.""","""The reviewers agree that the technical innovations presented in this paper are not great enough to justify acceptance. The authors correctly point out to the reviewers that the ICLR CFP states that the topics of ""implementation issues, parallelization, software platforms, hardware"" are acceptable. I would point out that most papers in these spaces describe *technical innovations* that enable improvements in ""parallelization, software platforms, hardware"" rather than implementations of these improvements. However, it is certainly true that a software package is an acceptable (although less common) basis for a publication, provided it is sufficiently unique and impactful. After pointing this out to the reviewers and collecting opinions, the reviewers do not feel the combined technical and software contributions of this paper are enough to justify acceptance."""
1691,"""Cost-Effective Testing of a Deep Learning Model through Input Reduction""","['Software Testing', 'Deep Learning', 'Input Data Reduction']","""With the increasing adoption of Deep Learning (DL) models in various applications, testing DL models is vitally important. However, testing DL models is costly, especially when developers explore alternative designs of DL models and tune the hyperparameters. To reduce testing cost, we propose to use only a selected subset of the testing data, which is small but representative enough for quick estimation of the performance of DL models. Our approach, called DeepReduce, adopts a two-phase strategy. First, our approach selects testing data for the purpose of satisfying testing adequacy. Then, it selects more testing data in order to approximate the distribution between the whole testing data and the selected data, leveraging relative entropy minimization. Experiments with various DL models and datasets show that our approach can reduce the whole testing data to 4.6% on average, and can reliably estimate the performance of DL models. Our approach significantly outperforms the random approach, and is more stable and reliable than the state-of-the-art approach.""","""This paper presents a method which creates a representative subset of testing examples so that the model can be tested quickly during training. The procedure makes use of the well-known HGS selection algorithm, which identifies and then eliminates redundant and obsolete test cases based on two criteria: (1) structural coverage, as measured by the number of neurons activated beyond a certain threshold, and (2) distribution mismatch (as measured by KL divergence) of the last-layer activations. The algorithm has two phases: (1) a greedy subset selection based on coverage, and (2) an iterative phase where additional test examples are added until the KL divergence (as defined above) falls below some threshold. This approach is incremental in nature -- the resulting multi-objective optimisation problem is not a significant improvement over BOT. After the discussion phase, we believe that the advantages over BOT were not clearly demonstrated and that the main drawback of BOT (requiring the number of samples) does not hinder practical applications. Finally, the empirical evaluation is performed on very small data sets and I do not see an efficient way to apply it to larger data sets where this reduction could be significant. Hence, I will recommend the rejection of this paper. To merit acceptance to ICLR, the authors need to provide a cleaner presentation (especially of the algorithms), with a focus on the incremental improvements over BOT, an empirical analysis on larger datasets, and a detailed look into the computational aspects of the proposed approach."""
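A rough, hedged sketch of the two-phase strategy summarized above: a greedy coverage phase over activated neurons, followed by a phase that adds inputs until the selected outputs match the full set in KL divergence. The thresholds, the greedy criterion, and the random scan order are our assumptions and almost certainly differ from DeepReduce's actual algorithm.

```python
import numpy as np

def select_subset(acts, probs, cov_thresh=0.5, kl_tol=0.01, seed=0):
    """acts: (n, d) hidden activations; probs: (n, k) softmax outputs."""
    n, d = acts.shape
    covered, chosen = np.zeros(d, dtype=bool), []
    # Phase 1: greedily cover neurons activated above the threshold.
    while not covered.all():
        gains = ((acts > cov_thresh) & ~covered).sum(axis=1)
        if gains.max() == 0:
            break
        i = int(gains.argmax())
        covered |= acts[i] > cov_thresh
        chosen.append(i)
    # Phase 2: add inputs until the selected output distribution matches
    # the full one (relative entropy minimization).
    full = probs.mean(axis=0)
    for i in np.random.default_rng(seed).permutation(n):
        sel = probs[chosen].mean(axis=0)
        kl = float((full * np.log((full + 1e-12) / (sel + 1e-12))).sum())
        if kl < kl_tol:
            break
        if int(i) not in chosen:
            chosen.append(int(i))
    return chosen

acts = np.random.default_rng(1).random((200, 32))
probs = np.random.default_rng(2).dirichlet(np.ones(10), size=200)
print(len(select_subset(acts, probs)))
```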
1692,"""Learning The Difference That Makes A Difference With Counterfactually-Augmented Data""","['humans in the loop', 'annotation artifacts', 'text classification', 'sentiment analysis', 'natural language inference']","""Despite alarm over the reliance of machine learning systems on so-called spurious patterns, the term lacks coherent meaning in standard statistical frameworks. However, the language of causality offers clarity: spurious associations are due to confounding (e.g., a common cause), but not direct or indirect causal effects. In this paper, we focus on natural language processing, introducing methods and resources for training models less sensitive to spurious patterns. Given documents and their initial labels, we task humans with revising each document so that it (i) accords with a counterfactual target label; (ii) retains internal coherence; and (iii) avoids unnecessary changes. Interestingly, on sentiment analysis and natural language inference tasks, classifiers trained on original data fail on their counterfactually-revised counterparts and vice versa. Classifiers trained on combined datasets perform remarkably well, just shy of those specialized to either domain. While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are less sensitive to this signal. Both datasets are publicly available.""","""This paper introduces the idea of a counterfactually augmented dataset, in which each example is paired with a manually constructed example with a different label that makes the minimal possible edit to the original example that makes that label correct. The paper justifies the value of these datasets as an aid in both understanding and building classifiers that are robust to spurious features, and releases two small examples. On my reading, this paper presents a very substantially new idea that is relevant to a major ongoing debate in the applied machine learning literature: How do we build models that learn some intended behavior, where the primary evidence we have of that behavior comes in the form of datasets with spurious correlations/artifacts? One reviewer argued for rejection on the grounds that dataset papers are not appropriate for publication at a main conference. I don't find that argument compelling, and I'm also not sure that it's accurate to call this paper primarily a dataset paper. We could not reach a complete consensus after further discussion. The other reviews raised some additional concerns about the paper, but the revised manuscript appears to have addressed them to the extent possible."""
1693,"""Learning from Positive and Unlabeled Data with Adversarial Training""",['Positive and Unlabeled learning'],"""Positive-unlabeled (PU) learning learns a binary classifier using only positive and unlabeled examples, without labeled negative examples. This paper shows that the GAN (Generative Adversarial Network) style of adversarial training is quite suitable for PU learning. A GAN learns a generator to generate data (e.g., images) to fool a discriminator which tries to determine whether the generated data belong to a (positive) training class. PU learning is similar and can be naturally cast as trying to identify (not generate) likely positive data from the unlabeled set, also to fool a discriminator that determines whether the identified likely positive data from the unlabeled set (U) are indeed positive (P). A direct adaptation of GANs for PU learning does not produce a strong classifier. This paper proposes a more effective method called Predictive Adversarial Networks (PAN), using a new objective function based on KL-divergence, which performs much better. Empirical evaluation using both image and text data shows the effectiveness of PAN.""","""Thanks for your feedback to the reviewers, which helped us a lot to better understand your paper. Through the discussion, the overall evaluation of this paper was significantly improved. However, given the very high competition at ICLR 2020, this submission is still below the bar, unfortunately. We hope that the discussion with the reviewers will help you improve your paper for potential future publication."""
1694,"""Short and Sparse Deconvolution --- A Geometric Approach""",[],"""Short-and-sparse deconvolution (SaSD) is the problem of extracting localized, recurring motifs in signals with spatial or temporal structure. Variants of this problem arise in applications such as image deblurring, microscopy, neural spike sorting, and more. The problem is challenging in both theory and practice, as natural optimization formulations are nonconvex. Moreover, practical deconvolution problems involve smooth motifs (kernels) whose spectra decay rapidly, resulting in poor conditioning and numerical challenges. This paper is motivated by recent theoretical advances (Zhang et al., 2017; Kuo et al., 2019), which characterize the optimization landscape of a particular nonconvex formulation of SaSD. This is used to derive a provable algorithm that exactly solves certain non-practical instances of the SaSD problem. We leverage the key ideas from this theory (sphere constraints, data-driven initialization) to develop a practical algorithm, which performs well on data arising from a range of application areas. We highlight key additional challenges posed by the ill-conditioning of real SaSD problems and suggest heuristics (acceleration, continuation, reweighting) to mitigate them. Experiments demonstrate the performance and generality of the proposed method.""","""The work considers the short-and-sparse blind deconvolution problem, which is to invert a convolution of a sparse source (such as spikes at cell locations in microscopy) with a short (of limited spatial size) kernel or point spread function, not known in advance. This is posed as a bilinear lasso optimization problem. The work applies a non-linear optimization method with some practical improvements (such as data-driven initialization, momentum, homotopy continuation). The paper extends the work by Kuo et al. (2019) by providing a practical algorithm for solving those inverse problems. A focus of the paper is to solve the bilinear lasso instead of the approximate bilinear lasso, because this approximation is poor for coherent problems. Having read the rebuttal and the paper, I believe the authors addressed the issues raised by Reviewer #2 in a sufficient way. Small things: - it would be good to define pseudo-formula (zero-padding operator) in (1) - it would be good to define $p_0$ just below (3); it seems to appear out of the blue without any direct relation to anything mentioned prior in Section 2. - it would be good to cite some older/historic references for various optimization methods, e.g. [1] below. [1] Richter & deCarlo, Continuation methods: Theory and applications, IEEE Transactions on Systems, Man, and Cybernetics, 1983 pseudo-url"""
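For reference, the bilinear lasso objective discussed above can be written as follows; the symbol for the zero-padding operator and the exact scaling are our notational assumptions, following the sphere constraint and sparsity penalty described in the review.

```latex
\min_{\|a\|_2 = 1,\, x}\ \tfrac{1}{2}\,\big\| y - \iota(a) \circledast x \big\|_2^2 + \lambda \|x\|_1
```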
1695,"""Generative Ratio Matching Networks""","['deep generative model', 'deep learning', 'maximum mean discrepancy', 'density ratio estimation']","""Deep generative models can learn to generate realistic-looking images, but many of the most effective methods are adversarial and involve a saddlepoint optimization, which requires a careful balancing of training between a generator network and a critic network. Maximum mean discrepancy networks (MMD-nets) avoid this issue by using a kernel as a fixed adversary, but unfortunately, they have not on their own been able to match the generative quality of adversarial training. In this work, we take their insight of using kernels as fixed adversaries further and present a novel method for training deep generative models that does not involve saddlepoint optimization. We call our method generative ratio matching, or GRAM for short. In GRAM, the generator and the critic networks do not play a zero-sum game against each other; instead, they each play against a fixed kernel. Thus GRAM networks are not only stable to train like MMD-nets, but they also match and beat the generative quality of adversarially trained generative networks.""","""The paper proposes a training method for generative adversarial networks that avoids solving a zero-sum game between the generator and the critic, hence leading to more stable optimization problems. It is similar to MMD-GAN, in which MMD is computed on a projected low-dimensional space, but the projection is trained to match the density ratio between the observed and the latent space. The reviewers raised several questions. Most of them have been addressed after several rounds of discussion. Overall, they are all positive about this paper, so I recommend acceptance. I encourage the authors to incorporate those discussions in their revised paper."""
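GRAM's fixed adversary is an MMD criterion evaluated in a learned low-dimensional projection trained by density-ratio matching; the sketch below shows only the MMD ingredient, with a Gaussian kernel. The bandwidth, dimensions, and biased estimator are illustrative choices.

```python
import torch

def mmd2(x: torch.Tensor, y: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD with a Gaussian kernel: the 'fixed
    adversary' that replaces a trained critic's zero-sum objective."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

real = torch.randn(128, 8)
print(mmd2(real, torch.randn(128, 8) + 1.0))  # shifted samples: clearly positive
print(mmd2(real, torch.randn(128, 8)))        # matched samples: near zero
```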
1696,"""Observational Overfitting in Reinforcement Learning""","['observational', 'overfitting', 'reinforcement', 'learning', 'generalization', 'implicit', 'regularization', 'overparametrization']","""A major component of overfitting in model-free reinforcement learning (RL) involves the case where the agent may mistakenly correlate reward with certain spurious features from the observations generated by the Markov Decision Process (MDP). We provide a general framework for analyzing this scenario, which we use to design multiple synthetic benchmarks by modifying only the observation space of an MDP. When an agent overfits to different observation spaces even though the underlying MDP dynamics is fixed, we term this observational overfitting. Our experiments expose intriguing properties, especially with regard to implicit regularization, and also corroborate results from previous works in RL generalization and supervised learning (SL).""","""The paper proposes a way to analyze overfitting to non-relevant parts of the state space in RL and proposes a framework to measure this type of generalization error. All reviewers agree that the formulation is interesting and useful for practical RL."""
1697,"""Learning Time-Aware Assistance Functions for Numerical Fluid Solvers""","['PDEs', 'convolutional neural networks', 'numerical simulation', 'fluids']","""Improving the accuracy of numerical methods remains a central challenge in many disciplines and is especially important for nonlinear simulation problems. A representative example of such problems is fluid flow, which has been thoroughly studied to arrive at efficient simulations of complex flow phenomena. This paper presents a data-driven approach that learns to improve the accuracy of numerical solvers. The proposed method utilizes an advanced numerical scheme with a fine simulation resolution to acquire reference data. We then employ a neural network that infers a correction to move a coarse, and thus quickly obtainable, result closer to the reference data. We provide insights into the targeted learning problem with different learning approaches: fully supervised learning methods with a naive and an optimized data acquisition, as well as an unsupervised learning method with a differentiable Navier-Stokes solver. While our approach is very general and applicable to arbitrary partial differential equation models, we specifically highlight gains in accuracy for fluid flow simulations.""","""This paper provides a data-driven approach that learns to improve the accuracy of numerical solvers. It addresses an important problem and provides a promising direction. However, the presented paper is not novel in terms of ML methodology. The presentation could be significantly improved for an ML audience (e.g., it would be preferable to explicitly state the problem setting at the beginning of Section 3)."""
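A toy analogue of the setup described above, substituting a 1-D heat equation for Navier-Stokes: one cheap coarse step is corrected by a small convolutional network trained against a more finely resolved reference. All sizes, step counts, and the network shape are our assumptions, not the paper's.

```python
import torch
import torch.nn as nn

def heat_step(u: torch.Tensor, dt: float) -> torch.Tensor:
    # Explicit finite-difference step of the 1-D heat equation (toy solver).
    lap = torch.roll(u, 1, -1) - 2 * u + torch.roll(u, -1, -1)
    return u + dt * lap

corr = nn.Sequential(nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
                     nn.Conv1d(16, 1, 3, padding=1))
opt = torch.optim.Adam(corr.parameters(), lr=1e-3)

u0 = torch.randn(64, 1, 32)
with torch.no_grad():                   # reference: finer time resolution
    ref = u0
    for _ in range(4):
        ref = heat_step(ref, dt=0.025)

for _ in range(200):
    coarse = heat_step(u0, dt=0.1)      # one cheap coarse step
    loss = ((coarse + corr(coarse) - ref) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```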
1698,"""Enhancing Attention with Explicit Phrasal Alignments""","['NMT', 'Phrasal Attention', 'Machine Translation', 'Language Modeling']","""The attention mechanism is an indispensable component of any state-of-the-art neural machine translation system. However, existing attention methods are often token-based and ignore the importance of phrasal alignments, which are the backbone of phrase-based statistical machine translation. We propose a novel phrase-based attention method to model n-grams of tokens as the basic attention entities, and design multi-headed phrasal attentions within the Transformer architecture to perform token-to-token and token-to-phrase mappings. Our approach yields improvements in English-German, English-Russian and English-French translation tasks on the standard WMT'14 test set. Furthermore, our phrasal attention method shows improvements on the one-billion-word language modeling benchmark.""","""This paper proposes a phrase-based attention method to model word n-grams (as opposed to single words) as the basic attention units. Multi-headed phrasal attentions are designed within the Transformer architecture to perform token-to-token and token-to-phrase mappings. Some improvements are shown in English-German, English-Russian and English-French translation tasks on the standard WMT'14 test set, and on the one-billion-word language modeling benchmark. While the proposed approach is interesting and takes inspiration from the notion of phrases used in phrase-based machine translation, with some positive empirical results, the technical novelty of this paper is rather limited, and the experiments could be more solid. While it is understandable that a lack of computational resources made it hard to experiment with larger models (e.g. Transformer-big), perhaps it would be interesting to try language pairs with fewer resources (smaller datasets), where base models are more competitive."""
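One hedged reading of token-to-phrase attention: n-gram (phrase) keys and values are produced by a 1-D convolution over the sequence, and tokens attend over those phrase summaries. The paper's actual design composes several such mappings per multi-head layer inside the Transformer, so this is a sketch of the idea, not the proposed architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhraseAttention(nn.Module):
    """Token-to-phrase attention with conv-derived n-gram keys/values."""
    def __init__(self, d_model: int, n: int = 2):
        super().__init__()
        self.phrase = nn.Conv1d(d_model, d_model, kernel_size=n, padding=n - 1)
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d)
        seq = x.shape[1]
        # n-gram summaries of the sequence, trimmed back to seq positions.
        ph = self.phrase(x.transpose(1, 2)).transpose(1, 2)[:, :seq]
        scores = self.q(x) @ self.k(ph).transpose(1, 2) / x.shape[-1] ** 0.5
        return F.softmax(scores, dim=-1) @ self.v(ph)

out = PhraseAttention(d_model=16, n=2)(torch.randn(4, 10, 16))
print(out.shape)  # torch.Size([4, 10, 16])
```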
1699,"""Probing Emergent Semantics in Predictive Agents via Question Answering""","['question-answering', 'predictive models']","""Recent work has demonstrated how predictive modeling can endow agents with rich knowledge of their surroundings, improving their ability to act in complex environments. We propose question-answering as a general paradigm to decode and understand the representations that such agents develop, applying our method to two recent approaches to predictive modeling: action-conditional CPC (Guo et al., 2018) and SimCore (Gregor et al., 2019). After training agents with these predictive objectives in a visually-rich, 3D environment with an assortment of objects, colors, shapes, and spatial configurations, we probe their internal state representations with a host of synthetic (English) questions, without backpropagating gradients from the question-answering decoder into the agent. The performance of different agents when probed in this way reveals that they learn to encode detailed, and seemingly compositional, information about objects, properties and spatial relations from their physical environment. Our approach is intuitive, i.e. humans can easily interpret the responses of the model as opposed to inspecting continuous vectors, and model-agnostic, i.e. applicable to any modeling approach. By revealing the implicit knowledge of objects, quantities, properties and relations acquired by agents as they learn, question-conditional agent probing can stimulate the design and development of stronger predictive learning objectives.""","""This paper proposes question-answering as a general paradigm to decode and understand the representations that agents develop, with application to two recent approaches to predictive modeling. After the rebuttal, some critical issues still exist; e.g., as Reviewer #3 pointed out, the submission in its current form lacks experimental analysis of the proposed conditional probes, especially the trade-offs in the reliability of the representation analysis when performed with a conditional probe, as well as a clear motivation for the need for a language interface. The authors are encouraged to incorporate the refined motivation and add a more comprehensive experimental evaluation for a possible resubmission."""
1700,"""Learning Entailment-Based Sentence Embeddings from Natural Language Inference""","['sentence embeddings', 'textual entailment', 'natural language inference', 'interpretability']","""Large datasets on natural language inference are a potentially valuable resource for inducing semantic representations of natural language sentences. But in many such models, the embeddings computed by the sentence encoder go through an MLP-based interaction layer before the label is predicted, and thus some of the information about textual entailment is encoded in the interpretation of sentence embeddings given by this parameterised MLP. In this work we propose a simple interaction layer based on predefined entailment and contradiction scores applied directly to the sentence embeddings. This parameter-free interaction model achieves results on natural language inference competitive with MLP-based models, demonstrating that the trained sentence embeddings directly represent the information needed for textual entailment, and the inductive bias of this model leads to better generalisation to other related datasets.""","""This paper proposes a method for learning sentence embeddings such that entailment and contradiction relationships between sentence pairs can be inferred by a simple parameter-free operation on the vectors for the two sentences. Reviewers found the method and the results interesting, but in private discussion, couldn't reach a consensus on what (if any) substantial valuable contributions the paper had demonstrated. The performance of the method isn't compellingly strong in absolute or relative terms, yielding doubts about the value of the method for entailment applications, and the reviewers didn't see a strong enough motivation for the line of work to justify publishing it as a tentative or exploratory effort at ICLR."""
1701,"""Revisiting Self-Training for Neural Sequence Generation""","['self-training', 'semi-supervised learning', 'neural sequence generation']","""Self-training is one of the earliest and simplest semi-supervised methods. The key idea is to augment the original labeled dataset with unlabeled data paired with the model's predictions (i.e. the pseudo-parallel data). While self-training has been extensively studied on classification problems, in complex sequence generation tasks (e.g. machine translation) it is still unclear how self-training works due to the compositionality of the target space. In this work, we first empirically show that self-training is able to decently improve the supervised baseline on neural sequence generation tasks. Through careful examination of the performance gains, we find that the perturbation on the hidden states (i.e. dropout) is critical for self-training to benefit from the pseudo-parallel data, which acts as a regularizer and forces the model to yield close predictions for similar unlabeled inputs. This effect helps the model correct some incorrect predictions on unlabeled data. To further encourage this mechanism, we propose to inject noise into the input space, resulting in a noisy version of self-training. Empirical study on standard machine translation and text summarization benchmarks shows that noisy self-training is able to effectively utilize unlabeled data and improve the performance of the supervised baseline by a large margin.""","""This paper analyzes self-training for sequence-to-sequence models and proposes a noisy version of self-training. An empirical study shows the proposed noisy version improves results for machine translation and summarization tasks. All reviewers appreciate the interesting contributions of the research, as well as the clear writing. They also offer several comments for the revision of the paper. We look forward to seeing this paper presented at the conference!"""
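A schematic of the noisy self-training loop studied above, assuming toy stand-in fit/predict machinery in place of a real sequence-to-sequence model: pseudo-targets are produced on clean inputs and paired with noised versions of those inputs for retraining. Token dropping is one of several input perturbations one could use here.

```python
import random

class ToyModel:
    """Stand-in for a seq2seq model: memorizes (source, target) pairs and
    backs off to echoing the source. Only here to make the loop runnable."""
    def fit(self, pairs):
        self.table = {tuple(x): y for x, y in pairs}
    def predict(self, x):
        return self.table.get(tuple(x), list(x))

def perturb(tokens, drop_p=0.1, rng=random.Random(0)):
    # Input-space noise: randomly drop source tokens before pairing the
    # noisy input with the pseudo-target predicted from the clean input.
    kept = [t for t in tokens if rng.random() > drop_p]
    return kept or list(tokens)

def noisy_self_training(model, labeled, unlabeled, rounds=3):
    model.fit(labeled)
    for _ in range(rounds):
        pseudo = [(perturb(x), model.predict(x)) for x in unlabeled]
        model.fit(labeled + pseudo)     # retrain on real + noisy pseudo pairs
    return model

m = noisy_self_training(ToyModel(),
                        labeled=[(["a", "b"], ["A", "B"])],
                        unlabeled=[["a", "b", "c"]])
print(m.predict(["a", "b"]))  # -> ['A', 'B']
```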
1702,"""Non-Autoregressive Dialog State Tracking""","['task-oriented', 'dialogues', 'dialogue state tracking', 'non-autoregressive']","""Recent efforts in Dialogue State Tracking (DST) for task-oriented dialogues have progressed toward open-vocabulary or generation-based approaches, where the models can generate slot value candidates from the dialogue history itself. These approaches have shown good performance gains, especially in complicated dialogue domains with dynamic slot values. However, they fall short in two aspects: (1) they do not allow models to explicitly learn signals across domains and slots to detect potential dependencies among (domain, slot) pairs; and (2) existing models follow auto-regressive approaches which incur high time cost when the dialogue evolves over multiple domains and multiple turns. In this paper, we propose a novel framework of Non-Autoregressive Dialog State Tracking (NADST) which can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots. In particular, the non-autoregressive nature of our method not only enables decoding in parallel to significantly reduce the latency of DST for real-time dialogue response generation, but also detects dependencies among slots at the token level in addition to the slot and domain level. Our empirical results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus, and the latency of our model is an order of magnitude lower than the previous state of the art as the dialogue history extends over time.""","""(Please note that I am basing the meta-review on two reviews plus my own thorough read of the paper.) This paper proposes an interesting adaptation of the non-autoregressive neural encoder-decoder models previously proposed for machine translation to dialog state tracking. Experimental results demonstrate state-of-the-art performance on MultiWOZ, a multi-domain dialog corpus. The reviewers suggest that while the NA approach is not novel, the authors' adaptation of the approach to dialog state tracking and detailed experimental analysis are interesting and convincing. Hence I suggest accepting the paper as a poster presentation."""
1703,"""IS THE LABEL TRUSTFUL: TRAINING BETTER DEEP LEARNING MODEL VIA UNCERTAINTY MINING NET""","['Semi-supervised Learning', 'Robust Learning', 'Deep Generative Model']","""In this work, we consider a new problem of training a deep neural network on partially labeled data with label noise. As far as we know, there have been very few efforts to tackle such problems. We present a novel end-to-end deep generative pipeline for improving classifier performance when dealing with such data problems. We call it Uncertainty Mining Net (UMN). During the training stage, we utilize all the available data (labeled and unlabeled) to train the classifier via a semi-supervised generative framework. During training, UMN estimates the uncertainty of the labels to focus on clean data for learning. More precisely, UMN applies a sample-wise label uncertainty estimation scheme. Extensive experiments and comparisons against state-of-the-art methods on several popular benchmark datasets demonstrate that UMN can reduce the effects of label noise and significantly improve classifier performance.""","""The paper presents an interesting idea, but all reviewers pointed out problems with the writing (e.g., clarity of the motivation) and with the motivation of the experiments and link to the contest. The rebuttal helped, but it is clear that the paper requires more work before being acceptable to ICLR."""
1704,"""Rethinking Neural Network Quantization""","['Deep Learning', 'Convolutional Network', 'Network Quantization', 'Efficient Learning']","""Quantization reduces the computation costs of neural networks but suffers from performance degradation. Is this accuracy drop due to the reduced capacity, or to inefficient training during the quantization procedure? After looking into the gradient propagation process of neural networks by viewing the weights and intermediate activations as random variables, we discover two critical rules for efficient training. Recent quantization approaches violate the two rules and result in degraded convergence. To deal with this problem, we propose a simple yet effective technique, named scale-adjusted training (SAT), which complies with the discovered rules and facilitates efficient training. We also analyze the quantization error introduced in calculating the gradient in the popular parameterized clipping activation (PACT) technique. Through SAT together with gradient-calibrated PACT, quantized models obtain comparable or even better performance than their full-precision counterparts, achieving state-of-the-art accuracy with consistent improvement over previous quantization methods on a wide spectrum of models including MobileNet-V1/V2 and PreResNet-50.""","""The submission proposes methodology for quantizing neural networks. The reviewers were unanimous in their opinion that the paper is not suitable for publication at ICLR. Concerns included novelty over previous works, comparatively weak baseline comparisons, and overly restrictive assumptions."""
1705,"""ROBUST GENERATIVE ADVERSARIAL NETWORK""","['Generative Adversarial Network', 'Robustness', 'Deep Learning']","""Generative adversarial networks (GANs) are powerful generative models, but usually suffer from instability which may lead to poor generations. Most existing works try to alleviate this problem by focusing on stabilizing the training of the discriminator, which unfortunately ignores the robustness of generator and discriminator. In this work, we consider the robustness of GANs and propose a novel robust method called robust generative adversarial network (RGAN). Particularly, we design a robust optimization framework where the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball. The generator tries to map the worst input distribution (rather than a specific input distribution, typically a Gaussian distribution used in most GANs) to the real data distribution, while the discriminator attempts to distinguish the real and fake distribution with the worst perturbation. We have provided theories showing that the generalization of the new robust framework can be guaranteed. A series of experiments on CIFAR-10, STL-10 and CelebA datasets indicate that our proposed robust framework can improve consistently on four baseline GAN models. We also provide ablation analysis and visualization showing the efficacy of our method on both generator and discriminator quantitatively and qualitatively.""","""This work proposes a robust variant of GAN, in which the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball. Unfortunately, the reviewers have raised some critical concerns in terms of theoretical analysis and empirical support. The authors did not submit rebuttals in time. We encourage the authors to improve the work based on reviewer's comments. """
1706,"""CGT: Clustered Graph Transformer for Urban Spatio-temporal Prediction""","['Unsmooth spatiotemporal forecasting', 'Clustered graph neural network', 'Graph-Transformer', 'Urban computing']","""Deep learning based approaches have been widely used in various urban spatio-temporal forecasting problems, but most of them fail to account for the unsmoothness issue of urban data in their architecture design, which significantly deteriorates their prediction performance. The aim of this paper is to develop a novel clustered graph transformer framework that integrates both graph attention network and transformer under an encoder-decoder architecture to address such unsmoothness issue. Specifically, we propose two novel structural components to refine the architectures of those existing deep learning models. In spatial domain, we propose a gradient-based clustering method to distribute different feature extractors to regions in different contexts. In temporal domain, we propose to use multi-view position encoding to address the periodicity and closeness of urban time series data. Experiments on real datasets obtained from a ride-hailing business show that our method can achieve 10\%-25\% improvement than many state-of-the-art baselines. ""","""This paper proposes an approach to handle the problem of unsmoothness while modeling spatio-temporal urban data. However all reviewers have pointed major issues with the presentation of the work, and whether the method's complexity is justified. """
1707,"""Hybrid Weight Representation: A Quantization Method Represented with Ternary and Sparse-Large Weights""","['quantized neural networks', 'centralized quantization', 'hybrid weight representation', 'weighted ridge', 'ternary weight']","""Previous ternarizations such as the trained ternary quantization (TTQ), which quantized weights to three values (e.g., {Wn, 0,+Wp}), achieved the small model size and efficient inference process. However, the extreme limit on the number of quantization steps causes some degradation in accuracy. To solve this problem, we propose a hybrid weight representation (HWR) method which produces a network consisting of two types of weights, i.e., ternary weights (TW) and sparse-large weights (SLW). The TW is similar to the TTQs and requires three states to be stored in memory with 2 bits. We utilize the one remaining state to indicate the SLW which is referred to as very rare and greater than TW. In HWR, we represent TW with values while SLW with indices of values. By encoding SLW, the networks can preserve their model size with improving their accuracy. To fully utilize HWR, we also introduce a centralized quantization (CQ) process with a weighted ridge (WR) regularizer. They aim to reduce the entropy of weight distributions by centralizing weights toward ternary values. Our comprehensive experiments show that HWR outperforms the state-of-the-art compressed models in terms of the trade-off between model size and accuracy. Our proposed representation increased the AlexNet performance on CIFAR-100 by 4.15% with only1.13% increase in model size.""","""The paper proposes a hybrid weighs representation method in deep networks. The authors propose to utilize the extra state in 2-bit ternary representation to encode large weight values. The idea is simple and straightforward. The main concern is on the experimental results. The use of mixed bit width for neural network quantization is not new, but the authors only compare with basic quantization method in the original submission. In the revised version of the paper, the proposed method performs significantly worse than recent quantization methods such as PACT and QIL. Moreover, writing can be improved, and parts of the paper need to be clarified."""
1708,"""MelNet: A Generative Model for Audio in the Frequency Domain""",[],"""Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales which time-domain models have yet to achieve. We demonstrate that our model captures longer-range dependencies than time-domain models such as WaveNet across a diverse set of unconditional generation tasks, including single-speaker speech generation, multi-speaker speech generation, and music generation.""","""The paper proposed an autoregressive model with a multiscale generative representation of the spectrograms to better modeling the long term dependencies in audio signals. The techniques developed in the paper are novel and interesting. The main concern is the validation of the method. The paper presented some human listening studies to compare long-term structure on unconditional samples, which as also mentioned by reviewers are not particularly useful. Including justifications on the usefulness of the learned representation for any downstream task would make the work much more solid. """
1709,"""Noisy Machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation""","['network noise robustness', 'analog accelerator', 'noise injection', 'distillation', 'error rate']","""The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware has been heavily motivated specifically for accelerating neural networks, based on either electronic, optical or photonic devices, which may well achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks to be robust to random continuous noise in the network weights, which is a somewhat new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output. To combat this, we propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks, which is demonstrated experimentally across different networks and datasets, including ImageNet. Our method achieves models with as much as 2X greater noise tolerance compared with the previous best attempts, which is a significant step towards making analog hardware practical for deep learning.""","""This paper argues that NNs deployed to hardware needs to robust to additive noise and introduces two methods to achieve this. The reviewers liked aspects of the paper and the paper is borderline. However, all in all sufficient reservations were raised to put the paper below the threshold. The criticism was constructive and can be used in an updated version submitted to next conference. Rejection is recommended."""
1710,"""ODE Analysis of Stochastic Gradient Methods with Optimism and Anchoring for Minimax Problems and GANs""","['GAN', 'minimax problems', 'stochastic gradients']","""Despite remarkable empirical success, the training dynamics of generative adversarial networks (GAN), which involves solving a minimax game using stochastic gradients, is still poorly understood. In this work, we analyze last-iterate convergence of simultaneous gradient descent (simGD) and its variants under the assumption of convex-concavity, guided by a continuous-time analysis with differential equations. First, we show that simGD, as is, converges with stochastic sub-gradients under strict convexity in the primal variable. Second, we generalize optimistic simGD to accommodate an optimism rate separate from the learning rate and show its convergence with full gradients. Finally, we present anchored simGD, a new method, and show convergence with stochastic subgradients.""","""Motivated by GANs, the authors study the convergence of stochastic subgradient descent on convex-concave minimax games. They introduced an improved ""anchored"" SGD variant, that provably converges under milder assumptions that the base algorithm. It is applied to training GANs on MNIST and CIFAR-10, partially showing improvements over alternative training methods. A main point of criticism that the reviewers identify is the strength of the assumptions needed for the analysis. Furthermore, the experimental results were deemed weak as the reported scores are far away from the SOTA, and only simple baselines were compared against. """
1711,"""Adversarial Video Generation on Complex Datasets""","['GAN', 'generative model', 'generative adversarial network', 'video prediction']","""Generative models of natural images have progressed towards high fidelity samples by the strong leveraging of scale. We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of substantially higher complexity and fidelity than previous work. Our proposed model, Dual Video Discriminator GAN (DVD-GAN), scales to longer and higher resolution videos by leveraging a computationally efficient decomposition of its discriminator. We evaluate on the related tasks of video synthesis and video prediction, and achieve new state-of-the-art Frchet Inception Distance for prediction for Kinetics-600, as well as state-of-the-art Inception Score for synthesis on the UCF-101 dataset, alongside establishing a strong baseline for synthesis on Kinetics-600.""","""This paper addresses the tasks of video generation and prediction and shows impressive results on the datasets such as Kinetics-600. There is a reviewer disagreement on this paper. AC can confirm that all three reviewers have read the rebuttal and have contributed to a long discussion. The reviewers have raised the following concerns that were viewed as critical issues when making the final decision: R1 and R3 expressed the concerns regarding limited technical novelty of the proposed approach in light of the prior works, e.g. MoCoGAN and TGANv2. R3 suggests, that the proposed method shows advantage that might be due to the large computational resources available to train the model. Providing a comparison of the proposed model and the relevant baselines on the Kinetics dataset is desirable to access the benefits of the proposed approach (R1). AC also agrees with the R2 about the potential impact this work could have in the community. However, given that the reviewers have raised important concerns and have given suggestions, the paper needs too many revisions for acceptance at this time. We hope the reviews are useful for improving and revising the paper."""
1712,"""Energy-Aware Neural Architecture Optimization with Fast Splitting Steepest Descent""","['Neural architecture optimization', 'splitting steepest descent']","""Designing energy-efficient networks is of critical importance for enabling state-of-the-art deep learning in mobile and edge settings where the computation and energy budgets are highly limited. Recently, Wu et al. (2019) framed the search of efficient neural architectures into a continuous splitting process: it iteratively splits existing neurons into multiple off-springs to achieve progressive loss minimization, thus finding novel architectures by gradually growing the neural network. However, this method was not specifically tailored for designing energy-efficient networks, and is computationally expensive on large-scale benchmarks. In this work, we substantially improve Wu et al. (2019) in two significant ways: 1) we incorporate the energy cost of splitting different neurons to better guide the splitting process, thereby discovering more energy-efficient network architectures; 2) we substantially speed up the splitting process of Wu et al. (2019), which requires expensive eigen-decomposition, by proposing a highly scalable Rayleigh-quotient stochastic gradient algorithm. Our fast algorithm allows us to reduce the computational cost of splitting to the same level of typical back-propagation updates and enables efficient implementation on GPU. Extensive empirical results show that our method can train highly accurate and energy-efficient networks on challenging datasets such as ImageNet, improving a variety of baselines, including the pruning-based methods and expert-designed architectures.""","""This paper extends previous work on searching for good neural architectures by iteratively growing a network, including energy-aware metrics during the process. There was discussion about the extent of the novelty of this work and how well it was evaluated, and in the end the reviewers felt it was not quite ready for publicaiton."""
1713,"""POP-Norm: A Theoretically Justified and More Accelerated Normalization Approach""","['Batch Normalization', 'Optimization', 'Accelerate Training']","""Batch Normalization (BatchNorm) has been a default module in modern deep networks due to its effectiveness for accelerating training deep neural networks. It is widely accepted that the great success of BatchNorm is owing to reduction of internal covariate shift (ICS), but recently it is demonstrated that the link between them is fairly weak. The intrinsic reason behind effectiveness of BatchNorm is still unrevealed that limits it to be made better use. In light of this, we propose a new normalization approach, referred to as Pre-Operation Normalization (POP-Norm), which is theoretically ensured to speed up the training convergence. Not surprisingly, POP-Norm and BatchNorm are largely the same. Hence the similarities can help us to theoretically interpret the root of BatchNorm's effectiveness. There are still some significant distinctions between the two approaches. Just the distinctions make POP-Norm achieve faster convergence rate and better performance than BatchNorm, which are validated in extensive experiments on benchmark datasets: CIFAR10, CIFAR100 and ILSVRC2012.""","""The authors propose an alternative to batch norm, which they call POP-norm, and provide theoretical justification for POP-norm in nonconvex optimization on the basis of variance reduction. They then present empirical arguments. One of the most cogent reviewers believed the theoretical results were known and the empirical arguments unconvincing because the method is similar to batch norm up to a change in learning rate and some minor differences. Unfortunately, the reviewers did not engage with the author rebuttals at all. The authors seem to have addressed most points. However, if the reviewers are unwilling to engage, despite multiple emails, there's not much I can do, short of redoing the whole process from scratch. And I'll take the lack of engagement as lack of interest by the reviewers. Not being an expert in optimization myself, I'm not going to override the scores. I do know enough to know that there are standard bounds for both convex and nonconvex optimization that improve with decreased variance. """
1714,"""Maxmin Q-learning: Controlling the Estimation Bias of Q-learning""","['reinforcement learning', 'bias and variance reduction']","""Q-learning suffers from overestimation bias, because it approximates the maximum action value using the maximum estimated action value. Algorithms have been proposed to reduce overestimation bias, but we lack an understanding of how bias interacts with performance, and the extent to which existing algorithms mitigate bias. In this paper, we 1) highlight that the effect of overestimation bias on learning efficiency is environment-dependent; 2) propose a generalization of Q-learning, called \emph{Maxmin Q-learning}, which provides a parameter to flexibly control bias; 3) show theoretically that there exists a parameter choice for Maxmin Q-learning that leads to unbiased estimation with a lower approximation variance than Q-learning; and 4) prove the convergence of our algorithm in the tabular case, as well as convergence of several previous Q-learning variants, using a novel Generalized Q-learning framework. We empirically verify that our algorithm better controls estimation bias in toy environments, and that it achieves superior performance on several benchmark problems. ""","""The authors propose the use of an ensembling scheme to remove over-estimation bias in Q-Learning. The idea is simple but well-founded on theory and backed by experimental evidence. The authors also extensively clarified distinctions between their idea and similar ideas in the reinforcement learning literature in response to reviewer concerns."""
1715,"""Rotation-invariant clustering of neuronal responses in primary visual cortex""","['computational neuroscience', 'neural system identification', 'functional cell types', 'deep learning', 'rotational equivariance']","""Similar to a convolutional neural network (CNN), the mammalian retina encodes visual information into several dozen nonlinear feature maps, each formed by one ganglion cell type that tiles the visual space in an approximately shift-equivariant manner. Whether such organization into distinct cell types is maintained at the level of cortical image processing is an open question. Predictive models building upon convolutional features have been shown to provide state-of-the-art performance, and have recently been extended to include rotation equivariance in order to account for the orientation selectivity of V1 neurons. However, generally no direct correspondence between CNN feature maps and groups of individual neurons emerges in these models, thus rendering it an open question whether V1 neurons form distinct functional clusters. Here we build upon the rotation-equivariant representation of a CNN-based V1 model and propose a methodology for clustering the representations of neurons in this model to find functional cell types independent of preferred orientations of the neurons. We apply this method to a dataset of 6000 neurons and visualize the preferred stimuli of the resulting clusters. Our results highlight the range of non-linear computations in mouse V1.""","""This paper is enthusiastically supported by all three reviewers. Thus an accept is recommended."""
1716,"""Gradients as Features for Deep Representation Learning""","['representation learning', 'gradient features', 'deep learning']","""We address the challenging problem of deep representation learning -- the efficient adaption of a pre-trained deep network to different tasks. Specifically, we propose to explore gradient-based features. These features are gradients of the model parameters with respect to a task-specific loss given an input sample. Our key innovation is the design of a linear model that incorporates both gradient and activation of the pre-trained network. We demonstrate that our model provides a local linear approximation to an underlying deep model, and discuss important theoretical insights. Moreover, we present an efficient algorithm for the training and inference of our model without computing the actual gradients. Our method is evaluated across a number of representation-learning tasks on several datasets and using different network architectures. Strong results are obtained in all settings, and are well-aligned with our theoretical insights.""","""The paper makes a reasonable contribution to extracting useful features from a pre-trained neural network. The approach is conceptually simple and sufficient evidence is provided of its effectiveness. In addition to the connection to tangent kernels there also appears to be a relationship to holographic feature representations of deep networks. The authors did do a reasonable job of providing additional ablation studies, but the paper would be improved if a clearer study were added to investigate applying the technique to different layers. All of the reviewer comments appear worthwhile, but AnonReviewer2 in particular provides important guidance for improving the paper."""
1717,"""Learning Boolean Circuits with Neural Networks""","['neural-networks', 'deep learning theory']","""Training neural-networks is computationally hard. However, in practice they are trained efficiently using gradient-based algorithms, achieving remarkable performance on natural data. To bridge this gap, we observe the property of local correlation: correlation between small patterns of the input and the target label. We focus on learning deep neural-networks with a variant of gradient-descent, when the target function is a tree-structured Boolean circuit. We show that in this case, the existence of correlation between the gates of the circuit and the target label determines whether the optimization succeeds or fails. Using this result, we show that neural-networks can learn the (log n)-parity problem for most product distributions. These results hint that local correlation may play an important role in differentiating between distributions that are hard or easy to learn.""","""The paper puts forward a theoretical investigation of the learnability of tree-structured Boolean circuits with neural networks. The authors identify *local correlations*, ie correlation of each internal target circuit gate with the target output, as critical property for characterizing learnability by layerwise training. The reviewers agree that the paper is well written and content to be correct (to the best of their knowledge). However, they have reservations about the strength of the assumptions about the target functions as well as the layerwise training procedure. I think this paper is slightly below acceptance threshold for ICLR, which is a quite applied conference. The assumptions are quite strong, ie local correlations and the topology of the circuit to be known as well as layerwise training, and possibly too far removed from current deep learning practice."""
1718,"""Evo-NAS: Evolutionary-Neural Hybrid Agent for Architecture Search""",[],"""Neural Architecture Search has shown potential to automate the design of neural networks. Deep Reinforcement Learning based agents can learn complex architectural patterns, as well as explore a vast and compositional search space. On the other hand, evolutionary algorithms offer higher sample efficiency, which is critical for such a resource intensive application. In order to capture the best of both worlds, we propose a class of Evolutionary-Neural hybrid agents (Evo-NAS). We show that the Evo-NAS agent outperforms both neural and evolutionary agents when applied to architecture search for a suite of text and image classification benchmarks. On a high-complexity architecture search space for image classification, the Evo-NAS agent surpasses the accuracy achieved by commonly used agents with only 1/3 of the search cost.""","""Thanks to the authors for the revision and discussion. This paper provides a neural architecture search (NAS) method, called Evolutionary-Neural hybrid agents (Evo-NAS), which combines NN-based NAS and Aging EVO. While the authors' response addressed some of the reviewers' comments, during discussion period there is a new concern that the idea proposed here highly overlaps with the method of RENAS, which stands for Reinforced Evolutionary Neural Architecture Search. Reviewers acknowledge that this might discount the novelty of the paper. Overall, there is not sufficient support for acceptance."""
1719,"""Fairness with Wasserstein Adversarial Networks""",[],"""Quantifying, enforcing and implementing fairness emerged as a major topic in machine learning. We investigate these questions in the context of deep learning. Our main algorithmic and theoretical tool is the computational estimation of similarities between probability, ``\`a la Wasserstein'', using adversarial networks. This idea is flexible enough to investigate different fairness constrained learning tasks, which we model by specifying properties of the underlying data generative process. The first setting considers bias in the generative model which should be filtered out. The second model is related to the presence of nuisance variables in the observations producing an unwanted bias for the learning task. For both models, we devise a learning algorithm based on approximation of Wasserstein distances using adversarial networks. We provide formal arguments describing the fairness enforcing properties of these algorithm in relation with the underlying fairness generative processes. Finally we perform experiments, both on synthetic and real world data, to demonstrate empirically the superiority of our approach compared to state of the art fairness algorithms as well as concurrent GAN type adversarial architectures based on Jensen divergence.""","""This paper presents an approach to enforce statistical fairness notions using adversarial networks. The reviewers point out several issues of the paper, including 1) their approach does not provably enforce criteria such as demographic parity, 2) lack of novelty and 3) poor presentation."""
1720,"""Entropy Minimization In Emergent Languages""",['language emergence'],"""There is a growing interest in studying the languages emerging when neural agents are jointly trained to solve tasks requiring communication through a discrete channel. We investigate here the information-theoretic complexity of such languages, focusing on the basic two-agent, one-exchange setup. We find that, under common training procedures, the emergent languages are subject to an entropy minimization pressure that has also been detected in human language, whereby the mutual information between the communicating agent's inputs and the messages is minimized, within the range afforded by the need for successful communication. This pressure is amplified as we increase communication channel discreteness. Further, we observe that stronger discrete-channel-driven entropy minimization leads to representations with increased robustness to overfitting and adversarial attacks. We conclude by discussing the implications of our findings for the study of natural and artificial communication systems.""","""This paper studies the information-theoretic complexity for emergent languages when pairs of neural networks are trained to solve a two player communication game. One of the primary claims of the paper was that under common training protocols, networks were biased towards low entropy solutions. During the discussion period, one reviewer shared an ipython notebook investigating the experiments shown in Figure 1. There it was discovered that low entropy solutions were only obtained for networks which were themselves initialized at low entropy configurations. When networks are initialized at high entropy configurations, the converged solution would remain high entropy. This experiment raises questions about the validity of the claim that there was ""pressure"" towards low entropy solutions to the task. Therefore, a more careful analysis of the phenomenon is required. """
1721,"""Regional based query in graph active learning""","['Active Learning', 'Graph Convolution Networks', 'Graph', 'Graph Topology']","""Graph convolution networks (GCN) have emerged as a leading method to classify nodes and graphs. These GCN have been combined with active learning (AL) methods, when a small chosen set of tagged examples can be used. Most AL-GCN use the sample class uncertainty as selection criteria, and not the graph. In contrast, representative sampling uses the graph, but not the prediction. We propose to combine the two and query nodes based on the uncertainty of the graph around them. We here propose two novel methods to select optimal nodes in AL-GCN that explicitly use the graph information to query for optimal nodes. The first method named regional uncertainty is an extension of the classical entropy measure, but instead of sampling nodes with high entropy, we propose to sample nodes surrounded by nodes of different classes, or nodes with high ambiguity. The second method called Adaptive Page-Rank is an extension of the page-rank algorithm, where nodes that have a low probability of being reached by random walks from tagged nodes are selected. We show that the latter is optimal when the fraction of tagged nodes is low, and when this fraction grows to one over the average degree, the regional uncertainty performs better than all existing methods. While we have tested these methods on graphs, such methods can be extended to any classification problem, where a distance can be defined between the input samples.""","""The paper proposes a method for performing active learning on graph convolutional networks. In particular, instead of performing uncertainty-based sampling based on an individual node level, the authors propose to look at regional based uncertainty. They propose an efficient algorithm based on page rank. Empirically, they compare their method to several other leading methods, comparing favorably. Reviewers found the work poorly organized and difficult to read. The idea to use region based estimates is intuitive but feels like nothing more than just that. It's not clear if there is a mathematical basis to justify such a method (e.g. an analysis of sample complexity as has been accomplished in other graph active learning problems, Dasarathy, Nowak, Zhu 2015). The idea requires further study and justification, and the paper needs an improved exposition. Finally, the authors were not anonymized on the PDF. """
1722,"""Deep unsupervised feature selection""","['Single cell rna', 'microarray', 'feature selection', 'feature ranking']","""Unsupervised feature selection involves finding a small number of highly informative features, in the absence of a specific supervised learning task. Selecting a small number of features is an important problem in many scientific domains with high-dimensional observations. Here, we propose the restricted autoencoder (RAE) framework for selecting features that can accurately reconstruct the rest of the features. We justify our approach through a novel proof that the reconstruction ability of a set of features bounds its performance in downstream supervised learning tasks. Based on this theory, we present a learning algorithm for RAEs that iteratively eliminates features using learned per-feature corruption rates. We apply the RAE framework to two high-dimensional biological datasetssingle cell RNA sequencing and microarray gene expression data, which pose important problems in cell biology and precision medicineand demonstrate that RAEs outperform nine baseline methods, often by a large margin.""","""This paper proposes Restricted AutoEncoders (REAs) for unsupervised feature selection, and applies and evaluates it in applications in biology. The paper was reviewed by three experts. R1 recommends Weak Reject, identifying some specific technical concerns as well as questions about missing and unclear experimental details. R2 recommends Reject, with concerns about limited novelty and unconvincing experimental results. R3 recommends Weak Accept saying that the overall idea is good, but also feels the contribution is ""severely undermined"" by a recently-published paper that proposes a very similar approach. Given that that paper (at ECMLPKDD 2019) was presented just one week before the deadline for ICLR, we would not have expected the authors to cite the paper. Nevertheless, given the concerns expressed by the other reviewers and the lack of an author response to help clarify the novelty, technical concerns, and missing details, we are not able to recommend acceptance. We believe the paper does have significant merit and hope that the reviewer comments will help authors in preparing a revision for another venue."""
1723,"""Deeper Insights into Weight Sharing in Neural Architecture Search""","['Neural Architecture Search', 'NAS', 'AutoML', 'AutoDL', 'Deep Learning', 'Machine Learning']","""With the success of deep neural networks, Neural Architecture Search (NAS) as a way of automatic model design has attracted wide attention. As training every child model from scratch is very time-consuming, recent works leverage weight-sharing to speed up the model evaluation procedure. These approaches greatly reduce computation by maintaining a single copy of weights on the super-net and share the weights among every child model. However, weight-sharing has no theoretical guarantee and its impact has not been well studied before. In this paper, we conduct comprehensive experiments to reveal the impact of weight-sharing: (1) The best-performing models from different runs or even from consecutive epochs within the same run have significant variance; (2) Even with high variance, we can extract valuable information from training the super-net with shared weights; (3) The interference between child models is a main factor that induces high variance; (4) Properly reducing the degree of weight sharing could effectively reduce variance and improve performance.""","""This paper provides a series of empirical evaluations on a small neural architecture search space with 64 architectures. The experiments are interesting, but limited in scope and limited to 64 architectures trained on CIFAR-10. It is unclear whether lessons learned on this search space would transfer to large search spaces. One upside is that code is available, making the work reproducible. All reviewers read the rebuttal and participated in the private discussion of reviewers and AC, but none of them changed their mind. All gave a weak rejection score. I agree with this assessment and therefore recommend rejection."""
1724,"""CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning""","['Video Understanding', 'Temporal Reasoning']","""Computer vision has undergone a dramatic revolution in performance, driven in large part through deep features trained on large-scale supervised datasets. However, much of these improvements have focused on static image analysis; video understanding has seen rather modest improvements. Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive. We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure. In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved. Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning. In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable. Using CATER, we provide insights into some of the most recent state of the art deep video architectures.""","""The paper proposed a new synthetically generated video dataset (CATER) for benchmarking temporal reasoning. The dataset is based on the CLEVR dataset and provides videos make up of primitive actions (""rotate"", ""pick-place"", ""slide"", ""contain"") that can be combined to form for complex actions. The paper also benchmarks a variety of methods on three proposed tasks (atomic action classification, composite action classification, and 'snitch' localization) and demonstrates that while it is possible to get high performance on atomic action classification, the other two task are still challenging and requires temporal modeling. Overall, all reviewers found the paper to be well written and easy to follow, with care given to the dataset construction, as well as the task definitions and experiment setup and analysis. The paper received strong scores from all reviewers (3 accepts). Based on the reviewer comments, the authors further improved the paper by adding additional relevant datasets for comparison and providing missing details pointed out by the reviewers. After the rebuttal, the reviewers remained positive."""
1725,"""Consistent Meta-Reinforcement Learning via Model Identification and Experience Relabeling""","['Meta-Reinforcement Learning', 'Reinforcement Learning', 'Off-Policy', 'Model Based']","""Reinforcement learning algorithms can acquire policies for complex tasks automatically, however the number of samples required to learn a diverse set of skills can be prohibitively large. While meta-reinforcement learning has enabled agents to leverage prior experience to adapt quickly to new tasks, the performance of these methods depends crucially on how close the new task is to the previously experienced tasks. Current approaches are either not able to extrapolate well, or can do so at the expense of requiring extremely large amounts of data due to on-policy training. In this work, we present model identification and experience relabeling (MIER), a meta-reinforcement learning algorithm that is both efficient and extrapolates well when faced with out-of-distribution tasks at test time based on a simple insight: we recognize that dynamics models can be adapted efficiently and consistently with off-policy data, even if policies and value functions cannot. These dynamics models can then be used to continue training policies for out-of-distribution tasks without using meta-reinforcement learning at all, by generating synthetic experience for the new task. ""","""The authors propose an algorithm for meta-rl which reduces the problem to one of model identification. The main idea is to meta-train a fast-adapting model of the environment and a shared policy, both conditioned on task-specific context variables. At meta-testing, only the model is adapted using environment data, while the policy simply requires simulated experience. Finally, the authors show experimentally that this procedure better generalizes to out-of-distribution tasks than similar methods. The reviewers agree that the paper has a few significant shortcomings. It's unclear how hyper-parameters are selected in the experimental section; the algorithm does not allow for continual adaptation; all policy learning is done through data relabelled by the model. Overall, the problem the paper addresses is very important, but we do not deem the paper publishable in its current form."""
1726,"""Out-of-Distribution Detection Using Layerwise Uncertainty in Deep Neural Networks""","['out-of-distribution', 'uncertainty']","""In this paper, we tackle the problem of detecting samples that are not drawn from the training distribution, i.e., out-of-distribution (OOD) samples, in classification. Many previous studies have attempted to solve this problem by regarding samples with low classification confidence as OOD examples using deep neural networks (DNNs). However, on difficult datasets or models with low classification ability, these methods incorrectly regard in-distribution samples close to the decision boundary as OOD samples. This problem arises because their approaches use only the features close to the output layer and disregard the uncertainty of the features. Therefore, we propose a method that extracts the uncertainties of features in each layer of DNNs using a reparameterization trick and combines them. In experiments, our method outperforms the existing methods by a large margin, achieving state-of-the-art detection performance on several datasets and classification models. For example, our method increases the AUROC score of prior work (83.8%) to 99.8% in DenseNet on the CIFAR-100 and Tiny-ImageNet datasets.""","""The paper proposes a method for OOD detection which leverages the uncertainties associated with the features at the intermediate layers (and not just the output layer). All the reviewers agreed that while this is an interesting direction, the paper requires more work before it can be accepted. In particular, the reviewers raised several concerns about other relevant baselines, some of the reported empirical results, and clarity of the explanation. I encourage the authors to revise the draft based on the reviewers feedback and resubmit to a different venue. """
1727,"""Intriguing Properties of Adversarial Training at Scale""","['adversarial defense', 'adversarial machine learning']","""Adversarial training is one of the main defenses against adversarial attacks. In this paper, we provide the first rigorous study on diagnosing elements of large-scale adversarial training on ImageNet, which reveals two intriguing properties. First, we study the role of normalization. Batch normalization (BN) is a crucial element for achieving state-of-the-art performance on many vision tasks, but we show it may prevent networks from obtaining strong robustness in adversarial training. One unexpected observation is that, for models trained with BN, simply removing clean images from training data largely boosts adversarial robustness, i.e., 18.3%. We relate this phenomenon to the hypothesis that clean images and adversarial images are drawn from two different domains. This two-domain hypothesis may explain the issue of BN when training with a mixture of clean and adversarial images, as estimating normalization statistics of this mixture distribution is challenging. Guided by this two-domain hypothesis, we show disentangling the mixture distribution for normalization, i.e., applying separate BNs to clean and adversarial images for statistics estimation, achieves much stronger robustness. Additionally, we find that enforcing BNs to behave consistently at training and testing can further enhance robustness. Second, we study the role of network capacity. We find our so-called ""deep"" networks are still shallow for the task of adversarial learning. Unlike traditional classification tasks where accuracy is only marginally improved by adding more layers to ""deep"" networks (e.g., ResNet-152), adversarial training exhibits a much stronger demand on deeper networks to achieve higher adversarial robustness. This robustness improvement can be observed substantially and consistently even by pushing the network capacity to an unprecedented scale, i.e., ResNet-638. ""","""This paper studies the properties of adversarial training in the large scale setting. The reviewers found the properties identified by the paper to be of interest to the ICLR community - in particular the robustness community. We encourage the authors to release their models to help jumpstart future work building on this study."""
1728,"""Axial Attention in Multidimensional Transformers""","['self-attention', 'transformer', 'images', 'videos']","""Self-attention effectively captures large receptive fields with high information bandwidth, but its computational resource requirements grow quadratically with the number of points over which attention is performed. For data arranged as large multidimensional tensors, such as images and videos, the quadratic growth makes self-attention prohibitively expensive. These tensors often have thousands of positions that one wishes to capture and proposed attentional alternatives either limit the resulting receptive field or require custom subroutines. We propose Axial Attention, a simple generalization of self-attention that naturally aligns with the multiple dimensions of the tensors in both the encoding and the decoding settings. The Axial Transformer uses axial self-attention layers and a shift operation to efficiently build large and full receptive fields. Notably the proposed structure of the layers allows for the vast majority of the context to be computed in parallel during decoding without introducing any independence assumptions. This semi-parallel structure goes a long way to making decoding from even a very large Axial Transformer broadly applicable. We demonstrate state-of-the-art results for the Axial Transformer on the ImageNet-32 and ImageNet-64 image benchmarks as well as on the BAIR Robotic Pushing video benchmark. We open source the implementation of Axial Transformers.""","""This paper proposes a self-attention-based autoregressive model called Axial Transformers for images and other data organized as high dimensional tensors. The Axial Attention is applied within each axis of the data to accelerate the processing. Most of the authors claim that main idea behind Axial Attention is widely applicable, which can be used in many core vision tasks, such as detection and classification. However, the revision fails to provide more application for Axial attention. Overall, the idea behind this paper is interesting but more convincing experimental results are needed. """
1729,"""Improving Sequential Latent Variable Models with Autoregressive Flows""","['Autoregressive Flows', 'Sequence Modeling', 'Latent Variable Models', 'Video Modeling', 'Variational Inference']","""We propose an approach for sequence modeling based on autoregressive normalizing flows. Each autoregressive transform, acting across time, serves as a moving reference frame for modeling higher-level dynamics. This technique provides a simple, general-purpose method for improving sequence modeling, with connections to existing and classical techniques. We demonstrate the proposed approach both with standalone models, as well as a part of larger sequential latent variable models. Results are presented on three benchmark video datasets, where flow-based dynamics improve log-likelihood performance over baseline models.""","""The paper scores low on novelty. The experiments and model analysis are not very strong."""
1730,"""A Mutual Information Maximization Perspective of Language Representation Learning""",[],"""We show state-of-the-art word representation learning methods maximize an objective function that is a lower bound on the mutual information between different parts of a word sequence (i.e., a sentence). Our formulation provides an alternative perspective that unifies classical word embedding models (e.g., Skip-gram) and modern contextual embeddings (e.g., BERT, XLNet). In addition to enhancing our theoretical understanding of these methods, our derivation leads to a principled framework that can be used to construct new self-supervised tasks. We provide an example by drawing inspirations from related methods based on mutual information maximization that have been successful in computer vision, and introduce a simple self-supervised objective that maximizes the mutual information between a global sentence representation and n-grams in the sentence. Our analysis offers a holistic view of representation learning methods to transfer knowledge and translate progress across multiple domains (e.g., natural language processing, computer vision, audio processing).""","""This paper explores several embedding models (Skip-gram, BERT, XLNet) and describes a framework for comparing, and in the end, unifying them. The framework is such that it actually suggests new ways of creating embeddings, and draws connections to methodology from computer vision. One of the reviewers had several questions about the derivations in your paper and was worried about the paper's clarity. But all of the reviewers appreciated the contributions of the paper, which joins multiple seemingly disparite models under into one theoretical framework. The reviewers were positive about the paper, and in particular were happy to see the active response of authors to their questions and willingness to update the paper with their suggested improvements."""
1731,"""Gradient Perturbation is Underrated for Differentially Private Convex Optimization""","['minimum curvature', 'gradient perturbation', 'DP-GD', 'DP-SGD']","""Gradient perturbation, widely used for differentially private optimization, injects noise at every iterative update to guarantee differential privacy. Previous work first determines the noise level that can satisfy the privacy requirement and then analyzes the utility of noisy gradient updates as in non-private case. In this paper, we explore how the privacy noise affects the optimization property. We show that for differentially private convex optimization, the utility guarantee of both DP-GD and DP-SGD is determined by an \emph{expected curvature} rather than the minimum curvature. The \emph{expected curvature} represents the average curvature over the optimization path, which is usually much larger than the minimum curvature and hence can help us achieve a significantly improved utility guarantee. By using the \emph{expected curvature}, our theory justifies the advantage of gradient perturbation over other perturbation methods and closes the gap between theory and practice. Extensive experiments on real world datasets corroborate our theoretical findings.""","""In this paper, the authors showed that for differentially private convex optimization, the utility guarantee of both DP-GD and DP-SGD is determined by the expected curvature rather than the worst-case minimum curvature. Based on this motivation, the authors justified the advantage of gradient perturbation over other perturbation methods. This is a borderline paper, and has been discussed after author response. The main concerns of this paper include (1) the authors failed to show any loss function that can satisfy the expected curvature inequality; (2) the contribution of this paper is limited, since all the proofs in the paper are just small tweak of existing proofs; (3) this paper does not really improve any existing gradient perturbation based differentially private methods. Due to the above concerns, I have to recommend reject."""
1732,"""DeepAGREL: Biologically plausible deep learning via direct reinforcement""","['biologically plausible deep learning', 'reinforcement learning', 'feedback gating', 'image claassification']","""While much recent work has focused on biologically plausible variants of error-backpropagation, learning in the brain seems to mostly adhere to a reinforcement learning paradigm; biologically plausible neural reinforcement learning frameworks, however, were limited to shallow networks learning from compact and abstract sensory representations. Here, we show that it is possible to generalize such approaches to deep networks with an arbitrary number of layers. We demonstrate the learning scheme - DeepAGREL - on classical and hard image-classification benchmarks requiring deep networks, namely MNIST, CIFAR10, and CIFAR100, cast as direct reward tasks, both for deep fully connected, convolutional and locally connected architectures. We show that for these tasks, DeepAGREL achieves an accuracy that is equal to supervised error-backpropagation, and the trial-and-error nature of such learning imposes only a very limited cost in terms of training time. Thus, our results provide new insights into how deep learning may be implemented in the brain. ""","""The paper proposes an RL-based algorithm for training neural networks that is able to match the performance of backprop on CIFAR and MNIST datasets. The reviewers generally found the algorithm and motivations interesting, but some had issue with the imprecision of the notion of ""biologically plausible"" used by the authors. One reviewer had issues with missing discussion of related work and also doubts about the meaningfulness of the experiments, since the networks were quite shallow. For this type of paper, clarity and precision of exposition is crucial in my opinion, and so I recommend rejection at this time, but encourage the authors to take the feedback into account and resubmit to a future venue."""
1733,"""End-To-End Input Selection for Deep Neural Networks""","['Deep Learning', 'Input Selection', 'Gumbel Softmax Trick', 'Remote Sensing', 'Feature Selection']","""Data have often to be moved between servers and clients during the inference phase. This is the case, for instance, when large amounts of data are stored on a public storage server without the possibility for the users to directly execute code and, hence, apply machine learning models. Depending on the available bandwidth, this data transfer can become a major bottleneck. We propose a simple yet effective framework that allows to select certain parts of the input data needed for the subsequent application of a given neural network. Both the associated selection masks as well as the neural network are trained simultaneously such that a good model performance is achieved while, at the same time, only a minimal amount of data is selected. During the inference phase, only the parts selected by the masks have to be transferred between the server and the client. Our experiments indicate that it is often possible to significantly reduce the amount of data needed to be transferred without affecting the model performance much.""","""This paper proposes to address the high bandwidth cost when transferring data between server and user for machine learning applications. The input data is augment with channel and spatial mask so that the file transfer cost is reduced. While the reviewers agree that this is a well motivated and interesting problem to study, a number of concerns are raised, including loosely specified performance/size trade-off, how this work is compared to related work, low novelty relative to a few key missing references. The authors respond to Reviewers concerns but did not change the rating. The ACs concur the concerns and the paper can not be accepted at its current state."""
1734,"""Demystifying Inter-Class Disentanglement""","['disentanglement', 'latent optimization', 'domain translation']","""Learning to disentangle the hidden factors of variations within a set of observations is a key task for artificial intelligence. We present a unified formulation for class and content disentanglement and use it to illustrate the limitations of current methods. We therefore introduce LORD, a novel method based on Latent Optimization for Representation Disentanglement. We find that latent optimization, along with an asymmetric noise regularization, is superior to amortized inference for achieving disentangled representations. In extensive experiments, our method is shown to achieve better disentanglement performance than both adversarial and non-adversarial methods that use the same level of supervision. We further introduce a clustering-based approach for extending our method for settings that exhibit in-class variation with promising results on the task of domain translation.""","""This paper proposes a novel method for class-supervised disentangled representation learning. The method augments an autoencoder with asymmetric noise regularisation and is able to disentangled content (class) and style information from each other. The reviewers agree that the method achieves impressive empirical results and significantly outperforms the baselines. Furthermore, the authors were able to alleviate some of the initial concerns raised by the reviewers during the discussion stage by providing further experimental results and modifying the paper text. By the end of the discussion period some of the reviewers raised their scores and everyone agreed that the paper should be accepted. Hence, I am happy to recommend acceptance."""
1735,"""Event extraction from unstructured Amharic text""","['Event extraction', 'machine learning classifiers', 'Nominal events']","""In information extraction, event extraction is one of the tasks that extract specific knowledge of certain incidents from texts. Event extraction has been performed on texts in different languages, but not on Amharic, one of the Semitic languages. In this study, we present a system that extracts events from unstructured Amharic text. The system is designed by integrating supervised machine learning and rule-based approaches; we call it a hybrid system. The model from the supervised machine learning detects events in the text, and then the handcrafted rules of the rule-based component extract the events from the text. The hybrid system has been compared with the standalone rule-based method that is well known for event extraction. The study shows that the hybrid system outperforms the standalone rule-based method. For event extraction, we extract event arguments. Event arguments identify event-triggering words or phrases that clearly express the occurrence of the event. The event argument attributes can be verbs, nouns, and occasionally adjectives (e.g., /wedding), as well as time expressions.""","""This paper performs event extraction from Amharic texts. To this end, the authors prepared a novel Amharic corpus and used a hybrid system of rule-based and learning-based components. Overall, while all reviewers admit the importance of addressing a low-resource language and the value of the novel Amharic corpus, they are not satisfied with the quality of the current paper as a scientific work. Most importantly, although the attempt at event extraction might be new for Amharic, there have been many works on other languages. It should be clearly presented what the non-trivial language-specific challenges in Amharic are and how they are solved; otherwise it seems to be just an application of existing techniques to a new dataset. Also, all reviewers are fairly concerned about the presentation and clarity of the paper. Unfortunately, no revised paper was uploaded, and we cannot confirm how the authors' responses are reflected. For these reasons, I would like to recommend rejection."""
1736,"""Assessing Generalization in TD methods for Deep Reinforcement Learning""","['reinforcement learning', 'deep learning', 'generalization']","""Current Deep Reinforcement Learning (DRL) methods can exhibit both data inefficiency and brittleness, which seem to indicate that they generalize poorly. In this work, we experimentally analyze this issue through the lens of memorization, and show that it can be observed directly during training. More precisely, we find that Deep Neural Networks (DNNs) trained with supervised tasks on trajectories capture temporal structure well, but DNNs trained with TD(0) methods struggle to do so, while using TD(lambda) targets leads to better generalization.""","""This paper received three reviews. R1 recommends Weak Reject, and identifies a variety of concerns about the motivation, presentation, clarity and soundness of results, and experimental design (e.g. choice of metrics). In a short review, R2 recommends Weak Accept, but indicates they are not an expert in this area. R3 also recommends Weak Accept, but identifies concerns also centering around clarity and completeness of the paper as well as some specific technical questions. In their response, the authors address these issues, and have a constructive back-and-forth conversation with R1, who remains unconvinced about the significance of the empirical results and thus the conclusion of the overall paper. After the discussion period, R3 indicated that they weakly favored acceptance but agreed that the paper had significant presentation issues and would not strongly advocate for it. R1 advocated for Reject, given the concerns identified in their reviews and followup comments. Given the split decision, the AC also read the paper. While the work clearly has merit, we agree with R1's comment that it is overall a ""potentially interesting idea, but the justification and presentation/quantification of results is not good enough in the submitted paper,"" and feel the paper really needs a revision and another round of peer review before publication. """
1737,"""In-Domain Representation Learning For Remote Sensing""","['Representation learning', 'remote sensing']","""Given the importance of remote sensing, surprisingly little attention has been paid to it by the representation learning community. To address this gap and to speed up innovation in this domain, we provide simplified access to 5 diverse remote sensing datasets in a standardized form. We specifically explore in-domain representation learning and address the question of ""what characteristics should a dataset have to be a good source for remote sensing representation learning"". The established baselines achieve state-of-the-art performance on these datasets. ""","""As the reviewers point out, this paper requires a major revision and fleshing out of the claimed contribution before it is suitable for conference presentation."""
1738,"""Learning Calibratable Policies using Programmatic Style-Consistency""","['imitation learning', 'conditional generation', 'data programming']","""We study the important and challenging problem of controllable generation of long-term sequential behaviors. Solutions to this problem would impact many applications, such as calibrating behaviors of AI agents in games or predicting player trajectories in sports. In contrast to the well-studied areas of controllable generation of images, text, and speech, there are significant challenges that are unique to or exacerbated by generating long-term behaviors: how should we specify the factors of variation to control, and how can we ensure that the generated temporal behavior faithfully demonstrates diverse styles? In this paper, we leverage large amounts of raw behavioral data to learn policies that can be calibrated to generate a diverse range of behavior styles (e.g., aggressive versus passive play in sports). Inspired by recent work on leveraging programmatic labeling functions, we present a novel framework that combines imitation learning with data programming to learn style-calibratable policies. Our primary technical contribution is a formal notion of style-consistency as a learning objective, and its integration with conventional imitation learning approaches. We evaluate our framework using demonstrations from professional basketball players and agents in the MuJoCo physics environment, and show that our learned policies can be accurately calibrated to generate interesting behavior styles in both domains.""","""The reviewers generally reached a consensus that the work is not quite ready for acceptance in its current form. The central concerns were about the potentially limited novelty of the method, and the fact that it was not quite clear how good the annotations needed to be (or how robust the method would be to imperfect annotations). This, combined with an evaluation scenario that is non-standard and requires some guesswork to understand its difficulty, leaves one with the impression that it is not quite clear from the experiments whether the method really works well. I would recommend for the authors to improve the evaluation in the next submission."""
1739,"""DyNet: Dynamic Convolution for Accelerating Convolution Neural Networks""","['CNNs', 'dynamic convolution kernel']","""The convolution operator is the core of convolutional neural networks (CNNs) and accounts for most of their computation cost. To make CNNs more efficient, many methods have been proposed to either design lightweight networks or compress models. Although some efficient network structures have been proposed, such as MobileNet or ShuffleNet, we find that there still exists redundant information between convolution kernels. To address this issue, we propose a novel dynamic convolution method named \textbf{DyNet} in this paper, which can adaptively generate convolution kernels based on image contents. To demonstrate the effectiveness, we apply DyNet to multiple state-of-the-art CNNs. The experimental results show that DyNet can reduce the computation cost remarkably, while keeping the performance nearly unchanged. Specifically, for ShuffleNetV2 (1.0), MobileNetV2 (1.0), ResNet18 and ResNet50, DyNet reduces FLOPs by 40.0%, 56.7%, 68.2% and 72.4% respectively, while the Top-1 accuracy on ImageNet only changes by +1.0%, -0.27%, -0.6% and -0.08%. Meanwhile, DyNet further accelerates the inference speed of MobileNetV2 (1.0), ResNet18 and ResNet50 by 1.87x, 1.32x and 1.48x on the CPU platform, respectively. To verify the scalability, we also apply DyNet to a segmentation task; the results show that DyNet can reduce FLOPs by 69.3% while maintaining the mean IoU.""","""The paper proposes the use of dynamic convolution kernels, computed as a linear combination of static kernels and fused after training, as a way to reduce inference computation cost. The authors evaluated the proposed method on a variety of models and showed good FLOPs reduction while maintaining accuracy. The main concern for this paper is the limited novelty. There have been many works using dynamic convolutions, as pointed out by all the reviewers. The most similar ones are SENet and soft conditional computation. Although the authors claim that soft conditional computation ""focus on using more parameters to make models to be more expressive while we focus on reducing redundant calculations"", the methods are essentially the same; moreover, the abstract of soft conditional computation states that ""CondConv improves the performance and inference cost trade-off""."""
1740,"""Accelerated Information Gradient flow""","['Optimal transport', 'Information geometry', 'Nesterov accelerated gradient method']","""We present a systematic framework for Nesterov's accelerated gradient flows in the spaces of probabilities embedded with information metrics. Here two metrics are considered, including both the Fisher-Rao metric and the Wasserstein- pseudo-formula metric. For the Wasserstein- pseudo-formula metric case, we prove the convergence properties of the accelerated gradient flows, and introduce their formulations in Gaussian families. Furthermore, we propose a practical discrete-time algorithm in particle implementations with an adaptive restart technique. We formulate a novel bandwidth selection method, which learns the Wasserstein- pseudo-formula gradient direction from Brownian-motion samples. Experimental results, including Bayesian inference, show the strength of the proposed method compared with the state-of-the-art.""","""The paper makes its contribution by deriving an accelerated gradient flow for the Wasserstein distances. It is technically strong and demonstrates its applicability using examples of Gaussian distributions and logistic regression. Reviewer 3 provided a deep technical assessment, pointing out the relevance to our ML community since these ideas are not yet widespread, but had concerns about the clarity of the paper. Reviewer 2 had similar concerns about clarity, and was also positive about its relevance to the ML community. The authors provided detailed responses to the technical questions posed by the reviewers. The AC believes that such work is a good fit for the conference. The reviewers felt that this paper does not yet achieve the aim of making this work more widespread and needs more focus on communication. This is a strong paper and the authors are encouraged to address the accessibility questions. We hope the review offers useful points of feedback for their future work."""
1741,"""Learning Curves for Deep Neural Networks: A field theory perspective""","['Gaussian Processes', 'Neural Tangent Kernel', 'Learning Curves', 'Field Theory', 'Statistical Mechanics', 'Generalization', 'Deep neural networks']","""A series of recent works established a rigorous correspondence between very wide deep neural networks (DNNs), trained in a particular manner, and noiseless Bayesian Inference with a certain Gaussian Process (GP) known as the Neural Tangent Kernel (NTK). Here we extend a known field-theory formalism for GP inference to get a detailed understanding of learning curves in DNNs trained in the regime of this correspondence (NTK regime). In particular, a renormalization-group approach is used to show that noiseless GP inference using the NTK, which lacks a good analytical handle, can be well approximated by noisy GP inference on a related kernel we call the renormalized NTK. Following this, a perturbation-theory analysis is carried out in one over the dataset size, yielding analytical expressions for the (fixed-teacher/fixed-target) leading and sub-leading asymptotics of the learning curves. At least for uniform datasets, a coherent picture emerges wherein fully-connected DNNs have a strong implicit bias towards functions which are low-order polynomials of the input. ""","""This paper studies deep neural network (DNN) learning curves by leveraging recent connections of (wide) DNNs to kernel methods such as Gaussian processes. The bulk of the arguments contained in this paper are, thus, for the ""kernel regime"" rather than ""the problem of non-linearity in DNNs"", as one reviewer puts it. When it comes to scoring, this paper has been controversial. However, a lot of discussion has taken place. On the positive side, it seems that there are a lot of novel perspectives included in this paper. On the other hand, even after the revision, it seems that this paper is still very difficult to follow for non-physicists. Overall, it would be beneficial to perform a more careful revision of the paper such that it can be better appreciated by the targeted scientific community. """
1742,"""Decoupling Weight Regularization from Batch Size for Model Compression""","['Model compression', 'Weight Regularization', 'Batch Size', 'Gradient Descent']","""Conventionally, compression-aware training performs weight compression for every mini-batch to compute the impact of compression on the loss function. In this paper, in order to determine the right time to compress weights during optimization, we propose a new hyper-parameter called the Non-Regularization period, or NR period, during which weights are not updated for regularization. We first investigate the influence of the NR period on regularization using weight decay and weight random noise insertion. Throughout various experiments, we show that stronger weight regularization demands a longer NR period (regardless of batch size) to best utilize regularization effects. From our empirical evidence, we argue that weight regularization for every mini-batch allows only small weight updates and limited regularization effects, such that there is a need to search for the right NR period and weight regularization strength to enhance model accuracy. Consequently, the NR period becomes especially crucial for model compression, where large weight updates are necessary to increase the compression ratio. Using various models, we show that simple weight updates to comply with compression formats, along with a long NR period, are enough to achieve a high compression ratio and model accuracy.""","""This paper proposes to apply regularizers such as weight decay or weight noise only periodically, rather than every epoch. It investigates how the ""non-regularization period"", or period between regularization steps, interacts with other hyperparameters. Overall, the writing feels somewhat scattered, and it is hard to identify a clear argument for why the NRP should help. Certainly one could save computation this way, but regularizers like weight decay or weight noise incur only a small computational cost anyway. One explicit claim from the paper is that a higher NRP allows larger regularization. There's a sense in which this is demonstrated, though not a very interesting sense: Figure 4 shows that the weight decay strength should be adjusted proportionally to the NRP. But varying the parameters in this way simply results in an unbiased (but noisier) estimate of gradients of exactly the same regularization penalty, so I don't think there's much surprising here. Similarly, Section 3 argues that a higher NRP allows for larger stochastic perturbations, which makes it easier to escape local optima. But this isn't demonstrated experimentally, nor does it seem obvious that stochasticity will help find a better local optimum. Overall, I think this paper needs substantial cleanup before it's ready to be published at a venue such as ICLR. """
1743,"""Out-of-distribution Detection in Few-shot Classification""","['few-shot classification', 'out-of-distribution detection', 'uncertainty estimate']","""In many real-world settings, a learning model must perform few-shot classification: learn to classify examples from unseen classes using only a few labeled examples per class. Additionally, to be safely deployed, it should have the ability to detect out-of-distribution inputs: examples that do not belong to any of the classes. While both few-shot classification and out-of-distribution detection are popular topics, their combination has not been studied. In this work, we propose tasks for out-of-distribution detection in the few-shot setting and establish benchmark datasets, based on four popular few-shot classification datasets. Then, we propose two new methods for this task and investigate their performance. In sum, we establish baseline out-of-distribution detection results using standard metrics on new benchmark datasets and show improved results with our proposed methods.""","""This paper presents a method for out-of-distribution detection under the condition of access to only a few positive labeled samples. The main contribution, as summarized by reviewers and authors, is the newly proposed benchmark and problem statement. All reviewers are in agreement that this paper is not ready for publication in its current form. The main concern is around the validity of the problem statement. The reviewers seek more clarity motivating the proposed scenario. Though the authors argue that few-shot recognition is very difficult and may benefit from strategies like active learning, it is not directly clear why out-of-distribution detection is the best approach. In addition, R3 seeks clarification on the similarity to existing work. Considering the unanimous opinions of the reviewers and all author rebuttal text, the AC does not recommend acceptance of this work. We encourage the authors to focus their revisions on the explanation and motivation of this new benchmark and submit to a future venue. """
1744,"""BinaryDuo: Reducing Gradient Mismatch in Binary Activation Network by Coupling Binary Activations""",[],"""Binary Neural Networks (BNNs) have been garnering interest thanks to their compute cost reduction and memory savings. However, BNNs suffer from performance degradation mainly due to the gradient mismatch caused by binarizing activations. Previous works tried to address the gradient mismatch problem by reducing the discrepancy between the activation function used in the forward pass and its differentiable approximation used in the backward pass, which is an indirect measure. In this work, we use the gradient of the smoothed loss function to better estimate the gradient mismatch in quantized neural networks. Analysis using the gradient mismatch estimator indicates that using higher precision for the activation is more effective than modifying the differentiable approximation of the activation function. Based on this observation, we propose a new training scheme for binary activation networks called BinaryDuo, in which two binary activations are coupled into a ternary activation during training. Experimental results show that BinaryDuo outperforms state-of-the-art BNNs on various benchmarks with the same amount of parameters and computing cost.""","""Three reviewers suggest acceptance. Reviewers were impressed by the thoroughness of the author response. Please take reviewer comments into account in the camera-ready. Congratulations!"""
1745,"""DG-GAN: the GAN with the duality gap""","['GAN', 'duality gap', 'metric', 'saddle point', 'game']","""Generative Adversarial Networks (GANs) are powerful, but difficult to understand and train because GANs pose a min-max problem. This paper analyzes GANs through the duality gap, a notion that comes from game theory, and shows that the duality gap can serve as a metric for evaluating the difference between the true data distribution and the distribution generated by the generator under a given condition. Training the networks using the duality gap can yield better results. Furthermore, the paper calculates the generalization bound of the duality gap to help design the neural networks and select the sample size. ""","""This paper proposes looking at the duality gap to measure performance. However, the metric is just an upper bound on the true metric of interest, and therefore its value can be ambiguous. The reviewers found the paper to be in an unacceptable form and clearly hastily prepared. They were also skeptical about the novelty of the result as well as the comprehensiveness of the experiments. This paper would require extensive revisions before any potential acceptance. Reject """
1746,"""Towards Interpretable Molecular Graph Representation Learning""","['molecular graphs', 'graph pooling', 'hierarchical', 'GNN', 'Laplacian', 'drug discovery']","""Recent work in graph neural networks (GNNs) has led to improvements in molecular activity and property prediction tasks. Unfortunately, GNNs often fail to capture the relative importance of interactions between molecular substructures, in part due to the absence of efficient intermediate pooling steps. To address these issues, we propose LaPool (Laplacian Pooling), a novel, data-driven, and interpretable hierarchical graph pooling method that takes into account both node features and graph structure to improve molecular understanding. We benchmark LaPool and show that it not only outperforms recent GNNs on molecular graph understanding and prediction tasks but also remains highly competitive on other graph types. We then demonstrate the improved interpretability achieved with LaPool using both qualitative and quantitative assessments, highlighting its potential applications in drug discovery.""","""The paper introduces a new pooling approach ""Laplacian pooling"" for graph neural networks and applies this to molecular graphs. While the paper has been substantially improved from its original form, there are still various concerns regarding performance and interpretability that remain unanswered. In its current form the paper is not ready for acceptance to ICLR-2020."""
1747,"""Domain Aggregation Networks for Multi-Source Domain Adaptation""","['Domain Adaptation', 'Transfer Learning', 'Deep Learning']","""In many real-world applications, we want to exploit multiple source datasets of similar tasks to learn a model for a different but related target dataset -- e.g., recognizing characters of a new font using a set of different fonts. While most recent research has considered ad-hoc combination rules to address this problem, we extend previous work on domain discrepancy minimization to develop a finite-sample generalization bound, and accordingly propose a theoretically justified optimization procedure. The algorithm we develop, Domain AggRegation Network (DARN), is able to effectively adjust the weight of each source domain during training to ensure relevant domains are given more importance for adaptation. We evaluate the proposed method on real-world sentiment analysis and digit recognition datasets and show that DARN can significantly outperform the state-of-the-art alternatives.""","""Thanks for the detailed replies to the reviewers. Although the scores slightly improved, this paper is still below the bar given the high competition at ICLR 2020. For this reason, we decided not to accept this paper."""
1748,"""Gradient pseudo-formula Regularization for Quantization Robustness""","['quantization', 'regularization', 'robustness', 'gradient regularization']","""We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization. By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on-demand to different bit-widths as energy and memory requirements of the application change. Unlike quantization-aware training using the straight-through estimator that only targets a specific bit-width and requires access to training data and pipeline, our regularization-based method paves the way for ``on the fly'' post-training quantization to various bit-widths. We show that by modeling quantization as a pseudo-formula -bounded perturbation, the first-order term in the loss expansion can be regularized using the pseudo-formula -norm of gradients. We experimentally validate our method on different vision architectures on CIFAR-10 and ImageNet datasets and show that the regularization of a neural network using our method improves robustness against quantization noise.""","""Reviewers uniformly suggest acceptance. Please take their comments into account in the camera-ready. Congratulations!"""
1749,"""Strategies for Pre-training Graph Neural Networks""","['Pre-training', 'Transfer learning', 'Graph Neural Networks']","""Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naïve strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.""","""All three reviewers are consistently positive on this paper. Thus an accept is recommended."""
1750,"""Cross-lingual Alignment vs Joint Training: A Comparative Study and A Simple Unified Framework""",['Cross-lingual Representation'],"""Learning multilingual representations of text has proven a successful method for many cross-lingual transfer learning tasks. There are two main paradigms for learning such representations: (1) alignment, which maps different independently trained monolingual representations into a shared space, and (2) joint training, which directly learns unified multilingual representations using monolingual and cross-lingual objectives jointly. In this paper, we first conduct direct comparisons of representations learned using both of these methods across diverse cross-lingual tasks. Our empirical results reveal a set of pros and cons for both methods, and show that the relative performance of alignment versus joint training is task-dependent. Stemming from this analysis, we propose a simple and novel framework that combines these two previously mutually-exclusive approaches. Extensive experiments demonstrate that our proposed framework alleviates limitations of both approaches, and outperforms existing methods on the MUSE bilingual lexicon induction (BLI) benchmark. We further show that this framework can generalize to contextualized representations such as Multilingual BERT, and produces state-of-the-art results on the CoNLL cross-lingual NER benchmark.""","""Reviewer worries include: whether the approach scales to distant language pairs, overselling of the paper as a ""framework"", a few citations and comparisons missing. I agree and encourage the authors not to use the word ""framework"" here. I would also encourage the authors to evaluate on more interesting language pairs, and analyze what vocabularies are relocated, as well as what their method is better at compared to previous work. """
1751,"""Bayesian Inference for Large Scale Image Classification""","['image classification', 'bayesian inference', 'mcmc', 'imagenet']","""Bayesian inference promises to ground and improve the performance of deep neural networks. It promises to be robust to overfitting, to simplify the training procedure and the space of hyperparameters, and to provide a calibrated measure of uncertainty that can enhance decision making, agent exploration and prediction fairness. Markov Chain Monte Carlo (MCMC) methods enable Bayesian inference by generating samples from the posterior distribution over model parameters. Despite the theoretical advantages of Bayesian inference and the similarity between MCMC and optimization methods, the performance of sampling methods has so far lagged behind optimization methods for large scale deep learning tasks. We aim to fill this gap and introduce ATMC, an adaptive noise MCMC algorithm that estimates and is able to sample from the posterior of a neural network. ATMC dynamically adjusts the amount of momentum and noise applied to each parameter update in order to compensate for the use of stochastic gradients. We use a ResNet architecture without batch normalization to test ATMC on the Cifar10 benchmark and the large scale ImageNet benchmark and show that, despite the absence of batch normalization, ATMC outperforms a strong optimization baseline in terms of both classification accuracy and test log-likelihood. We show that ATMC is intrinsically robust to overfitting on the training data and that ATMC provides a better calibrated measure of uncertainty compared to the optimization baseline.""","""This paper proposes a variant of Hamiltonian Monte Carlo for Bayesian inference in deep learning. Although the reviewers acknowledge the ambition, scope, and novelty of the paper, they still have a number of reservations regarding experimental results and claims (regarding the need for hyperparameter tuning). The overall score consequently falls below acceptance. Rejection is recommended. The reservations made by the referees should be addressable before the next conference deadline, so we look forward to seeing the paper published as soon as possible."""
1752,"""Modeling treatment events in disease progression""","['disease progression', 'treatment events', 'matrix completion']","""The ability to quantify and predict the progression of a disease is fundamental for selecting an appropriate treatment. Many clinical metrics cannot be acquired frequently either because of their cost (e.g. MRI, gait analysis) or because they are inconvenient or harmful to a patient (e.g. biopsy, x-ray). In such scenarios, in order to estimate individual trajectories of disease progression, it is advantageous to leverage similarities between patients, i.e. the covariance of trajectories, and find a latent representation of progression. Most existing methods for estimating trajectories do not account for events in between observations, which dramatically decreases their adequacy for clinical practice. In this study, we develop a machine learning framework named Coordinatewise-Soft-Impute (CSI) for analyzing disease progression from sparse observations in the presence of confounding events. CSI is guaranteed to converge to the global minimum of the corresponding optimization problem. Experimental results also demonstrate the effectiveness of CSI on both simulated and real datasets.""","""The proposed method has very weak novelty."""
1753,"""Learning the Arrow of Time for Problems in Reinforcement Learning""","['Arrow of Time', 'Reinforcement Learning', 'AI-Safety']","""We humans have an innate understanding of the asymmetric progression of time, which we use to efficiently and safely perceive and manipulate our environment. Drawing inspiration from that, we approach the problem of learning an arrow of time in a Markov (Decision) Process. We illustrate how a learned arrow of time can capture salient information about the environment, which in turn can be used to measure reachability, detect side-effects and to obtain an intrinsic reward signal. Finally, we propose a simple yet effective algorithm to parameterize the problem at hand and learn an arrow of time with a function approximator (here, a deep neural network). Our empirical results span a selection of discrete and continuous environments, and demonstrate for a class of stochastic processes that the learned arrow of time agrees reasonably well with a well known notion of an arrow of time due to Jordan, Kinderlehrer and Otto (1998). ""","""This paper develops the notion of the arrow of time in MDPs and explores how this might be useful in RL. All the reviewers found the paper thought-provoking and well-written, and they believe the work could have significant impact. The paper does not fit the typical mold: it presents some ideas and uses illustrative experiments to suggest the potential utility of the arrow without nailing down a final algorithm or making a precise performance claim. Overall it is a solid paper, and the reviewers all agreed on acceptance. There are certainly weaknesses in the work, and there is a bit of work to do to get this paper ready. R2 had a nice suggestion of a baseline based on simply learning a transition model (it is described in the updated review)---please include it. The description of the experimental methodology is a bit of a mess. Most of the experiments in the paper do not clearly indicate how many runs were conducted, how error bars were computed, or what they represent. It is likely that only a handful of runs were used, which is surprising given the size of some of the domains used. In many cases the figure caption does not even indicate which domain the data came from. All of this is dangerously close to criteria for rejection; please do better. Reachability is also known as empowerment, and it would be good to discuss this connection. In general the paper was a bit light on connections outlining how information theory has been used in RL. I suggest you start here (pseudo-url) to improve this aspect. Finally, the paper has a very large appendix (~14 pages) with many more experiments and theory. I am still not convinced that the balance is quite right. This is probably a journal or long arxiv paper. Maybe this paper should be thought of as a nectar version of a longer standalone arxiv paper. Lastly, relying on the effectiveness of random exploration is no small thing, and there is a long history in RL of ideas that would work well, given it is easy to gather data that accurately summarizes the dynamics of the world (e.g., proto-value functions). Many ideas are effective given this assumption. The paper should clearly and honestly discuss this assumption, and provide some arguments why there is hope."""
1754,"""Extreme Values are Accurate and Robust in Deep Networks""","['Biological inspired CNN architecture design', 'Adversarial Robustness Architecture']","""Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, so that CNNs are non-robust to adversarial perturbations over textures, while traditional robust visual features like SIFT (scale-invariant feature transforms) are designed to be robust across a substantial range of affine distortion, addition of noise, etc., mimicking the nature of human perception. This paper aims to leverage the good properties of SIFT to renovate CNN architectures towards better accuracy and robustness. We borrow the scale-space extreme value idea from SIFT, and propose EVPNet (extreme value preserving network), which contains three novel components to model the extreme values: (1) parametric differences of Gaussian (DoG) to extract extrema, (2) truncated ReLU to suppress non-stable extrema and (3) projected normalization layer (PNL) to mimic PCA-SIFT like feature normalization. Experiments demonstrate that EVPNets can achieve similar or better accuracy than conventional CNNs, while achieving much better robustness on a set of adversarial attacks (FGSM, PGD, etc.) even without adversarial training.""","""This manuscript proposes biologically-inspired modifications to convolutional neural networks, including a differences-of-Gaussians convolutional filter, a truncated ReLU, and a modified projected normalization layer. The authors' results indicate that the modifications improve performance as well as robustness to adversarial attacks. The reviewers and AC agree that the problem studied is timely and interesting, and closely related to a variety of recent work on robust model architectures. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty and importance of the results. In reviews and discussion, the reviewers noted issues with the clarity of the presentation and the justification of the approach and results. In the opinion of the AC, the manuscript in its current state is borderline and could be improved with more convincing empirical justification."""
1755,"""Few-shot Learning by Focusing on Differences""","['Deep learning', 'few-shot learning']","""Few-shot classification may involve differentiating data that belong to different levels of label granularity. Compounded by the fact that labeled examples are scarce in the novel classification set, relying solely on the loss function to implicitly guide the classifier to separate data based on labels might not be enough; a few-shot classifier needs to be strongly biased to perform well. In this paper, we propose a model that incorporates a simple prior: focusing on differences by building a dissimilar set of class representations. The model treats a class representation as a vector and removes its component that is shared among closely related class representatives. It does so through the combination of learned attention and vector orthogonalization. Our model works well on our newly introduced dataset, Hierarchical-CIFAR, which contains different levels of label granularity. It also substantially improves the performance on the fine-grained classification dataset CUB, while staying competitive on standard benchmarks such as mini-Imagenet, Omniglot, and a few-shot dataset derived from CIFAR.""","""Main content: [Blind review #3] The authors propose a metric-based model for few-shot learning. The goal of the proposed technique is to incorporate a prior that better highlights the dissimilarity between closely related class prototypes. Thus, the proposed paper is related to prototypical networks (use of a prototype to represent a class) but differs from them by using inner-product scoring as a similarity measure instead of the Euclidean distance. There is also a close similarity between the proposed method and matching networks. [Blind review #2] The stated contributions of the paper are: (1) a method for performing few-shot learning and (2) an approach for building harder few-shot learning datasets from existing datasets. The authors describe a model for creating a task-aware embedding for different novel sets (for different image classification settings) using a nonlinear self-attention-like mechanism applied to the centroid of the global embeddings for each class. The resulting embeddings are used per class with an additional attention layer applied on the embeddings from the other classes to identify closely-related classes and consider the part of the embedding orthogonal to the attention-weighted average of these closely-related classes. They compare the accuracy of their model vs others in the 1-shot and 5-shot settings on various datasets, including a derived dataset from CIFAR which they call Hierarchical-CIFAR. -- Discussion: All reviews agree on a weak reject. -- Recommendation and justification: While the ideas appear to be on a good track, the paper itself is poorly written - as one reviewer put it, it reads more like notes to themselves than a well-written document for the ICLR audience."""
1756,"""Smooth markets: A basic mechanism for organizing gradient-based learners""","['game theory', 'optimization', 'gradient descent', 'adversarial learning']","""With the success of modern machine learning, it is becoming increasingly important to understand and control how learning algorithms interact. Unfortunately, negative results from game theory show there is little hope of understanding or controlling general n-player games. We therefore introduce smooth markets (SM-games), a class of n-player games with pairwise zero-sum interactions. SM-games codify a common design pattern in machine learning that includes some GANs, adversarial training, and other recent algorithms. We show that SM-games are amenable to analysis and optimization using first-order methods.""","""The paper discusses smooth market games and demonstrates the merit of the approach. The reviewers agree on the quality of the paper, and their comments have been addressed well by the authors. """
1757,"""Bias-Resilient Neural Network""","['Invariant Feature Learning', 'Vanished Correlation', 'Generative Adversarial Networks', 'Gender Shades', 'Fairness in Machine Learning']","""The presence of bias and confounding effects is inarguably one of the most critical challenges in machine learning applications and has led to pivotal debates in recent years. Such challenges range from spurious associations of confounding variables in medical studies to the bias of race in gender or face recognition systems. One solution is to enhance datasets and organize them such that they do not reflect biases, which is a cumbersome and intensive task. The alternative is to make use of available data and build models considering these biases. Traditional statistical methods apply straightforward techniques such as residualization or stratification to precomputed features to account for confounding variables. However, these techniques are not in general applicable to end-to-end deep learning methods. In this paper, we propose a method based on the adversarial training strategy to learn discriminative features that are unbiased and invariant to the confounder(s). This is enabled by incorporating a new adversarial loss function that encourages a vanished correlation between the bias and the learned features. We apply our method to a synthetic, a medical diagnosis, and a gender classification (Gender Shades) dataset. Our results show that the features learned by our method not only result in superior prediction performance but also are uncorrelated with the bias or confounder variables. The code is available at pseudo-url.""","""The paper addresses the problem of machine bias when training machine learning models. The authors propose an approach based on representation learning with adversarial training. As opposed to the majority of previous works that try to create a representation from which it is not possible to predict the sensitive feature (bias), the authors propose to minimize the dependency between the learned features and the sensitive feature with adversarial training. While acknowledging that the proposed model is addressing an important problem and is potentially useful, the reviewers and AC note the following potential weaknesses: (1) limited technical contribution -- the proposed approach is similar to a number of works published in machine learning and computer vision before the submission deadline that were overlooked by the authors. Specifically: i) adversarial training for learning fair representations [Edwards and Storkey, Censoring Representations with an Adversary, ICLR 2016], [Beutel, et al 2017, Data decisions and theoretical implications when adversarially learning fair representations], ii) learning fair representations by minimizing the dependency between the latent representation and the sensitive attributes [The variational fair autoencoder, ICLR 2016 by Louizos et al.; Fairness Constraints: Mechanisms for Fair Classification, by Zafar et al, 2015] or by minimizing the mutual information between the feature embedding and bias [Learning Not to Learn: Training Deep Neural Networks with Biased Data, CVPR 2019]. (2) Limited empirical evidence -- the baseline methods used in the evaluation are not sufficient to assess the benefits of the proposed approach over the existing SOTA methods mentioned above. In fact, none of the baseline methods used in the evaluation tackle machine bias (via adversarial training or minimizing statistical dependence). 
(3) It would be beneficial to also report fairness metrics, e.g. equality of opportunity, statistical parity, to assess the effectiveness of bias removal. R1 raised some concerns regarding empirical evidence -- see the point about mixed results. Also, R2 reported concerns regarding controversial results in experiment 4.2 and suggested ways to justify when and why the results of the CNN baseline are close to BR-Net. Addressing these concerns would strengthen the contributions of the proposed method. Among these, (3) did not have a decisive impact on the decision, but would be helpful to address in a subsequent revision. However, (1) and (2) make it very difficult to assess the benefits of the proposed approach, and were viewed by the AC as critical issues. The AC suggests that, in its current state, the manuscript is not ready for publication. We hope the reviews are useful for improving and revising the paper. """
1758,"""Sign-OPT: A Query-Efficient Hard-label Adversarial Attack""",[],"""We study the most practical problem setup for evaluating the adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where limited model queries are allowed and only the decision is provided for a queried data input. Several algorithms have been proposed for this problem, but they typically require a huge number (>20,000) of queries to attack one example. Among them, one of the state-of-the-art approaches (Cheng et al., 2019) showed that the hard-label attack can be modeled as an optimization problem where the objective function can be evaluated by binary search with additional model queries, so that a zeroth-order optimization algorithm can be applied. In this paper, we adopt the same optimization formulation but propose to directly estimate the sign of the gradient in any direction instead of the gradient itself, which enjoys the benefit of requiring a single query. Using this single-query oracle for retrieving the sign of the directional derivative, we develop a novel query-efficient Sign-OPT approach for hard-label black-box attacks. We provide a convergence analysis of the new algorithm and conduct experiments on several models on MNIST, CIFAR-10 and ImageNet. We find that the Sign-OPT attack consistently requires 5X to 10X fewer queries when compared to the current state-of-the-art approaches, and usually converges to an adversarial example with a smaller perturbation. ""","""The reviewers had several concerns with the paper related to novelty and comparisons with other approaches. During the discussion phase, these concerns were adequately addressed."""
1759,"""Inducing Stronger Object Representations in Deep Visual Trackers""","['Object Tracking', 'Computer Vision', 'Deep Learning']","""Fully convolutional deep correlation networks are integral components of state-of-the-art approaches to single object visual tracking. It is commonly assumed that these networks perform tracking-by-detection by matching features of the object instance with features of the entire frame. Strong architectural priors and conditioning on the object representation are thought to encourage this tracking strategy. Despite these strong priors, we show that deep trackers often default to tracking-by-saliency detection without relying on the object instance representation. Our analysis shows that despite being a useful prior, saliency detection can prevent the emergence of more robust tracking strategies in deep networks. This leads us to introduce an auxiliary detection task that encourages more discriminative object representations that improve tracking performance.""","""This paper proposes to train a visual tracking network with an object detection loss in addition to the ordinary tracking objective, to enhance the reliability of the tracking network. The reviewers were unanimous in their opinion that the paper should not be accepted to ICLR in its current form. A main concern is that the proposed method shows improvement over a relatively weak base system. The author response proposed to include additional analysis, but the reviewers felt that, without the additional analysis already included, it was not possible to change the overall review score."""
1760,"""Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition""",[],"""When constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance. Nevertheless, it is typically difficult for existing random forest methods to strike a good balance between these conflicting factors. In this work, we propose generalized convolutional forest networks to learn a feature space that maximizes the strength of individual tree classifiers while minimizing the respective correlation. The feature space is iteratively constructed by a probabilistic triplet sampling method based on the distribution obtained from the splits of the random forest. The sampling process is designed to pull the data of the same label together for higher strength and push away the data frequently falling into the same leaf nodes. We perform extensive experiments on five image classification and two domain generalization datasets with ResNet-50 and DenseNet-161 backbone networks. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods.""","""The authors introduce an approach to learn a random forest model and a representation simultaneously. The basic idea is to modify the representation so that subsequent trees in the random forest are less correlated. The authors evaluate the technique empirically and show some modest gains. While the reviews were mixed, the approach is quite different from the usual approaches published at ICLR, and so I think it is worth highlighting this work. """
1761,"""Reducing Transformer Depth on Demand with Structured Dropout""","['reduction', 'regularization', 'pruning', 'dropout', 'transformer']","""Overparametrized transformer networks have obtained state of the art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time. In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality than when training from scratch or using distillation.""","""This paper presents LayerDrop, a method for structured dropout that allows you to train one model and then prune it to a desired depth at test time. This is a simple method which is exciting because you can get a smaller, more efficient model at test time for free, as it does not need fine-tuning. They show strong results on machine translation, language modelling and a couple of other NLP benchmarks. The reviews are consistently positive, with significant author and reviewer discussion. This is clearly an approach which merits attention, and should be included in ICLR."""
1762,"""On the Equivalence between Positional Node Embeddings and Structural Graph Representations""","['Graph Neural Networks', 'Structural Graph Representations', 'Node Embeddings', 'Relational Learning', 'Invariant Theory', 'Theory', 'Deep Learning', 'Representational Power', 'Graph Isomorphism']","""This work provides the first unifying theoretical framework for node embeddings and structural graph representations, bridging methods like matrix factorization and graph neural networks. Using invariant theory, we show that the relationship between structural representations and node embeddings is analogous to that of a distribution and its samples. We prove that all tasks that can be performed by node embeddings can also be performed by structural representations and vice-versa. We also show that the concept of transductive and inductive learning is unrelated to node embeddings and graph representations, clearing another source of confusion in the literature. Finally, we introduce new practical guidelines for generating and using node embeddings, which further augment standard operating procedures used today.""","""The paper shows the relationship between node embeddings and structural graph representations. By careful definition of what structural node representation means, and what node embedding means, using the permutation group, the authors show in Theorem 2 that node embeddings cannot represent any extra information that is not already in the structural representation. The paper then provides empirical experiments on three tasks, and shows in a fourth task an illustration of the theoretical results. The reviewers scored the paper highly, but with low confidence. I read the paper myself (unfortunately not with a lot of time), with the aim of increasing the confidence of the resulting decision. The main gap in the paper is between the phrases ""structural node representation"" and ""node embedding"", and their theoretical definitions. The analogy of a distribution and its samples follows unsurprisingly from the definitions (8 and 12), but the interpretation of those definitions as the corresponding English phrases is not obvious by only looking at the definitions. There also seems to be a sleight of hand going on with the most expressive representations (Definitions 9 and 11), which is used to make the conditional independence statement of Theorem 2. The authors should clarify in the final version whether the existence of such a representation can be shown, or, even better, provide a constructive way to get it from data. Given the significance of the theoretical results, the authors should improve the introduction of the two main concepts by: - relating them to prior work (one way is to move Section 5 towards the front) - explaining in greater detail why Definitions 8 and 12 correspond to the two concepts, for example by expanding the part of the proof of Corollary 1 about SVD to make clear what Definition 12 means - giving a corresponding simple example of Definition 8 to relate to a classical method. The paper provides a nice connection between two disparate concepts. Unfortunately, the connection uses graph invariance and equivariance, which are unfamiliar to much of the ICLR audience. On balance, I believe that the authors can improve the presentation such that a reader can understand the implications of the connection without being an expert in graph isomorphism. As such, I am recommending an accept. """
1763,"""Dream to Control: Learning Behaviors by Latent Imagination""","['world model', 'latent dynamics', 'imagination', 'planning by backprop', 'policy optimization', 'planning', 'reinforcement learning', 'control', 'representations', 'latent variable model', 'visual control', 'value function']","""Learned world models summarize an agent's experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.""","""This paper presents an approach to model-based reinforcement learning in high-dimensional tasks. The approach involves learning a latent dynamics model, and performing rollouts thereof with an actor-critic model to learn behaviours. This is extensively evaluated on 20 visual control tasks. This paper was favourably received, but there were concerns around it being incremental (relative to PlaNet and SVG). The authors highlighted the differences in the rebuttal, clarifying the novelty of this work. Given the interesting ideas presented, and the convincing results, this paper should be accepted."""
1764,"""Efficient Multivariate Bandit Algorithm with Path Planning""","['Multivariate Multi-armed Bandit', 'Monte Carlo Tree Search', 'Thompson Sampling', 'Path Planning']","""In this paper, we solve the arms exponential exploding issues in multivariate Multi-Armed Bandit (Multivariate-MAB) problem when the arm dimension hierarchy is considered. We propose a framework called path planning (TS-PP) which utilizes decision graph/trees to model arm reward success rate with m-way dimension interaction, and adopts Thompson sampling (TS) for heuristic search of arm selection. Naturally, it is quite straightforward to combat the curse of dimensionality using a serial processes that operates sequentially by focusing on one dimension per each process. For our best acknowledge, we are the first to solve Multivariate-MAB problem using graph path planning strategy and deploying alike Monte-Carlo tree search ideas. Our proposed method utilizing tree models has advantages comparing with traditional models such as general linear regression. Simulation studies validate our claim by achieving faster convergence speed, better efficient optimal arm allocation and lower cumulative regret.""","""This paper tackles the multivariate bandit problem (akin to a factorial experiment) where the player faces a sequence of decisions (that can be viewed as a tree) before obtaining a reward. The authors introduce a framework combining Thompson Sampling with path planning in trees/graphs. More specifically, they consider four path planning strategies, leading to four approaches. The resulting approaches are empirically evaluated on synthetic settings. Unfortunately, the proposed approaches lack theoretical justification and the current experiments are not strong enough to support the claims made in the paper. Given that most of reviewer's concerns remained valid after rebuttal, I recommend to reject this paper."""
1765,"""AlignNet: Self-supervised Alignment Module""","['Graph networks', 'alignment', 'objects', 'relation networks']","""The natural world consists of objects that we perceive as persistent in space and time, even though these objects appear, disappear and reappear in our field of view as we move. This can be attributed to our notion of object persistence -- our knowledge that objects typically continue to exist, even if we can no longer see them -- and our ability to track objects. Drawing inspiration from the psychology literature on `sticky indices', we propose the AlignNet, a model that learns to assign unique indices to new objects when they first appear and reassign the index to subsequent instances of that object. By introducing a persistent object-based memory, the AlignNet may be used to keep track of objects across time, even if they disappear and reappear later. We implement the AlignNet as a graph network applied to a bipartite graph, in which the input nodes are objects from two sets that we wish to align. The network is trained to predict the edges which connect two instances of the same object across sets. The model is also capable of identifying when there are no matches and dealing with these cases. We perform experiments to show the model's ability to deal with the appearance, disappearance and reappearance of objects. Additionally, we demonstrate how a persistent object-based memory can help solve question-answering problems in a partially observable environment.""","""This paper proposes a network architecture which labels object with an identifier that it is trained to retain across subsequent instances of that same object. After discussion, the reviewers agree that the approach is interesting, well-motivated and written, and novel. However, there was unanimous concern about the experimental evaluation, so the paper does not appear to be ready for publication just yet, and I am recommending rejection."""
1766,"""A Generalized Framework of Sequence Generation with Application to Undirected Sequence Models""","['nlp', 'sequence modeling', 'natural language generation', 'machine translation', 'BERT', 'Sesame Street']","""Undirected neural sequence models such as BERT (Devlin et al., 2019) have received renewed interest due to their success on discriminative natural language understanding tasks such as question-answering and natural language inference. The problem of generating sequences directly from these models has received relatively little attention, in part because generating from such models departs significantly from the conventional approach of monotonic generation in directed sequence models. We investigate this problem by first proposing a generalized model of sequence generation that unifies decoding in directed and undirected models. The proposed framework models the process of generation rather than a resulting sequence, and under this framework, we derive various neural sequence models as special cases, such as autoregressive, semi-autoregressive, and refinement-based non-autoregressive models. This unification enables us to adapt decoding algorithms originally developed for directed sequence models to undirected models. We demonstrate this by evaluating various decoding strategies for a cross-lingual masked translation model (Lample and Conneau, 2019). Our experiments show that generation from undirected sequence models, under our framework, is competitive with the state of the art on WMT'14 English-German translation. We also demonstrate that the proposed approach enables constant-time translation with similar performance to linear-time translation from the same model by rescoring hypotheses with an autoregressive model.""","""This paper proposes a generalized way to generate sequences from undirected sequence models. Overall, I believe a framework like this could definitely be a valuable contribution, but as Reviewer 1 and Reviewer 3 noted, the paper is a bit lacking both in theoretical analysis and strong empirical results. I don't think that this is a bad paper at all, but it feels like the paper needs a little bit of an extra push to tighten up the argumentation and/or results before warranting publication at a premier venue such as ICLR. I'd suggest the authors continue to improve the paper and aim to re-submit at revised version at a future conference. """
1767,"""BERTScore: Evaluating Text Generation with BERT""","['Metric', 'Evaluation', 'Contextual Embedding', 'Text Generation']","""We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task and show that BERTScore is more robust to challenging examples compared to existing metrics. ""","""Thanks for an interesting discussion. The authors present a supposedly task-independent evaluation metric for generation tasks with references that relies on BERT or similar pretrained language models and a BERT-internal alignment. Reviewers are moderately positive. I encourage the authors to think about a) whether their approach scales to language pairs where wordpieces are less comparable; b) whether second order similarly, e.g., using RSA, would be better than alignment-based similarity; c) whether this metric works in the extremes, e.g., can it distinguish between bad output and super-bad output (where in both cases alignment may be impossible), and can it distinguish between good output and super-good output (where BERT scores may be too biased by BERT's training objective). """
1768,"""How Well Do WGANs Estimate the Wasserstein Metric?""","['Optimal Transport', 'Wasserstein Metric', 'Generative Adversial Networks']","""Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution. Recently, a popular choice for the similarity measure has been the Wasserstein metric, which can be expressed in the Kantorovich duality formulation as the optimum difference of the expected values of a potential function under the real data distribution and the model hypothesis. In practice, the potential is approximated with a neural network and is called the discriminator. Duality constraints on the function class of the discriminator are enforced approximately, and the expectations are estimated from samples. This gives at least three sources of errors: the approximated discriminator and constraints, the estimation of the expectation value, and the optimization required to find the optimal potential. In this work, we study how well the methods, that are used in generative adversarial networks to approximate the Wasserstein metric, perform. We consider, in particular, the pseudo-formula -transform formulation, which eliminates the need to enforce the constraints explicitly. We demonstrate that the pseudo-formula -transform allows for a more accurate estimation of the true Wasserstein metric from samples, but surprisingly, does not""","""There is insufficient support to recommend accepting this paper. Generally the reviewers found the technical contribution to be insufficient, and were not sufficiently convinced by the experimental evaluation. The feedback provided should help the authors improve their paper."""
1769,"""The Break-Even Point on Optimization Trajectories of Deep Neural Networks""","['generalization', 'sgd', 'learning rate', 'batch size', 'hessian', 'curvature', 'trajectory', 'optimization']","""The early phase of training of deep neural networks is critical for their final performance. In this work, we study how the hyperparameters of stochastic gradient descent (SGD) used in the early phase of training affect the rest of the optimization trajectory. We argue for the existence of the ""``break-even"" point on this trajectory, beyond which the curvature of the loss surface and noise in the gradient are implicitly regularized by SGD. In particular, we demonstrate on multiple classification tasks that using a large learning rate in the initial phase of training reduces the variance of the gradient, and improves the conditioning of the covariance of gradients. These effects are beneficial from the optimization perspective and become visible after the break-even point. Complementing prior work, we also show that using a low learning rate results in bad conditioning of the loss surface even for a neural network with batch normalization layers. In short, our work shows that key properties of the loss surface are strongly influenced by SGD in the early phase of training. We argue that studying the impact of the identified effects on generalization is a promising future direction.""","""This is an interesting study analyzing learning trajectories and their dependence on hyperparameters, important for better understanding of learning in deep neural networks. All reviewers agree that the paper has a useful message to the ICLR community, and appreciate changes made by the authors in response to the initial reviews."""
1770,"""DS-VIC: Unsupervised Discovery of Decision States for Transfer in RL""","['reinforcement learning', 'probabilistic inference', 'variational inference', 'intrinsic control', 'transfer learning']","""We learn to identify decision states, namely the parsimonious set of states where decisions meaningfully affect the future states an agent can reach in an environment. We utilize the VIC framework, which maximizes an agents `empowerment, ie the ability to reliably reach a diverse set of states -- and formulate a sandwich bound on the empowerment objective that allows identification of decision states. Unlike previous work, our decision states are discovered without extrinsic rewards -- simply by interacting with the world. Our results show that our decision states are: 1) often interpretable, and 2) lead to better exploration on downstream goal-driven tasks in partially observable environments.""","""This work is interesting because it's aim is to push the work in intrinsic motivation towards crisp definitions, and thus reads like an algorithmic paper rather than yet another reward heuristic and system building paper. There is some nice theory here, integration with options, and clear connections to existing work. However, the paper is not ready for publication. There were were several issues that could not be resolved in the reviewers minds (even after the author response and extensive discussion). The primary issues were: (1) There was significant confusion around the beta sensitivity---figs 6,7,8 appear misleading or at least contradictory to the message of the paper. (2) The need for x,y env states. (3) The several reviewers found the decision states unintuitive and confused the quantitative analysis focus if they given the authors primary focus is transfer performance. (4) All reviewers found the experiments lacking. Overall, the results generally don't support the claims of the paper, and there are too many missing details and odd empirical choices. Again, there was extensive discussion because all agreed this is an interesting line of work. Taking the reviewers excellent suggestions on board will almost certainly result in an excellent paper. Keep going!"""
1771,"""Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference""","['adversarial robustness', 'efficient inference']","""Deep networks were recently suggested to face the odds between accuracy (on clean natural images) and robustness (on adversarially perturbed images) (Tsipras et al., 2019). Such a dilemma is shown to be rooted in the inherently higher sample complexity (Schmidt et al., 2018) and/or model capacity (Nakkiran, 2019), for learning a high-accuracy and robust classifier. In view of that, give a classification task, growing the model capacity appears to help draw a win-win between accuracy and robustness, yet at the expense of model size and latency, therefore posing challenges for resource-constrained applications. Is it possible to co-design model accuracy, robustness and efficiency to achieve their triple wins? This paper studies multi-exit networks associated with input-adaptive efficient inference, showing their strong promise in achieving a sweet point"" in co-optimizing model accuracy, robustness, and efficiency. Our proposed solution, dubbed Robust Dynamic Inference Networks (RDI-Nets), allows for each input (either clean or adversarial) to adaptively choose one of the multiple output layers (early branches or the final one) to output its prediction. That multi-loss adaptivity adds new variations and flexibility to adversarial attacks and defenses, on which we present a systematical investigation. We show experimentally that by equipping existing backbones with such robust adaptive inference, the resulting RDI-Nets can achieve better accuracy and robustness, yet with over 30% computational savings, compared to the defended original models. ""","""The authors develop a novel technique to train networks to be robust and accurate while still being efficient to train and evaluate. The authors propose ""Robust Dynamic Inference Networks"" that allows inputs to be adaptively routed to one of several output channels and thereby adjust the inference time used for any given input. They show The line of investigation initiated by authors is very interesting and should open up a new set of research questions in the adversarial training literature. The reviewers were in consensus on the quality of the paper and voted in favor of acceptance. One of the reviewers had concerns about the evaluation in the paper, in particular about whether carefully crafted attacks could break the networks studied by the authors. However, the authors performed additional experiments and revised the paper to address this concern to the satisfaction of the reviewer. Overall, the paper contains interesting contributions and should be accepted."""
1772,"""Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View""","['Transformer', 'Ordinary Differential Equation', 'Multi-Particle Dynamic System', 'Natural Language Processing']","""The Transformer architecture is widely used in natural language processing. Despite its success, the design principle of the Transformer remains elusive. In this paper, we provide a novel perspective towards understanding the architecture: we show that the Transformer can be mathematically interpreted as a numerical Ordinary Differential Equation (ODE) solver for a convection-diffusion equation in a multi-particle dynamic system. In particular, how words in a sentence are abstracted into contexts by passing through the layers of the Transformer can be interpreted as approximating multiple particles' movement in the space using the Lie-Trotter splitting scheme and the Euler's method. Given this ODE's perspective, the rich literature of numerical analysis can be brought to guide us in designing effective structures beyond the Transformer. As an example, we propose to replace the Lie-Trotter splitting scheme by the Strang-Marchuk splitting scheme, a scheme that is more commonly used and with much lower local truncation errors. The Strang-Marchuk splitting scheme suggests that the self-attention and position-wise feed-forward network (FFN) sub-layers should not be treated equally. Instead, in each layer, two position-wise FFN sub-layers should be used, and the self-attention sub-layer is placed in between. This leads to a brand new architecture. Such an FFN-attention-FFN layer is ""Macaron-like"", and thus we call the network with this new architecture the Macaron Net. Through extensive experiments, we show that the Macaron Net is superior to the Transformer on both supervised and unsupervised learning tasks. The reproducible code can be found on pseudo-url""","""In this work, the authors interpret the Transformer as a numerical ODE modelling multi-particle convection. Guided by this connection, the authors take the Transformer that uses a feed forward net over attentions, and create a variant of transformer which instead uses an FFN-attention-FFN layer, thus the name macaron net. The authors present experiments in the GLUE dataset and in two MT datasets, and they overall report improved performance using their variant of Transformer. Thus, the main selling point of the paper is how seeing Transformer under his new light can potentially improve results through the construction of better models. The main criticisms from the authors is that this story is not entirely convincing because the proposed variant departs a bit from the theory (R1 and comment about the Strang-Marchuk splitting) and the papers does not consider an evaluation of accuracy of Macaron in solving the underlying set of ODEs (comment from R3). As such, I cannot recommend acceptance of this paper -- I believe another set of revisions would increase the impact of this paper."""
1773,"""Online Meta-Critic Learning for Off-Policy Actor-Critic Methods""","['off-policy actor-critic', 'reinforcement learning', 'meta-learning']","""Off-Policy Actor-Critic (Off-PAC) methods have proven successful in a variety of continuous control tasks. Normally, the critics action-value function is updated using temporal-difference, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return. In this paper, we introduce a novel and flexible meta-critic that observes the learning process and meta-learns an additional loss for the actor that accelerates and improves actor-critic learning. Compared to the vanilla critic, the meta-critic network is explicitly trained to accelerate the learning process; and compared to existing meta-learning algorithms, meta-critic is rapidly learned online for a single task, rather than slowly over a family of tasks. Crucially, our meta-critic framework is designed for off-policy based learners, which currently provide state-of-the-art reinforcement learning sample efficiency. We demonstrate that online meta-critic learning leads to improvements in a variety of continuous control environments when combined with contemporary Off-PAC methods DDPG, TD3 and the state-of-the-art SAC. ""","""There was extension discussion of the paper between the reviewers. It's clear that the reviewers appreciated the main idea in the paper, and the notion of an ""online"" meta-critic that accelerates the RL process is definitely very appealing. However, there were unanswered questions about what the method is actually doing that make me reticent to recommend acceptance at this point. I would refer the authors to R3 and R1 for an in-depth discussion of the issues, but the short summary is that it's not clear whether, if, and how the meta-loss in this case actually converges, and what the meta-critic is actually doing. In the absence of a theoretical understanding for what the modification does to accelerate RL, we are left with the empirical experiments, and there it is necessary to consider alternative hypotheses and perform detailed ablation analyses to understand that the method really works for the reasons stated by the authors (and not some of the alternative explanations, see e.g. R3). While there is nothing wrong with a result that is primarily empirical, it is important to analyze that the empirical gains really are happening for the reasons claimed, and to carefully study convergence and asymptotic properties of the algorithm. The comparatively diminished gains with the stronger algorithms (TD3 and especially SAC) make me more skeptical. Therefore, I would recommend that the paper not be accepted at this time, though I encourage the authors to resubmit with a more in-depth experimental evaluation."""
1774,"""Adversarial Imitation Attack""","['Adversarial examples', 'Security', 'Machine learning', 'Deep neural network', 'Computer vision']","""Deep learning models are known to be vulnerable to adversarial examples. A practical adversarial attack should require as little as possible knowledge of attacked models T. Current substitute attacks need pre-trained models to generate adversarial examples and their attack success rates heavily rely on the transferability of adversarial examples. Current score-based and decision-based attacks require lots of queries for the T. In this study, we propose a novel adversarial imitation attack. First, it produces a replica of the T by a two-player game like the generative adversarial networks (GANs). The objective of the generative model G is to generate examples which lead D returning different outputs with T. The objective of the discriminative model D is to output the same labels with T under the same inputs. Then, the adversarial examples generated by D are utilized to fool the T. Compared with the current substitute attacks, imitation attack can use less training data to produce a replica of T and improve the transferability of adversarial examples. Experiments demonstrate that our imitation attack requires less training data than the black-box substitute attacks, but achieves an attack success rate close to the white-box attack on unseen data with no query. ""","""This paper proposes to use a generative adversarial network to train a substitute that replicates (imitates) a learned model under attack. It then shows that the adversarial examples for the substitute can be effectively used to attack the learned model. The proposed approach leads to better success rates of attacking than other substitute-training approaches that require more training examples. The condition to get a well-trained imitation model is that a sufficient number of queries are obtained from the target model. This paper has valuable contributions by developing an imitation attacker. However, some key issues remain. In particular, I agree with R1 that the average number of queries per image is relatively high, even during training. In the rebuttal, the authors made the assumption that suppose their method could make an infinite number of queries for target models, which is unfortunately not realistic. Another point that I found confusing: at testing, I dont see how you can use the imitation model D to generate adversarial samples (D is a discriminative model, not a generator); it should be G, right? """
1775,"""A Gradient-Based Approach to Neural Networks Structure Learning""",[],"""Designing the architecture of deep neural networks (DNNs) requires human expertise and is a cumbersome task. One approach to automatize this task has been considering DNN architecture parameters such as the number of layers, the number of neurons per layer, or the activation function of each layer as hyper-parameters, and using an external method for optimizing it. Here we propose a novel neural network model, called Farfalle Neural Network, in which important architecture features such as the number of neurons in each layer and the wiring among the neurons are automatically learned during the training process. We show that the proposed model can replace a stack of dense layers, which is used as a part of many DNN architectures. It can achieve higher accuracy using significantly fewer parameters. ""","""This paper proposes a neural network architecture that represents each neuron with input and output embeddings. Experiments on CIFAR show that the proposed method outperforms baseline models with a fully connected layer. I like the main idea of the paper. However, I agree with R1 and R2 that experiments presented in the paper are not enough to convince readers of the benefit of the proposed method. In particular, I would like to see a more comprehensive set of results across a suite of datasets. It would be even better, although not necessary, if the authors apply this method on top of different base architectures in multiple domains. At the very least, the authors should run an experiment to compare the proposed approach with a feed forward network on a simple/toy classification dataset. I understand that these experiments require a lot of computational resources. The authors do not need to reach SotA, but they do need to provide more empirical evidence that the method is useful in practice. I also would like to see more discussions with regards to the computational cost of the proposed method. How much slower/faster is training/inference compared to a fully connected network? The writing of the paper can also be improved. There are many a few typos throughout the paper, even in the abstract. I recommend rejecting this paper for ICLR, but would encourage the authors to polish it and run a few more suggested experiments to strengthen the paper."""
1776,"""Intrinsic Motivation for Encouraging Synergistic Behavior""","['reinforcement learning', 'intrinsic motivation', 'synergistic', 'robot manipulation']","""We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks, which are tasks where multiple agents must work together to achieve a goal they could not individually. Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own. Thus, we propose to incentivize agents to take (joint) actions whose effects cannot be predicted via a composition of the predicted effect for each individual agent. We study two instantiations of this idea, one based on the true states encountered, and another based on a dynamics model trained concurrently with the policy. While the former is simpler, the latter has the benefit of being analytically differentiable with respect to the action taken. We validate our approach in robotic bimanual manipulation and multi-agent locomotion tasks with sparse rewards; we find that our approach yields more efficient learning than both 1) training with only the sparse reward and 2) using the typical surprise-based formulation of intrinsic motivation, which does not bias toward synergistic behavior. Videos are available on the project webpage: pseudo-url.""","""The authors address the important issue of exploration in reinforcement learning. In this case, they propose to use reward shaping to encourage joint-actions whose outcomes deviate from the sequential counterpart. Although the proposed intrinsic reward is targeted at a particular family of two-agent robotic tasks, one can imagine generalizing some of the ideas here to other multi-agent learning tasks. The reviewers agree that the paper is of interest to the ICLR audience."""
1777,"""Optimising Neural Network Architectures for Provable Adversarial Robustness""","['Provable adversarial robustness', 'Lipschitz neural networks', 'network architectures']","""Existing Lipschitz-based provable defences to adversarial examples only cover the L2 threat model. We introduce the first bound that makes use of Lipschitz continuity to provide a more general guarantee for threat models based on any p-norm. Additionally, a new strategy is proposed for designing network architectures that exhibit superior provable adversarial robustness over conventional convolutional neural networks. Experiments are conducted to validate our theoretical contributions, show that the assumptions made during the design of our novel architecture hold in practice, and quantify the empirical robustness of several Lipschitz-based adversarial defence methods.""","""The authors propose a novel method to estimate the Lipschitz constant of a neural network, and use this estimate to derive architectures that will have improved adversarial robustness. While the paper contains interesting ideas, the reviewers felt it was not ready for publication due to the following factors: 1) The novelty and significance of the bound derived by the authors is unclear. In particular, the bound used is coarse and likely to be loose, and hence is not likely to be useful in general. 2) The bound on adversarial risk seems of limited significance, since in practice, this can be estimated accurately based on the adversarial risk measured on the test set. 3) The paper is poorly organized with several typos and is hard to read in its present form. The reviewers were in consensus and the authors did not respond during the rebuttal phase. Therefore, I recommend rejection. However, all the reviewers found interesting ideas in the paper. Hence, I encourage the authors to consider the reviewers' feedback and submit a revised version to a future venue."""
1778,"""Trajectory representation learning for Multi-Task NMRDPs planning""","['Representation Learning', 'State Estimation', 'Non Markovian Decision Process']","""Expanding Non Markovian Reward Decision Processes (NMRDP) into Markov Decision Processes (MDP) enables the use of state of the art Reinforcement Learning (RL) techniques to identify optimal policies. In this paper an approach to exploring NMRDPs and expanding them into MDPs, without the prior knowledge of the reward structure, is proposed. The non Markovianity of the reward function is disentangled under the assumption that sets of similar and dissimilar trajectory batches can be sampled. More precisely, within the same batch, measuring the similarity between any couple of trajectories is permitted, although comparing trajectories from different batches is not possible. A modified version of the triplet loss is optimised to construct a representation of the trajectories under which rewards become Markovian.""","""The paper considers a special case of decision making processes with non-Markovian reward functions, where conditioned on an unobserved task-label the reward function becomes Markovian. A semi-supervised loss for learning trajectory embeddings is proposed. The approach is tested on a multi-task grid-world environment and ablation studies are performed. The reviewers mainly criticize the experiments in the paper. The environments studied are quite simple, leaving it uncertain if the approach still works in more complex settings. Apart from ablation results, no baselines were presented although the setting is similar to continual learning / multi-task learning (with unobserved task label) where prior work does exist. Furthermore, the writing was found to be partially lacking in clarity, although the authors addressed this in the rebuttal. The paper is somewhat below acceptance threshold, judging from reviews and my own reading, mostly due to lack of convincing experiments. Furthermore, the general setting considered in this paper seems quite specific, and therefore of limited impact."""
1779,"""Mutual Exclusivity as a Challenge for Deep Neural Networks""","['Cognitive Science', 'Deep Learning', 'Word Learning', 'Lifelong Learning']","""Strong inductive biases allow children to learn in fast and adaptable ways. Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another. In this paper, we investigate whether or not standard neural architectures have a ME bias, demonstrating that they lack this learning assumption. Moreover, we show that their inductive biases are poorly matched to lifelong learning formulations of classification and translation. We demonstrate that there is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge.""","""This paper presents an understudied bias known to exist in the learning patterns of children, but not present in trained NN models. This bias is the mutual exclusivity bias: if the child already knows the word for an object, they can recognize that the object is likely not the referent when a new word is introduced. So that is, the names of objects are mutually exclusive. The authors and reviewers had a healthy discussion. In particular, Reviewer 3 would have liked to have seen a new algorithm or model proposed, as well as an analysis of when ME would help or hurt. I hope these ideas can be incorporated into a future submission of this paper."""
1780,"""Tree-Structured Attention with Hierarchical Accumulation""","['Tree', 'Constituency Tree', 'Hierarchical Accumulation', 'Machine Translation', 'NMT', 'WMT', 'IWSLT', 'Text Classification', 'Sentiment Analysis']","""Incorporating hierarchical structures like constituency trees has been shown to be effective for various natural language processing (NLP) tasks. However, it is evident that state-of-the-art (SOTA) sequence-based models like the Transformer struggle to encode such structures inherently. On the other hand, dedicated models like the Tree-LSTM, while explicitly modeling hierarchical structures, do not perform as efficiently as the Transformer. In this paper, we attempt to bridge this gap with Hierarchical Accumulation to encode parse tree structures into self-attention at constant time complexity. Our approach outperforms SOTA methods in four IWSLT translation tasks and the WMT'14 English-German task. It also yields improvements over Transformer and Tree-LSTM on three text classification tasks. We further demonstrate that using hierarchical priors can compensate for data shortage, and that our model prefers phrase-level attentions over token-level attentions.""","""This paper incorporates tree-structured information about a sentence into how transformers process it. Results are improved. The paper is clear. Reviewers liked it. Clear accept."""
1781,"""Symmetric-APL Activations: Training Insights and Robustness to Adversarial Attacks""","['Activation function', 'Adaptive', 'Training', 'Robustness', 'Adversarial attack']","""Deep neural networks with learnable activation functions have shown superior performance over deep neural networks with fixed activation functions for many different problems. The adaptability of learnable activation functions adds expressive power to the model which results in better performance. Here, we propose a new learnable activation function based on Adaptive Piecewise Linear units (APL), which 1) gives equal expressive power to both the positive and negative halves on the input space and 2) is able to approximate any zero-centered continuous non-linearity in a closed interval. We investigate how the shape of the Symmetric-APL function changes during training and perform ablation studies to gain insight into the reason behind these changes. We hypothesize that these activation functions go through two distinct stages: 1) adding gradient information and 2) adding expressive power. Finally, we show that the use of Symmetric-APL activations can significantly increase the robustness of deep neural networks to adversarial attacks. Our experiments on both black-box and open-box adversarial attacks show that commonly-used architectures, namely Lenet, Network-in-Network, and ResNet-18 can be up to 51% more resistant to adversarial fooling by only using the proposed activation functions instead of ReLUs.""","""This work presents a learnable activation function based on adaptive piecewise linear (APL) units. Specifically, it extends APL to the symmetric form. The authors argue that S-APL activations can lead networks that are more robust to adversarial attacks. They present an empirical evaluation to prove the latter claim. However, the significance of these empirical results were not clear due to non-standard threat models used in black-box setting and the weak attacks used in open-box setting. The authors revised the submission and addressed some of the concerns the reviewers had. This effort was greatly appreciated by the reviewers. However, the issues related to the significance of robustness results remained unclear even after the revision. In particular, as pointed by R4, some of the revisions seem to be incomplete (Table 4). Also, the concern R4 had initially raised about non-standard black-box attacks was not addressed. Finally, some experimental details are still missing. While the revision indeed a great step, the adversarial experiments more clear and use more standard setup be convincing. """
1782,"""A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case""","['inductive bias', 'regularization', 'infinite-width networks', 'ReLU networks']","""We give a tight characterization of the (vectorized Euclidean) norm of weights required to realize a function \mathbb{R}^d$ as a single hidden-layer ReLU network with an unbounded number of units (infinite width), extending the univariate characterization of Savarese et al. (2019) to the multivariate case.""","""The article studies the set of functions expressed by a network with bounded parameters in the limit of large width, relating the required norm to the norm of a transform of the target function, and extending previous work that addressed the univariate case. The article contains a number of observations and consequences. The reviewers were quite positive about this article. """
1783,"""Anchor & Transform: Learning Sparse Representations of Discrete Objects""","['sparse representation learning', 'discrete inputs', 'natural language processing']","""Learning continuous representations of discrete objects such as text, users, and items lies at the heart of many applications including text and user modeling. Unfortunately, traditional methods that embed all objects do not scale to large vocabulary sizes and embedding dimensions. In this paper, we propose a general method, Anchor & Transform (ANT) that learns sparse representations of discrete objects by jointly learning a small set of anchor embeddings and a sparse transformation from anchor objects to all objects. ANT is scalable, flexible, end-to-end trainable, and allows the user to easily incorporate domain knowledge about object relationships (e.g. WordNet, co-occurrence, item clusters). ANT also recovers several task-specific baselines under certain structural assumptions on the anchors and transformation matrices. On text classification and language modeling benchmarks, ANT demonstrates stronger performance with fewer parameters as compared to existing vocabulary selection and embedding compression baselines.""","""The paper proposes a method to produce embeddings of discrete objects, jointly learning a small set of anchor embeddings and a sparse transformation from anchor objects to all the others. While the paper is well written, and proposes an interesting solution, the contribution seems rather incremental (as noted by several reviewers), considering the existing literature in the area. Also, after discussions the usefulness of the method remains a bit unclear - it seems some engineering (related to sparse operations) is still required to validate the viability of the approach. """
1784,"""TechKG: A Large-Scale Chinese Technology-Oriented Knowledge Graph""",['Chinese knowledge graph building'],"""Knowledge graph is a kind of valuable knowledge base which would benefit lots of AI-related applications. Up to now, lots of large-scale knowledge graphs have been built. However, most of them are non-Chinese and designed for general purpose. In this work, we introduce TechKG, a large scale Chinese knowledge graph that is technology-oriented. It is built automatically from massive technical papers that are published in Chinese academic journals of different research domains. Some carefully designed heuristic rules are used to extract high quality entities and relations. Totally, it comprises of over 260 million triplets that are built upon more than 52 million entities which come from 38 research domains. Our preliminary experiments indicate that TechKG has high adaptability and can be used as a dataset for many diverse AI-related applications.""","""This paper presents a large-scale automatically extracted knowledge base in Chinese which contains information about entities and their relations present in academic papers. The authors have collected several papers that come from around 38 different domains. As such this is a dataset creation paper where the authors have used existing methodologies to perform relation extraction in Chinese. After having read the reviews and followup replies by authors, the main criticisms of the paper still hold. In addition to the lack of technical contribution, I feel that the writing of the paper can be improved a lot, for example, I would like to see a table with some example entities and relations extracted. That said, with further improvements this paper could potentially be a good contribution to LREC which is focused on dataset creation. In its current form, I recommend the paper to be rejected."""
1785,"""Learning Generative Models using Denoising Density Estimators""","['generative probabilistic models', 'denoising autoencoders', 'neural density estimation']","""Learning generative probabilistic models that can estimate the continuous density given a set of samples, and that can sample from that density is one of the fundamental challenges in unsupervised machine learning. In this paper we introduce a new approach to obtain such models based on what we call denoising density estimators (DDEs). A DDE is a scalar function, parameterized by a neural network, that is efficiently trained to represent a kernel density estimator of the data. In addition, we show how to leverage DDEs to develop a novel approach to obtain generative models that sample from given densities. We prove that our algorithms to obtain both DDEs and generative models are guaranteed to converge to the correct solutions. Advantages of our approach include that we do not require specific network architectures like in normalizing flows, ODE solvers as in continuous normalizing flows, nor do we require adversarial training as in generative adversarial networks (GANs). Finally, we provide experimental results that demonstrate practical applications of our technique. ""","""The majority of reviewers suggest that this paper is not yet ready for publication. The idea presented in the paper is interesting, but there are concerns about what experiments are done, what papers are cited, and how polished the paper is. This all suggests that the paper could benefit from a bit more time to thoughtfully go through some of the criticisms, and make sure that everything reviewers suggest is covered."""
1786,"""Exploring the Correlation between Likelihood of Flow-based Generative Models and Image Semantics""","['flow-based generative models', 'out-of-distribution samples detection', 'likelihood robustness']",""" Among deep generative models, flow-based models, simply referred as \emph{flow}s in this paper, differ from other models in that they provide tractable likelihood. Besides being an evaluation metric of synthesized data, flows are supposed to be robust against out-of-distribution~(OoD) inputs since they do not discard any information of the inputs. However, it has been observed that flows trained on FashionMNIST assign higher likelihoods to OoD samples from MNIST. This counter-intuitive observation raises the concern about the robustness of flows' likelihood. In this paper, we explore the correlation between flows' likelihood and image semantics. We choose two typical flows as the target models: Glow, based on coupling transformations, and pixelCNN, based on autoregressive transformations. Our experiments reveal surprisingly weak correlation between flows' likelihoods and image semantics: the predictive likelihoods of flows can be heavily affected by trivial transformations that keep the image semantics unchanged, which we call semantic-invariant transformations~(SITs). We explore three SITs~(all small pixel-level modifications): image pixel translation, random noise perturbation, latent factors zeroing~(limited to flows using multi-scale architecture, e.g. Glow). These findings, though counter-intuitive, resonate with the fact that the predictive likelihood of a flow is the joint probability of all the image pixels. So flows' likelihoods, modeling on pixel-level intensities, is not able to indicate the existence likelihood of the high-level image semantics. We call for attention that it may be \emph{abuse} if we use the predictive likelihoods of flows for OoD samples detection.""","""This paper discusses the (lack of) correlation between the image semantics and the likelihood assigned by flow-based models, and implications for out-of-distribution (OOD) detection. The reviewers raised several important questions: 1) precise definition of OOD: definition of semantics vs typicality (cf. definition in Nalisnick et al. 2019 pointed by R1) There was a nice discussion between authors and the reviewers. At a high level, there was some agreement in the end, but lack of precise definition may cause confusion. I think adding a precise definition will add more clarity and improve the paper. 2) novelty: similar observations have been made in earlier papers cf. Nalisnick et al. 2018. R3 also pointed a recent paper by Ren et al. 2019 which showed that likelihood can be dominated by background pixels. Older work has shown that the likelihood and sample quality are not necessarily correlated. The reviewers appreciate that this paper provides additional evidence, but weren't convinced that the new observations in this paper qualified for a full paper. 3) experiments on more datasets Overall, while this paper explores an interesting direction, it's not ready for publication as is. I encourage the authors to revise the paper based on the feedback and submit to a different venue."""
1787,"""On Predictive Information Sub-optimality of RNNs""",[],"""Certain biological neurons demonstrate a remarkable capability to optimally compress the history of sensory inputs while being maximally informative about the future. In this work, we investigate if the same can be said of artificial neurons in recurrent neural networks (RNNs) trained with maximum likelihood. In experiments on two datasets, restorative Brownian motion and a hand-drawn sketch dataset, we find that RNNs are sub-optimal in the information plane. Instead of optimally compressing past information, they extract additional information that is not relevant for predicting the future. Overcoming this limitation may require alternative training procedures and architectures, or objectives beyond maximum likelihood estimation.""","""Nice start but unfortunately not ripe. The issues remarked by the reviewers were only partly addressed, and an improved version of the paper should be submitted at a future venue."""
1788,"""Domain-Independent Dominance of Adaptive Methods""",[],"""From a simplified analysis of adaptive methods, we derive AvaGrad, a new optimizer which outperforms SGD on vision tasks when its adaptability is properly tuned. We observe that the power of our method is partially explained by a decoupling of learning rate and adaptability, greatly simplifying hyperparameter search. In light of this observation, we demonstrate that, against conventional wisdom, Adam can also outperform SGD on vision tasks, as long as the coupling between its learning rate and adaptability is taken into account. In practice, AvaGrad matches the best results, as measured by generalization accuracy, delivered by any existing optimizer (SGD or adaptive) across image classification (CIFAR, ImageNet) and character-level language modelling (Penn Treebank) tasks. This later observation, alongside of AvaGrad's decoupling of hyperparameters, could make it the preferred optimizer for deep learning, replacing both SGD and Adam.""","""This paper proposes an adaptive gradient method for optimization in deep learning called AvaGrad. The authors argue that AvaGrad greatly simplifies hyperparameter search (over e.g. ADAM) and demonstrate competitive performance on benchmark image and text problems. In thorough reviews, thorough author response and discussion by the reviewers (which are are all appreciated) a few concerns about the work came to light and were debated. One reviewer was compelled by the author response to raise their recommendation to weak accept. However, none of the reviewers felt strongly enough to champion the paper for acceptance and even the reviewer assigning the highest score had reservations. A major issue of debate was the treatment of hyperparameters, i.e. that the authors tuned hyperparameters on a smaller problem and then assumed these would extrapolate to larger problems. In a largely empirical paper this does seem to be a significant concern. The space of adaptive optimizers for deep learning is a crowded one and thus the empirical (or theoretical) burden of proof of superiority is high. The authors state regarding a concurrent submission: ""when hyperparameters are properly tuned, echoing our results on this matter"", however, it seems that the reviewers disagree that the hyperparameters are indeed properly tuned in this paper. It's due to these remaining reservations that the recommendation is to reject. """
1789,"""Monotonic Multihead Attention""","['Simultaneous Translation', 'Transformer', 'Monotonic Attention']","""Simultaneous machine translation models start generating a target sequence before they have encoded or read the source sequence. Recent approach for this task either apply a fixed policy on transformer, or a learnable monotonic attention on a weaker recurrent neural network based structure. In this paper, we propose a new attention mechanism, Monotonic Multihead Attention (MMA), which introduced the monotonic attention mechanism to multihead attention. We also introduced two novel interpretable approaches for latency control that are specifically designed for multiple attentions. We apply MMA to the simultaneous machine translation task and demonstrate better latency-quality tradeoffs compared to MILk, the previous state-of-the-art approach. ""","""This paper extends previous models for monotonic attention to the multi-head attention used in Transformers, yielding ""Monotonic Multi-head Attention."" The proposed method achieves better latency-quality tradeoffs in simultaneous MT tasks in two language pairs. The proposed method is a relatively straightforward extension of the previous Hard and Infinite Lookback monotonic attention models. However, all reviewers seem to agree that this paper is a meaningful contribution to the task of simultaneously MT, and the revised version of the paper (along with the authors' comments) addressed most of the raised concerns. Therefore, I propose acceptance of this paper."""
1790,"""SNODE: Spectral Discretization of Neural ODEs for System Identification""","['Recurrent neural networks', 'system identification', 'neural ODEs']","""This paper proposes the use of spectral element methods \citep{canuto_spectral_1988} for fast and accurate training of Neural Ordinary Differential Equations (ODE-Nets; \citealp{Chen2018NeuralOD}) for system identification. This is achieved by expressing their dynamics as a truncated series of Legendre polynomials. The series coefficients, as well as the network weights, are computed by minimizing the weighted sum of the loss function and the violation of the ODE-Net dynamics. The problem is solved by coordinate descent that alternately minimizes, with respect to the coefficients and the weights, two unconstrained sub-problems using standard backpropagation and gradient methods. The resulting optimization scheme is fully time-parallel and results in a low memory footprint. Experimental comparison to standard methods, such as backpropagation through explicit solvers and the adjoint technique \citep{Chen2018NeuralOD}, on training surrogate models of small and medium-scale dynamical systems shows that it is at least one order of magnitude faster at reaching a comparable value of the loss function. The corresponding testing MSE is one order of magnitude smaller as well, suggesting generalization capabilities increase.""","""This work proposes using spectral element methods to speed up training of ODE Networks for system identification. The authors utilize truncated series of Legendre polynomials to analyze the dynamics and then conduct experiments that shows their proposed scheme achieves an order of magnitude improvement in training speed compared to baseline methods. Reviewers raised some concerns (e.g. empirical comparison against adjoint methods in the multi-agent example) or asked for clarifications (e.g. details of time sampling of the data). The authors adequately addressed most of these concerns via rebuttal response as well as revising the initial submission. At the end, all reviewers recommended for accept based on contributions of this work on improving training speed of ODE Networks. R4 hopes that some of the additional concerns that are not yet reflected in the current revision, be addressed in the camera ready version. """
1791,"""Generalized Domain Adaptation with Covariate and Label Shift CO-ALignment""","['Domain Adaptation', 'Label Shift', 'Covariate Shift']","""Unsupervised knowledge transfer has a great potential to improve the generalizability of deep models to novel domains. Yet the current literature assumes that the label distribution is domain-invariant and only aligns the covariate or vice versa. In this paper, we explore the task of Generalized Domain Adaptation (GDA): How to transfer knowledge across different domains in the presence of both covariate and label shift? We propose a covariate and label distribution CO-ALignment (COAL) model to tackle this problem. Our model leverages prototype-based conditional alignment and label distribution estimation to diminish the covariate and label shifts, respectively. We demonstrate experimentally that when both types of shift exist in the data, COAL leads to state-of-the-art performance on several cross-domain benchmarks.""","""This paper proposes a method to address the covariate shift and label shift problems simultaneously. The paper is an interesting attempt towards an important problem. However, Reviewers and AC commonly believe that the current version is not acceptable due to several major misconceptions and misleading presentations. In particular: - The novelty of the paper is not very significant. - The main concern of this work is that its shift assumption is not well justified. - The proposed method may be problematic by using the minimax entropy and self-training with resampling. - The presentation has many errors that require a full rewrite. Hence I recommend rejection."""
1792,"""GenDICE: Generalized Offline Estimation of Stationary Values""","['Off-policy Policy Evaluation', 'Reinforcement Learning', 'Stationary Distribution Correction Estimation', 'Fenchel Dual']","""An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain. In many real-world applications, access to the underlying transition operator is limited to a fixed set of data that has already been collected, without additional interaction with the environment being available. We show that consistent estimation remains possible in this scenario, and that effective estimation can still be achieved in important applications. Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions, derived from fundamental properties of the stationary distribution, and exploiting constraint reformulations based on variational divergence minimization. The resulting algorithm, GenDICE, is straightforward and effective. We prove the consistency of the method under general conditions, provide a detailed error analysis, and demonstrate strong empirical performance on benchmark tasks, including off-line PageRank and off-policy policy evaluation.""","""The authors develop a framework for off-policy value estimation for infinite horizon RL tasks, for estimating the stationary distribution of a Markov chain. Reviewers were uniformly impressed by the work, and satisfied by the author response. Congratulations! """
1793,"""Towards Stable and Efficient Training of Verifiably Robust Neural Networks""","['Robust Neural Networks', 'Verifiable Training', 'Certified Adversarial Defense']","""Training neural networks with verifiable robustness guarantees is challenging. Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architectures. Meanwhile, interval bound propagation (IBP) based training is efficient and significantly outperforms linear relaxation based methods on many tasks, yet it may suffer from stability issues since the bounds are much looser especially at the beginning of training. In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass. CROWN-IBP is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks. We conduct large scale experiments on MNIST and CIFAR datasets, and outperform all previous linear relaxation and bound propagation based certified defenses in L_inf robustness. Notably, we achieve 7.02% verified test error on MNIST at epsilon=0.3, and 66.94% on CIFAR-10 with epsilon=8/255.""","""This paper presents a method that hybridizes the strategies of linear programming and interval bound propagation to improve adversarial robustness. While some reviewers have concerns about the novelty of the underlying ideas presented, the method is an improvement to the SOTA in certifiable robustness, and has become a benchmark method within this class of defenses."""
1794,"""Multi-Dimensional Explanation of Reviews""","['deep learning', 'explanation', 'interpretability', 'reviews', 'multi-aspect', 'sentiment analysis', 'mask']","""Neural models achieved considerable improvement for many natural language processing tasks, but they offer little transparency, and interpretability comes at a cost. In some domains, automated predictions without justifications have limited applicability. Recently, progress has been made regarding single-aspect sentiment analysis for reviews, where the ambiguity of a justification is minimal. In this context, a justification, or mask, consists of (long) word sequences from the input text, which suffice to make the prediction. Existing models cannot handle more than one aspect in one training and induce binary masks that might be ambiguous. In our work, we propose a neural model for predicting multi-aspect sentiments for reviews and generates a probabilistic multi-dimensional mask (one per aspect) simultaneously, in an unsupervised and multi-task learning manner. Our evaluation shows that on three datasets, in the beer and hotel domain, our model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable.""","""This paper proposes a neural network model for predicting multi-aspect sentiment and generating masks that can justify the predictions. The positive aspects of the paper include improved results over the state-of-the-art. Reviewers found the technical novelty limited, and the experiments short of being fully convincing. After the author rebuttal, there were discussions between the reviewers and the AC, and the reviewers still thought the paper is not fully convincing given these limitations. I thank the authors for their submission and detailed responses to the reviewers and hope to see this research in a future venue."""
1795,"""WaveFlow: A Compact Flow-based Model for Raw Audio""","['flow-based models', 'raw audio', 'waveforms', 'speech synthesis', 'generative models']","""In this work, we present WaveFlow, a small-footprint generative flow for raw audio, which is trained with maximum likelihood without complicated density distillation and auxiliary losses as used in Parallel WaveNet. It provides a unified view of flow-based models for raw audio, including autoregressive flow (e.g., WaveNet) and bipartite flow (e.g., WaveGlow) as special cases. We systematically study these likelihood-based generative models for raw waveforms in terms of test likelihood and speech fidelity. We demonstrate that WaveFlow can synthesize high-fidelity speech and obtain comparable likelihood as WaveNet, while only requiring a few sequential steps to generate very long waveforms. In particular, our small-footprint WaveFlow has only 5.91M parameters and can generate 22.05kHz speech 15.39 times faster than real-time on a GPU without customized inference kernels.""","""The paper presented a unified framework for constructing likelihood-based generative models for raw audio. It demonstrated tradeoffs between memory footprint, generation speech and audio fidelity. The experimental justification with objective likelihood scores and subjective mean opinion scores are matching standard baselines. The main concern of this paper is the novelty and depth of the analysis. It could be much stronger if there're thorough analysis on the benefits and limitations of the unified approach and more insights on how to make the model much better. """
1796,"""On the ""steerability"" of generative adversarial networks""","['generative adversarial network', 'latent space interpolation', 'dataset bias', 'model generalization']","""An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to biased training data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise -- these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by ""steering"" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution. Thus, we conduct experiments to quantify the limits of GAN transformations and introduce techniques to mitigate the problem. Code is released on our project page: pseudo-url""","""All three reviewers agree that the paper provide an interesting study on the ability of generative adversarial networks to model geometric transformations and a simple practical approach to how such ability can be improved. Acceptance as a poster is recommended."""
1797,"""A Functional Characterization of Randomly Initialized Gradient Descent in Deep ReLU Networks""","['Inductive Bias', 'Generalization', 'Interpretability', 'Functional Characterization', 'Loss Surface', 'Initialization']","""Despite their popularity and successes, deep neural networks are poorly understood theoretically and treated as 'black box' systems. Using a functional view of these networks gives us a useful new lens with which to understand them. This allows us us to theoretically or experimentally probe properties of these networks, including the effect of standard initializations, the value of depth, the underlying loss surface, and the origins of generalization. One key result is that generalization results from smoothness of the functional approximation, combined with a flat initial approximation. This smoothness increases with number of units, explaining why massively overparamaterized networks continue to generalize well.""","""This article sets out to study the advantages of depth and overparametrization in neural networks from the perspective of function space, with results on univariate shallow fully connected ReLU networks and some experiments on deep networks. The article presents results on the concentration /dispersion of the slope / break point distribution of the functions represented by shallow univariate ReLU networks for parameters from various distributions. The reviewers found that the article contains interesting analysis, but that the presentation could be improved. The revision clarified some aspects and included some experiments illustrating breakpoint distributions in relation to the curvature of some target functions. However, the reviewers did not find this convincing enough, pointing out that the analysis focuses on a very restrictive setting and that that presentation of the article still could be improved. The discussion of implicit regularisation in section 2.4 seems promising, but it would benefit from a clearer motivation, background, and discussion. """
1798,"""Towards an Adversarially Robust Normalization Approach""","['robustness', 'BatchNorm', 'adversarial']","""Batch Normalization (BatchNorm) has shown to be effective for improving and accelerating the training of deep neural networks. However, recently it has been shown that it is also vulnerable to adversarial perturbations. In this work, we aim to investigate the cause of adversarial vulnerability of the BatchNorm. We hypothesize that the use of different normalization statistics during training and inference (mini-batch statistics for training and moving average of these values at inference) is the main cause of this adversarial vulnerability in the BatchNorm layer. We empirically proved this by experiments on various neural network architectures and datasets. Furthermore, we introduce Robust Normalization (RobustNorm) and experimentally show that it is not only resilient to adversarial perturbation but also inherit the benefits of BatchNorm.""",""" This paper presents an empirical analysis of the reasons behind BatchNorm vulnerability to adversarial inputs, based on the hypothesis that such vulnerability may be caused by using different statistics during the inference stage as compared to the training stage. While the paper is interesting and clearly written, reviewers point out insufficient empirical evaluation in order to make the claim more convincing."""
1799,"""Interpretations are useful: penalizing explanations to align neural networks with prior knowledge""","['explainability', 'deep learning', 'interpretability', 'computer vision']","""For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of proposed explainable deep learning methods stop at the first step, providing practitioners with insight into a model, but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods in order to increase the predictive accuracy of deep learning models. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by directly regularizing the provided explanations. Using explanations provided by contextual decomposition (CD) (Murdoch et al., 2018), we demonstrate the ability of our method to increase performance on an array of toy and real datasets.""","""The paper contains interesting ideas for giving simple explanations to a NN; however, the reviewers do not feel the contribution is sufficiently novel to merit acceptance."""
1800,"""Improving Adversarial Robustness Requires Revisiting Misclassified Examples""","['Robustness', 'Adversarial Defense', 'Adversarial Training']","""Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by imperceptible perturbations. A range of defense techniques have been proposed to improve DNN robustness to adversarial examples, among which adversarial training has been demonstrated to be the most effective. Adversarial training is often formulated as a min-max optimization problem, with the inner maximization for generating adversarial examples. However, there exists a simple, yet easily overlooked fact that adversarial examples are only defined on correctly classified (natural) examples, but inevitably, some (natural) examples will be misclassified during training. In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training. Specifically, we find that misclassified examples indeed have a significant impact on the final robustness. More surprisingly, we find that different maximization techniques on misclassified examples may have a negligible influence on the final robustness, while different minimization techniques are crucial. Motivated by the above discovery, we propose a new defense algorithm called {\em Misclassification Aware adveRsarial Training} (MART), which explicitly differentiates the misclassified and correctly classified examples during the training. We also propose a semi-supervised extension of MART, which can leverage the unlabeled data to further improve the robustness. Experimental results show that MART and its variant could significantly improve the state-of-the-art adversarial robustness.""","""This paper presents modifications to the adversarial training loss that yield improvements in adversarial robustness. While some reviewers were concerned by the lack of mathematical elegance in the proposed method, there is consensus that the proposed method clears a tough bar by increasing SOTA robustness on CIFAR-10. """
1801,"""Learning Key Steps to Attack Deep Reinforcement Learning Agents""","['deep reinforcement learning', 'adversarial attacks']","""Deep reinforcement learning agents are known to be vulnerable to adversarial attacks. In particular, recent studies have shown that attacking a few key steps is effective for decreasing the agent's cumulative reward. However, all existing attacking methods find those key steps with human-designed heuristics, and it is not clear how more effective key steps can be identified. This paper introduces a novel reinforcement learning framework that learns more effective key steps through interacting with the agent. The proposed framework does not require any human heuristics nor knowledge, and can be flexibly coupled with any white-box or black-box adversarial attack scenarios. Experiments on benchmark Atari games across different scenarios demonstrate that the proposed framework is superior to existing methods for identifying more effective key steps.""","""This paper considers adversarial attacks in deep reinforcement learning, and specifically focuses on the problem of identifying key steps to attack. The paper poses learning these key steps as an RL problem with a cost for the attacker choosing to attack. The reviewers agreed that this was an interesting problem setup, and the ability to learn these attacks without heuristics is promising. The main concern, which was felt was not adequately addressed in the rebuttals, was that the results need to be more than just competitive with heuristic approaches. The fact that the attack ratio cannot be reliably changed, even with varying pseudo-formula still presents a major hurdle in the evaluation of the proposed method. For the aforementioned reasons, I recommend rejecting this paper."""
1802,"""TWO-STEP UNCERTAINTY NETWORK FOR TASKDRIVEN SENSOR PLACEMENT""","['Uncertainty Estimation', 'Sensor Placement', 'Sequential Control', 'Adaptive Sensing']","""Optimal sensor placement achieves the minimal cost of sensors while obtaining the prespecified objectives. In this work, we propose a framework for sensor placement to maximize the information gain called Two-step Uncertainty Network(TUN). TUN encodes an arbitrary number of measurements, models the conditional distribution of high dimensional data, and estimates the task-specific information gain at un-observed locations. Experiments on the synthetic data show that TUN outperforms the random sampling strategy and Gaussian Process-based strategy consistently.""","""This paper proposes a sensor placement strategy based on maximising the information gain. Instead of using Gaussian process, the authors apply neural nets as function approximators. A limited empirical evaluation is performed to assess the performance of the proposed strategy. The reviewers have raised several major issues, including the lack of novelty, clarity, and missing critical details in the exposition. The authors didnt address any of the raised concerns in the rebuttal. I will hence recommend rejection of this paper."""
1803,"""Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards""","['meta-learning', 'reinforcement learning', 'imitation learning']","""Imitation learning allows agents to learn complex behaviors from demonstrations. However, learning a complex vision-based task may require an impractical number of demonstrations. Meta-imitation learning is a promising approach towards enabling agents to learn a new task from one or a few demonstrations by leveraging experience from learning similar tasks. In the presence of task ambiguity or unobserved dynamics, demonstrations alone may not provide enough information; an agent must also try the task to successfully infer a policy. In this work, we propose a method that can learn to learn from both demonstrations and trial-and-error experience with sparse reward feedback. In comparison to meta-imitation, this approach enables the agent to effectively and efficiently improve itself autonomously beyond the demonstration data. In comparison to meta-reinforcement learning, we can scale to substantially broader distributions of tasks, as the demonstration reduces the burden of exploration. Our experiments show that our method significantly outperforms prior approaches on a set of challenging, vision-based control tasks.""","""The paper proposed a meta-learning approach that learns from demonstrations and subsequent RL tasks. The reviewers found this work interesting and promising. There have been some concerns regarding the clarity of presentation, which seems to be addressed in the revised version. Therefore, I recommend acceptance for this paper."""
1804,"""Unpaired Point Cloud Completion on Real Scans using Adversarial Training""","['point cloud completion', 'generative adversarial network', 'real scans']","""As 3D scanning solutions become increasingly popular, several deep learning setups have been developed for the task of scan completion, i.e., plausibly filling in regions that were missed in the raw scans. These methods, however, largely rely on supervision in the form of paired training data, i.e., partial scans with corresponding desired completed scans. While these methods have been successfully demonstrated on synthetic data, the approaches cannot be directly used on real scans in absence of suitable paired training data. We develop a first approach that works directly on input point clouds, does not require paired training data, and hence can directly be applied to real scans for scan completion. We evaluate the approach qualitatively on several real-world datasets (ScanNet, Matterport3D, KITTI), quantitatively on 3D-EPN shape completion benchmark dataset, and demonstrate realistic completions under varying levels of incompleteness. ""","""This paper presents an unsupervised method for completing point clouds obtained from real 3D scans based on GAN. Generally, the paper is well-organized, and its contributions and experimental supports are clearly presented, from which all reviewers got positive impressions. Although the technical contribution of the method seems marginal as it is essentially a combination of established methods, it well fits in a novel and practical application scenario, and its useful is convincingly demonstrated in intensive experiments. We conclude that the paper provides favorable insights covering the weakness in technical novelty, so Id like to recommend acceptance. """
1805,"""Frequency-based Search-control in Dyna""","['Model-based reinforcement learning', 'search-control', 'Dyna', 'frequency of a signal']","""Model-based reinforcement learning has been empirically demonstrated as a successful strategy to improve sample efficiency. In particular, Dyna is an elegant model-based architecture integrating learning and planning that provides huge flexibility of using a model. One of the most important components in Dyna is called search-control, which refers to the process of generating state or state-action pairs from which we query the model to acquire simulated experiences. Search-control is critical in improving learning efficiency. In this work, we propose a simple and novel search-control strategy by searching high frequency regions of the value function. Our main intuition is built on Shannon sampling theorem from signal processing, which indicates that a high frequency signal requires more samples to reconstruct. We empirically show that a high frequency function is more difficult to approximate. This suggests a search-control strategy: we should use states from high frequency regions of the value function to query the model to acquire more samples. We develop a simple strategy to locally measure the frequency of a function by gradient and hessian norms, and provide theoretical justification for this approach. We then apply our strategy to search-control in Dyna, and conduct experiments to show its property and effectiveness on benchmark domains.""","""The reviewers are unanimous in their evaluation of this paper, and I concur."""
1806,"""Learning to Transfer via Modelling Multi-level Task Dependency""","['multi-task learning', 'attention mechanism']","""Multi-task learning has been successful in modeling multiple related tasks with large, carefully curated labeled datasets. By leveraging the relationships among different tasks, multi-task learning framework can improve the performance significantly. However, most of the existing works are under the assumption that the predefined tasks are related to each other. Thus, their applications on real-world are limited, because rare real-world problems are closely related. Besides, the understanding of relationships among tasks has been ignored by most of the current methods. Along this line, we propose a novel multi-task learning framework - Learning To Transfer Via Modelling Multi-level Task Dependency, which constructed attention based dependency relationships among different tasks. At the same time, the dependency relationship can be used to guide what knowledge should be transferred, thus the performance of our model also be improved. To show the effectiveness of our model and the importance of considering multi-level dependency relationship, we conduct experiments on several public datasets, on which we obtain significant improvements over current methods.""","""In this work, the authors address a multi-task learning setting and propose to enhance the estimation of task dependency with an attention mechanism capturing sample-dependant measure of task relatedness. All reviewers and AC agree that the current manuscript lacks clarity and convincing empirical evaluations that clearly show the benefits of the proposed approach w.r.t. state-of-the-art methods. Specifically, the reviewers raised several important concerns that were viewed by AC as critical issues: (1) the empirical evaluations need to be significantly strengthened to show the benefits of the proposed methods over SOTA -- see R2s request to empirically compare with the related recent work [Taskonomy, 2018] and R4s request to compare with the work [End-to-end multi-task learning with attention, 2018]. R4 also suggested to include an ablation study to assess the benefits of the attention mechanism. Pleased to report that the authors addressed the ablation study in their rebuttal and confirmed that the proposed attention mechanism plays an important role in the performance of the proposed method. (2) All reviewers see an issue with the presentation clarity of the conceptual and technical contributions -- see R4s and R2s detailed comments and questions regarding technical contributions; see R3s and R4s comments that the distinction between the general task dependency and the data-driven dependency is either not significant or is not clearly articulated; finding better examples to illustrate the difference (instead of reiterating the current ones) would strengthen the clarity and conceptual contributions. A general consensus among reviewers and AC suggests, in its current state the manuscript is not ready for a publication. It needs more clarifications, empirical studies and polish to achieve the desired goal. """
1807,"""A Bilingual Generative Transformer for Semantic Sentence Embedding""","['sentence embedding', 'semantic similarity', 'multilingual', 'latent variables', 'vae']","""Semantic sentence embedding models take natural language sentences and turn them into vectors, such that similar vectors indicate similarity in the semantics between the sentences. Bilingual data offers a useful signal for learning such embeddings: properties shared by both sentences in a translation pair are likely semantic, while divergent properties are likely stylistic or language-specific. We propose a deep latent variable model that attempts to perform source separation on parallel sentences, isolating what they have in common in a latent semantic vector, and explaining what is left over with language-specific latent vectors. Our proposed approach differs from past work on semantic sentence encoding in two ways. First, by using a variational probabilistic framework, we introduce priors that encourage source separation, and can use our models posterior to predict sentence embeddings for monolingual data at test time. Second, we use high- capacity transformers as both data generating distributions and inference networks contrasting with most past work on sentence embeddings. In experiments, our approach substantially outperforms the state-of-the-art on a standard suite of se- mantic similarity evaluations. Further, we demonstrate that our approach yields the largest gains on more difficult subsets of test where simple word overlap is not a good indicator of similarity.""","""This paper presents a model for building sentence embeddings using a generative transformer model that encoders separately semantic aspects (that are common across languages) and language-specific aspects. The authors evaluate their embeddings in a non-parametric way (i.e., on STS tasks by measuring cosine similarity) and find their method to outperform other sentence embeddings methods. The main concern that both reviewers (and myself) have about this work relates to its evaluation part. While the authors present a set of very interesting difficult evaluation and probing splits aiming at quantifying the linguistic behaviour of their model, it is unsatisfying the fact that the authors do not evaluate their model extensively in standard classification embedding benchmarks (e.g., as in GLUE). The authors comment: [their model in producing embeddings] it isnt as strong when using classification for final predictions. This indicates that the embeddings learned by our approach may be most useful when no downstream training is possible. If this is true, why is it the case and isnt it quite restrictive? I think this work is interesting with a nice analysis but the current empirical results are borderline (yes, the model is better on STS, but this is quite limited of an idea compared to using these embeddings as features in a classification tasks). As such, I do not recommend this paper for acceptance but I do hope that authors will keep improving their method and will make it work in more general problems involving classification tasks."""
1808,"""Language GANs Falling Short""","['NLP', 'GAN', 'MLE', 'adversarial', 'text generation', 'temperature']","""Traditional natural language generation (NLG) models are trained using maximum likelihood estimation (MLE) which differs from the sample generation inference procedure. During training the ground truth tokens are passed to the model, however, during inference, the model instead reads its previously generated samples - a phenomenon coined exposure bias. Exposure bias was hypothesized to be a root cause of poor sample quality and thus many generative adversarial networks (GANs) were proposed as a remedy since they have identical training and inference. However, many of the ensuing GAN variants validated sample quality improvements but ignored loss of sample diversity. This work reiterates the fallacy of quality-only metrics and clearly demonstrate that the well-established technique of reducing softmax temperature can outperform GANs on a quality-only metric. Further, we establish a definitive quality-diversity evaluation procedure using temperature tuning over local and global sample metrics. Under this, we find that MLE models consistently outperform the proposed GAN variants over the whole quality-diversity space. Specifically, we find that 1) exposure bias appears to be less of an issue than the complications arising from non-differentiable, sequential GAN training; 2) MLE trained models provide a better quality/diversity trade-off compared to their GAN counterparts, all while being easier to train, easier to cross-validate, and less computationally expensive.""","""Main content: Blind review #1 summarizes it well: Recently many language GAN papers have been published to overcome the so called exposure bias, and demonstrated improvements in natural language generation in terms of sample quality, some works propose to assess the generation in terms of diversity, however, quality and diversity are two conflicting measures that are hard to meet. This paper is a groundbreaking work that proposes receiver operating curve or Pareto optimality for quality and diversity measures, and shows that simple temperature sweeping in MLE generates the best quality-diversity curves than all language GAN models through comprehensive experiments. It points out a good target that language GANs should aims at. -- Discussion: The main reservation was the originality of the idea of using temperature sweep in the softmax. However, it turns out this idea came from the authors in the first place, which they have not been able to state directly due to the anonymity requirement. Per the program chair's instruction to direct this to the area chair, I think this has been handled correctly. -- Recommendation and justification: This paper should be accepted. It provides readers with insight in that it illuminates a misconception of how important exposure bias has been assumed to be, and provides a less expensive MLE based way to train than GAN counterparts."""
1809,"""Neural Networks for Principal Component Analysis: A New Loss Function Provably Yields Ordered Exact Eigenvectors ""","['Principal Component Analysis', 'Autoencoder', 'Neural Network']","""In this paper, we propose a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs). Optimizing the standard L2 loss results in a decoder matrix that spans the principal subspace of the sample covariance of the data, but fails to identify the exact eigenvectors. This downside originates from an invariance that cancels out in the global map. Here, we prove that our loss function eliminates this issue, i.e. the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix. For this new loss, we establish that all local minima are global optima and also show that computing the new loss (and also its gradients) has the same order of complexity as the classical loss. We report numerical results on both synthetic simulations, and a real-data PCA experiment on MNIST (i.e., a 60,000 x784 matrix), demonstrating our approach to be practically applicable and rectify previous LAEs' downsides.""","""Quoting from R3: ""This paper proposes and analyzes a new loss function for linear autoencoders (LAEs) whose minima directly recover the principal components of the data. The core idea is to simultaneously solve a set of MSE LAE problems with tied weights and increasingly stringent masks on the encoder/decoder matrices."" With two weak acceptance recommendations and a recommendation for rejection, this paper is borderline in terms of its scores. The approach and idea are interesting. The main shortcoming of the paper, as highlighted by the reviewers, is that the approach and theoretical analysis are not properly motivated to solve an actual problem faced in real-world data. The approach does not provide a better algorithm for recovering the eigenvectors of the data, nor is it proposed as part of a learning framework to solve a real-world problem. Experiments are shown on synthetic data and MNIST. As a stand-alone theoretical result, it leaves open questions as to the proposed utility."""
1810,"""Exploration via Flow-Based Intrinsic Rewards""","['reinforcement learning', 'exploration', 'curiosity', 'optical flow', 'intrinsic rewards']","""Exploration bonuses derived from the novelty of observations in an environment have become a popular approach to motivate exploration for reinforcement learning (RL) agents in the past few years. Recent methods such as curiosity-driven exploration usually estimate the novelty of new observations by the prediction errors of their system dynamics models. In this paper, we introduce the concept of optical flow estimation from the field of computer vision to the RL domain and utilize the errors from optical flow estimation to evaluate the novelty of new observations. We introduce a flow-based intrinsic curiosity module (FICM) capable of learning the motion features and understanding the observations in a more comprehensive and efficient fashion. We evaluate our method and compare it with a number of baselines on several benchmark environments, including Atari games, Super Mario Bros., and ViZDoom. Our results show that the proposed method is superior to the baselines in certain environments, especially for those featuring sophisticated moving patterns or with high-dimensional observation spaces.""","""This paper proposes a method for improving exploration by implementing intrinsic rewards based on optical flow prediction error. The approach was evaluated on several Atari games, Super Mario, and VizDoom. There are several strengths to this work, including the fact that it comes with open source code, and several reviewers agree its an interesting approach. R1 thought it was well-written and quite easy to follow. I also commend the authors for being so responsive with comments and for adding the new experiments that were asked for. The main issue that reviewers pointed out, and which I am also concerned about, is how these particular games were chosen. R3 points out that these 5 Atari games are not known for being hard exploration games. Authors did conduct further experiments on 6 Atari games suggested by the reviewer, but the results didnt show significant improvement over baselines. I appreciate the authors argument that every method has its niche, but the environments chosen must still be properly motivated. I would have preferred to see results on all Atari games, along with detailed and quantitative analysis into why FICM fails on specific tasks. For instance, they state in the rebuttal that The selection criteria of our environments is determined by the relevance of motions of the foreground and background components (including the controllable agent and the uncontrollable objects) to the performance (i.e., obtainable scores) of the agent. But it doesnt seem like this was assessed in any quantitative way. Without this understanding, itd be difficult for an outsider to know which tasks are appropriate to use with this approach. I urge the authors to focus on expanding and quantifying the work they depict in Figure 8, which, although it begins to illuminate why FICM works for some games and not others, is still only a qualitative snapshot of 2 games. I still think this is a very interesting approach and look forward to future versions of this paper."""
1811,"""Learning from Label Proportions with Consistency Regularization""","['learning from label proportions', 'consistency regularization', 'semi-supervised learning']","""The problem of learning from label proportions (LLP) involves training classifiers with weak labels on bags of instances, rather than strong labels on individual instances. The weak labels only contain the label proportions of each bag. The LLP problem is important for many practical applications that only allow label proportions to be collected because of data privacy or annotation costs, and has recently received lots of research attention. Most existing works focus on extending supervised learning models to solve the LLP problem, but the weak learning nature makes it hard to further improve LLP performance with a supervised angle. In this paper, we take a different angle from semi-supervised learning. In particular, we propose a novel model inspired by consistency regularization, a popular concept in semi-supervised learning that encourages the model to produce a decision boundary that better describes the data manifold. With the introduction of consistency regularization, we further extend our study to non-uniform bag-generation and validation-based parameter-selection procedures that better match practical needs. Experiments not only justify that LLP with consistency regularization achieves superior performance, but also demonstrate the practical usability of the proposed procedures.""","""After reading the author's rebuttal, the reviewer still hold that the main contribution is just the simple combination of already known losses. And the paper need to pay more attention on the clarity of the paper."""
1812,"""Poly-encoders: Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring""",[],"""The use of deep pre-trained transformers has led to remarkable progress in a number of applications (Devlin et al., 2018). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on four tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks.""","""The paper presents a new architecture that achieves the advantages of both Bi-encoder and Cross-encoder architectures. The proposed idea is reasonable and well-motivated, and the paper is clearly written. The experimental results on retrieval and dialog tasks are strong, achieving high accuracy while the computational efficiency is orders of magnitude smaller than Cross-encoder. All reviewers recommend acceptance of the paper and this AC concurs."""
1813,"""PDP: A General Neural Framework for Learning SAT Solvers""","['Neural SAT solvers', 'Graph Neural Networks', 'Neural Message Passing', 'Unsupervised Learning', 'Neural Decimation']","""There have been recent efforts for incorporating Graph Neural Network models for learning fully neural solvers for constraint satisfaction problems (CSP) and particularly Boolean satisfiability (SAT). Despite the unique representational power of these neural embedding models, it is not clear to what extent they actually learn a search strategy vs. statistical biases in the training data. On the other hand, by fixing the search strategy (e.g. greedy search), one would effectively deprive the neural models of learning better strategies than those given. In this paper, we propose a generic neural framework for learning SAT solvers (and in general any CSP solver) that can be described in terms of probabilistic inference and yet learn search strategies beyond greedy search. Our framework is based on the idea of propagation, decimation and prediction (and hence the name PDP) in graphical models, and can be trained directly toward solving SAT in a fully unsupervised manner via energy minimization, as shown in the paper. Our experimental results demonstrate the effectiveness of our framework for SAT solving compared to both neural and the industrial baselines.""","""The authors present a neural framework for learning SAT solvers that takes the form of probabilistic inference. The whole process consists of propagation, decimation and prediction steps, so unlike other prior work like Neurosat that learns to predict sat/unsat and only through this binary signal, this work presents a more modular approach, which is learned via energy minimization and it aims at predicting assignments (the assignments are soft which give rise to a differentiable loss). On the other hand, at test time the method returns the first soft assignment whose hard version (obtained by thresholding) satisfies the formula. Reviewers found this to be an interesting submission, however there were some concerns regarding (among others) comparison to previous work. Overall, this submission has generated a lot of discussion among the reviewers (also regarding how this model actually operates) and it is currently borderline without a strong champion. Due to the concerns raised and the limited space in the conference's program, unfortunately I cannot recommend this work for acceptance. """
1814,"""PassNet: Learning pass probability surfaces from single-location labels. An architecture for visually-interpretable soccer analytics""","['fully convolutional neural networks', 'convolutional neural networks', 'sports analytics', 'interpretable machine learning', 'deep learning']","""We propose a fully convolutional network architecture that is able to estimate a full surface of pass probabilities from single-location labels derived from high frequency spatio-temporal data of professional soccer matches. The network is able to perform remarkably well from low-level inputs by learning a feature hierarchy that produces predictions at different sampling levels that are merged together to preserve both coarse and fine detail. Our approach presents an extreme case of weakly supervised learning where there is just a single pixel correspondence between ground-truth outcomes and the predicted probability map. By providing not just an accurate evaluation of observed events but also a visual interpretation of the results of other potential actions, our approach opens the door for spatio-temporal decision-making analysis, an as-yet little-explored area in sports. Our proposed deep learning architecture can be easily adapted to solve many other related problems in sports analytics; we demonstrate this by extending the network to learn to estimate pass-selection likelihood.""","""The paper proposes PassNet, which is an architecture that produces a 2D map of probability of successful completion of a soccer pass. The architecture has some similarities with UNet and has downsampling and upsampling modules with a set of skip-connections between them. The reviewers raised several issues: * Novelty compared to UNET * Lack of ablation studies * Uncertainty about what probabilities mean and issues regarding output interpretation. The authors have tried to address these concerns in their rebuttal and provided additional experiments. They also argue that the application area (sport analytics) of the paper is novel. Even though the application area is interesting and might lead to new problems, this paper did not get enough support from reviewers to justify its acceptance."""
1815,"""Robust Instruction-Following in a Situated Agent via Transfer-Learning from Text""","['agent', 'language', '3D', 'simulation', 'policy', 'instruction', 'transfer']","""Recent work has described neural-network-based agents that are trained to execute language-like commands in simulated worlds, as a step towards an intelligent agent or robot that can be instructed by human users. However, the instructions that such agents are trained to follow are typically generated from templates (by an environment simulator), and do not reflect the varied or ambiguous expressions used by real people. We address this issue by integrating language encoders that are pretrained on large text corpora into a situated, instruction-following agent. In a procedurally-randomized first-person 3D world, we first train agents to follow synthetic instructions requiring the identification, manipulation and relative positioning of visually-realistic objects models. We then show how these abilities can transfer to a context where humans provide instructions in natural language, but only when agents are endowed with language encoding components that were pretrained on text-data. We explore techniques for integrating text-trained and environment-trained components into an agent, observing clear advantages for the fully-contextual phrase representations computed by the well-known BERT model, and additional gains by integrating a self-attention operation optimized to adapt BERT's representations for the agent's tasks and environment. These results bridge the gap between two successful strands of recent AI research: agent-centric behavior optimization and text-based representation learning. ""","""The paper examines whether it is possible to train agents to follow synthetic instructions that perceives and modifies a 3D scene based on a first-person viewpoint, and have the trained agents follow natural language instructions provided by humans. The paper received two weak rejects and one weak accept. The main concerns voiced by the reviewers are: 1. Lack of variety in natural language One of the key claims of the paper is that previous work on instruction following can only handle instructions generated from templates and cannot handle ambiguous expressions used by real people, and that the contribution of this work is that it can handle such expresssions. However, as pointed out by R1, the language considered in this work is very simplistic in form (close to being template based) with the main variation coming from synonyms. Even the free-form natural instructions that are collected, are done so with very specific instructions that restrict diversity of language (e.g don't use colors or other properties of the object). R1 also point out that there are prior work that handles much more diverse language. 2. Limited technical novelty and questions about how much the proposed CMSA method actually contribute 3. Overclaims and lack of precision when using terminology There is concern that the task that is addressed is not actually that complex. The environments are simple (with just 2 objects) and not that realistic. Tackling 2 tasks is barely ""multi-task"", and commonly, ""manipulation"" refers to low-level grasping/picking up of objects which is not how it is used here. While the paper has many strong elements and is mostly well written, considerable improvements still need to be made for the paper to have claims it can support. It is currently below the bar for acceptance. 
The authors are encouraged to improve their paper and resubmit to an appropriate venue. """
1816,"""Causal Induction from Visual Observations for Goal Directed Tasks""","['meta-learning', 'causal reasoning', 'policy learning']","""Causal reasoning has been an indispensable capability for humans and other intelligent animals to interact with the physical world. In this work, we propose to endow an artificial agent with the capability of causal reasoning for completing goal-directed tasks. We develop learning-based approaches to inducing causal knowledge in the form of directed acyclic graphs, which can be used to contextualize a learned goal-conditional policy to perform tasks in novel environments with latent causal structures. We leverage attention mechanisms in our causal induction model and goal-conditional policy, enabling us to incrementally generate the causal graph from the agent's visual observations and to selectively use the induced graph for determining actions. Our experiments show that our method effectively generalizes towards completing new tasks in novel environments with previously unseen causal structures.""","""The submission presents an approach to uncovering causal relations in an environment via interaction. The topic is interesting and the work is timely. However, the experimental setting is quite simplistic and the approach makes strong assumptions that limit its applicability. The reviewers are split. R2 raised their rating from 3 to 6 following the authors' responses and revision, but R1 maintained their rating of 3 and posted a response that justifies this position. In light of the limitations of the work, the AC recommends against accepting the submission."""
1817,"""Feature Selection using Stochastic Gates""","['Feature selection', 'classification', 'regression', 'survival analysis']","""Feature selection problems have been extensively studied in the setting of linear estimation, for instance LASSO, but less emphasis has been placed on feature selection for non-linear functions. In this study, we propose a method for feature selection in high-dimensional non-linear function estimation problems. The new procedure is based on directly penalizing the pseudo-formula norm of features, or the count of the number of selected features. Our pseudo-formula based regularization relies on a continuous relaxation of the Bernoulli distribution, which allows our model to learn the parameters of the approximate Bernoulli distributions via gradient descent. The proposed framework simultaneously learns a non-linear regression or classification function while selecting a small subset of features. We provide an information-theoretic justification for incorporating Bernoulli distribution into our approach. Furthermore, we evaluate our method using synthetic and real-life data and demonstrate that our approach outperforms other embedded methods in terms of predictive performance and feature selection.""","""The authors propose a method for feature selection in non linear models by using an appropriate continuous relaxation of binary feature selection variables. The reviewers found that the paper contains several interesting methodological contributions. However, they thought that the foundations of the methodology make very strong assumptions. Moreover the experimental evaluation is lacking comparison with other methods for non linear feature selection such as that of Doquet et al and Chang et al."""
1818,"""What Can Neural Networks Reason About?""","['reasoning', 'deep learning theory', 'algorithmic alignment', 'graph neural networks']","""Neural networks have succeeded in many reasoning tasks. Empirically, these tasks require specialized network structures, e.g., Graph Neural Networks (GNNs) perform well on many such tasks, but less structured networks fail. Theoretically, there is limited understanding of why and when a network structure generalizes better than others, although they have equal expressive power. In this paper, we develop a framework to characterize which reasoning tasks a network can learn well, by studying how well its computation structure aligns with the algorithmic structure of the relevant reasoning process. We formally define this algorithmic alignment and derive a sample complexity bound that decreases with better alignment. This framework offers an explanation for the empirical success of popular reasoning models, and suggests their limitations. As an example, we unify seemingly different reasoning tasks, such as intuitive physics, visual question answering, and shortest paths, via the lens of a powerful algorithmic paradigm, dynamic programming (DP). We show that GNNs align with DP and thus are expected to solve these tasks. On several reasoning tasks, our theory is supported by empirical results.""","""This paper proposes a framework which qualifies how well given neural architectures can perform on reasoning tasks. From this, they show a number of interesting empirical results, including the ability of graph neural network architectures for learn dynamic programming. This substantial theoretical and empirical study impressed the reviewers, who strongly lean towards acceptance. My view is that this is exactly the sort of work we should be show-casing at the conference, both in terms of focus, and of quality. I am happy to recommend this for acceptance."""
1819,"""Octave Graph Convolutional Network""","['Graph Convolutional Networks', 'Octave Convolution', 'Graph Mining']","""Many variants of Graph Convolutional Networks (GCNs) for representation learning have been proposed recently and have achieved fruitful results in various domains. Among them, spectral-based GCNs are constructed via convolution theorem upon theoretical foundation from the perspective of Graph Signal Processing (GSP). However, despite most of them implicitly act as low-pass filters that generate smooth representations for each node, there is limited development on the full usage of underlying information from low-frequency. Here, we first introduce the octave convolution on graphs in spectral domain. Accordingly, we present Octave Graph Convolutional Network (OctGCN), a novel architecture that learns representations for different frequency components regarding to weighted filters and graph wavelets bases. We empirically validate the importance of low-frequency components in graph signals on semi-supervised node classification and demonstrate that our model achieves state-of-the-art performance in comparison with both spectral-based and spatial-based baselines.""","""Two reviewers are negative on this paper while the other one is slightly positive. Overall, the paper does not make the bar of ICLR and thus a reject is recommended."""
1820,"""Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization""","['credit assignment', 'sparse reward', 'policy optimization', 'sample efficiency']","""Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, it still often suffers from sparse reward tasks, which leads to poor sample efficiency during training. In this work, we propose a guided adaptive credit assignment method to do effectively credit assignment for policy gradient methods. Motivated by entropy regularized policy optimization, our method extends the previous credit assignment methods by introducing more general guided adaptive credit assignment(GACA). The benefit of GACA is a principled way of utilizing off-policy samples. The effectiveness of proposed algorithm is demonstrated on the challenging \textsc{WikiTableQuestions} and \textsc{WikiSQL} benchmarks and an instruction following environment. The task is generating action sequences or program sequences from natural language questions or instructions, where only final binary success-failure execution feedback is available. Empirical studies show that our method significantly improves the sample efficiency of the state-of-the-art policy optimization approaches.""","""The paper proposes a policy gradient algorithm related to entropy-regularized RL, that instead of the KL uses f-divergence to avoid mode collapse. The reviewers found many technical issues with the presentation of the method, and the evaluation. In particular, the experiments are conducted on particular program synthesis tasks and show small margin improvements, while the algorithm is motivated by general sparse reward RL. I recommend rejection at this time, but encourage the authors to take the feedback into account and resubmit an improved version elsewhere."""
1821,"""Continual Learning with Gated Incremental Memories for Sequential Data Processing""","['continual learning', 'recurrent neural networks', 'progressive networks', 'gating autoencoders', 'sequential data processing']","""The ability to learn over changing task distributions without forgetting previous knowledge, also known as continual learning, is a key enabler for scalable and trustworthy deployments of adaptive solutions. While the importance of continual learning is largely acknowledged in machine vision and reinforcement learning problems, this is mostly under-documented for sequence processing tasks. This work focuses on characterizing and quantitatively assessing the impact of catastrophic forgetting and task interference when dealing with sequential data in recurrent neural networks. We also introduce a general architecture, named Gated Incremental Memory, for augmenting recurrent models with continual learning skills, whose effectiveness is demonstrated through the benchmarks introduced in this paper.""","""This manuscript describes a continual learning approach where individual instances consist of sequences, such as language modeling. The paper consists of a definition of a problem setting, tasks in that problem setting, baselines (not based on existing continual learning approaches, which the authors argue is to highlight the need for such techniques, but with which the reviewers took issue), and a novel architecture. Reviews focused on the gravity of the contribution. R1 and R2, in particular, argued that the paper is written as though the problem/benchmark definition is the main contribution. R2 mentions that in spite of this, the methods section jumps directly into the candidate architecture. As mentioned above, several reviewers also took issue with the fact that existing CL techniques are not employed as baselines. The authors engaged with reviewers and promised updates, but did not take the opportunity to update their paper. As many of the reviewers' comments remain unaddressed and the authors' updates did not materialize, I recommend rejection, and encourage the authors to incorporate the feedback they have received in a future submission."""
1822,"""NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks""","['Machine Learning', 'Deep Learning', 'Interpretability', 'Feature Importance', 'Salience']","""The problem of explaining deep learning models, and model predictions generally, has attracted intensive interest recently. Many successful approaches forgo global approximations in order to provide more faithful local interpretations of the models behavior. LIME develops multiple interpretable models, each approximating a large neural network on a small region of the data manifold, and SP-LIME aggregates the local models to form a global interpretation. Extending this line of research, we propose a simple yet effective method, NormLIME, for aggregating local models into global and class-specific interpretations. A human user study strongly favored the class-specific interpretations created by NormLIME to other feature importance metrics. Numerical experiments employing Keep And Retrain (KAR) based feature ablation across various baselines (Random, Gradient-based, LIME, SHAP) confirms NormLIMEs effectiveness for recognizing important features.""","""The paper aims to extract the set of features explaining a class, from a trained DNN classifier. The proposed approach relies on LIME (Ribeiro et al. 2016), modified as follows: i) around a point x, a linearized sparse approximation of the classifier is found (as in LIME); ii) for a given class, the importance of a feature aggregates the relative absolute weight of this feature in the linearized sparse approximations above; iii) the explanation is made of the top features in terms of importance. This simple modification yields visual explanations that significantly better match the human perception than the SOTA competitors. The experimental setting based on the human evaluation via a Mechanical Turk setting is the second contribution of the approach. The feature importance measure is also assessed along a Keep and Retrain mechanism, showing that the approach selects actually relevant features in terms of prediction. Incidentally, it would be good to see the sensitivity of the method to parameter pseudo-formula (in Eq. 1). As noted by Rev#1, NormLIME is simple (and simplicity is a strength) and it demonstrates its effectiveness on the MNIST data. However, as noted by Rev#4, it is hard to assess the significance of the approach from this only dataset. It is understood that the Mechanical Turk-based assessment can only be used with a sufficiently simple problem. However, complementary experiments on ImageNet for instance, e.g., showing which pixels are retained to classify an image as a husky dog, would be much appreciated to confirm the merits and investigate the limitations of the approach. """
1823,"""Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently""","['nonconvex optimization', 'optimization landscape', 'overparametrization']","""It has been observed \citep{zhang2016understanding} that deep neural networks can memorize: they achieve 100\% accuracy on training data. Recent theoretical results explained such behavior in highly overparametrized regimes, where the number of neurons in each layer is larger than the number of training samples. In this paper, we show that neural networks can be trained to memorize training data perfectly in a mildly overparametrized regime, where the number of parameters is just a constant factor more than the number of training samples, and the number of neurons is much smaller.""","""The paper studies the amount of over-parameterization needed for a quadratic 2 /3 layer neural network to memorize a separable training data set with arbitrary labels. While the reviewers agree that this paper contains interesting results, the review process uncovered highly related prior work, which requires a major revision to put the current paper into perspective and generally various clarifications. The paper will benefit from a revision and resubmission to another venue, and is in its current form not ready for acceptance at ICLR-2020."""
1824,"""Stochastic Gradient Descent with Biased but Consistent Gradient Estimators""","['Stochastic optimization', 'biased gradient estimator', 'graph convolutional networks']","""Stochastic gradient descent (SGD), which dates back to the 1950s, is one of the most popular and effective approaches for performing stochastic optimization. Research on SGD resurged recently in machine learning for optimizing convex loss functions and training nonconvex deep neural networks. The theory assumes that one can easily compute an unbiased gradient estimator, which is usually the case due to the sample average nature of empirical risk minimization. There exist, however, many scenarios (e.g., graphs) where an unbiased estimator may be as expensive to compute as the full gradient because training examples are interconnected. Recently, Chen et al. (2018) proposed using a consistent gradient estimator as an economic alternative. Encouraged by empirical success, we show, in a general setting, that consistent estimators result in the same convergence behavior as do unbiased ones. Our analysis covers strongly convex, convex, and nonconvex objectives. We verify the results with illustrative experiments on synthetic and real-world data. This work opens several new research directions, including the development of more efficient SGD updates with consistent estimators and the design of efficient training algorithms for large-scale graphs. ""","""This paper analyzes the convergence of SGD with a biased yet consistent gradient estimator. The main result is that this biased estimator results in the same convergence rate as does using unbiased ones. The main application is on learning representations on graphs (e.g., GCNs), and FastGCN is a closely related work. I agree that this paper has valuable contributions, but it can be further strengthened by considering the review comments, such as on the key assumptions."""
1825,"""Soft Token Matching for Interpretable Low-Resource Classification""","['low-resource classification', 'semantic matching', 'error boosting', 'text classification', 'natural language processing']","""We propose a model to tackle classification tasks in the presence of very little training data. To this aim, we introduce a novel matching mechanism to focus on elements of the input by using vectors that represent semantically meaningful concepts for the task at hand. By leveraging highlighted portions of the training data, a simple, yet effective, error boosting technique guides the learning process. In practice, it increases the error associated to relevant parts of the input by a given factor. Results on text classification tasks confirm the benefits of the proposed approach in both balanced and unbalanced cases, thus being of practical use when labeling new examples is expensive. In addition, the model is interpretable, as it allows for human inspection of the learned weights.""","""The authors focus on low-resource text classifications tasks augmented with ""rationales"". They propose a new technique that improves performance over existing approaches and that allows human inspection of the learned weights. Although the reviewers did not find any major faults with the paper, they were in consensus that the paper should be rejected at this time. Generally, the reviewers' reservations were in terms of novelty and extent of technical contribution. Given the large number of submissions this year, I am recommending rejection for this paper. """
1826,"""Learning To Explore Using Active Neural SLAM""","['Navigation', 'Exploration']","""This work presents a modular and hierarchical approach to learn policies for exploring 3D environments, called `Active Neural SLAM'. Our approach leverages the strengths of both classical and learning-based methods, by using analytical path planners with learned SLAM module, and global and local policies. The use of learning provides flexibility with respect to input modalities (in the SLAM module), leverages structural regularities of the world (in global policies), and provides robustness to errors in state estimation (in local policies). Such use of learning within each module retains its benefits, while at the same time, hierarchical decomposition and modular training allow us to sidestep the high sample complexities associated with training end-to-end policies. Our experiments in visually and physically realistic simulated 3D environments demonstrate the effectiveness of our approach over past learning and geometry-based approaches. The proposed model can also be easily transferred to the PointGoal task and was the winning entry of the CVPR 2019 Habitat PointGoal Navigation Challenge.""","""The paper presents a method for visual robot navigation in simulated environments. The proposed method combines several modules, such as mapper, global policy, planner, local policy for point-goal navigation. The overall approach is reasonable and the pipeline can be modularly trained. The experimental results on navigation tasks show strong performance, especially in generalization settings. """
1827,"""Differentiable Hebbian Consolidation for Continual Learning""","['continual learning', 'catastrophic forgetting', 'Hebbian learning', 'synaptic plasticity', 'neural networks']","""Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge. However, catastrophic forgetting poses a grand challenge for neural networks performing such learning process. Thus, neural networks that are deployed in the real world often struggle in scenarios where the data distribution is non-stationary (concept drift), imbalanced, or not always fully available, i.e., rare edge cases. We propose a Differentiable Hebbian Consolidation model which is composed of a Differentiable Hebbian Plasticity (DHP) Softmax layer that adds a rapid learning plastic component (compressed episodic memory) to the fixed (slow changing) parameters of the softmax output layer; enabling learned representations to be retained for a longer timescale. We demonstrate the flexibility of our method by integrating well-known task-specific synaptic consolidation methods to penalize changes in the slow weights that are important for each target task. We evaluate our approach on the Permuted MNIST, Split MNIST and Vision Datasets Mixture benchmarks, and introduce an imbalanced variant of Permuted MNIST --- a dataset that combines the challenges of class imbalance and concept drift. Our proposed model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting.""","""The reviewers agreed that this paper tackles an important problem, continual learning, with a method that is well motivated and interesting. The rebuttal was very helpful in terms of relating to other work. However, the empirical evaluation, while good, could be improved. In particular, it is not clear based on the evaluation to what extent more interesting continual learning problems can be tackled. We encourage the authors to continue pursuing this work."""
1828,"""Efficient Systolic Array Based on Decomposable MAC for Quantized Deep Neural Networks""",[],"""Deep Neural Networks (DNNs) have achieved high accuracy in various machine learning applications in recent years. As the recognition accuracy of deep learning applications increases, reducing the complexity of these neural networks and performing the DNN computation on embedded systems or mobile devices become an emerging and crucial challenge. Quantization has been presented to reduce the utilization of computational resources by compressing the input data and weights from floating-point numbers to integers with shorter bit-width. For practical power reduction, it is necessary to operate these DNNs with quantized parameters on appropriate hardware. Therefore, systolic arrays are adopted to be the major computation units for matrix multiplication in DNN accelerators. To obtain a better tradeoff between the precision/accuracy and power consumption, using parameters with various bit-widths among different layers within a DNN is an advanced quantization method. In this paper, we propose a novel decomposition strategy to construct a low-power decomposable multiplier-accumulator (MAC) for the energy efficiency of quantized DNNs. In the experiments, when 65% multiplication operations of VGG-16 are operated in shorter bit-width with at most 1% accuracy loss on the CIFAR-10 dataset, our decomposable MAC has 50% energy reduction compared with a non-decomposable MAC.""","""This paper presents an energy-efficient architecture for quantized deep neural networks based on decomposable multiplication using MACs. Although the proposed approach is shown to be somehow effective, two reviewers pointed out that the very similar idea was already proposed in the previous work, BitBlade [1]. As the authors did not submit a rebuttal to defend this critical point, Id like to recommend rejection. I recommend authors to discuss and clarify the difference from [1] in the future version of the paper. [1] Sungju Ryu, Hyungjun Kim, Wooseok Yi, Jae-Joon Kim. BitBlade: Area and Energy-Efficient Precision-Scalable Neural Network Accelerator with Bitwise Summation. DAC'2019 """
1829,"""Deep Bayesian Structure Networks""",[],"""Bayesian neural networks (BNNs) introduce uncertainty estimation to deep networks by performing Bayesian inference on network weights. However, such models bring the challenges of inference, and further BNNs with weight uncertainty rarely achieve superior performance to standard models. In this paper, we investigate a new line of Bayesian deep learning by performing Bayesian reasoning on the structure of deep neural networks. Drawing inspiration from the neural architecture search, we define the network structure as random weights on the redundant operations between computational nodes, and apply stochastic variational inference techniques to learn the structure distributions of networks. Empirically, the proposed method substantially surpasses the advanced deep neural networks across a range of classification and segmentation tasks. More importantly, our approach also preserves benefits of Bayesian principles, producing improved uncertainty estimation than the strong baselines including MC dropout and variational BNNs algorithms (e.g. noisy EK-FAC). ""","""The authors develop stochastic variational approaches to learn Bayesian ""structure distributions"" for neural networks. While the reviewers appreciated the updates to the paper made by the authors, there will still a number of remaining concerns. There were particularly concerns about the clarity of the paper (remarking on informality of language and lack of changes in the revision with respect to comments in the original review), and the fairness of comparisons. Regarding comparisons, one reviewer comments: ""I do not agree that the comparison with DARTS is fair because the authors remove the options for retraining in both DARTS and DBSN. The reason DARTS trains using one half of the data and validate on the other is that it includes a retraining phase where all data is used. Therefore fair comparison should use the same procedure as DARTS (including a retraining phrase). At the very least, to compare methods without retraining, results of DARTS with more data (e.g., 80%) for training should be reported."" The authors are encouraged to continue with this work, carefully accounting for reviewer comments in future revisions."""
1830,"""A Closer Look at the Optimization Landscapes of Generative Adversarial Networks""","['Deep Learning', 'Generative models', 'GANs', 'Optimization', 'Visualization']","""Generative adversarial networks have been very successful in generative modeling, however they remain relatively challenging to train compared to standard deep neural networks. In this paper, we propose new visualization techniques for the optimization landscapes of GANs that enable us to study the game vector field resulting from the concatenation of the gradient of both players. Using these visualization techniques we try to bridge the gap between theory and practice by showing empirically that the training of GANs exhibits significant rotations around LSSP, similar to the one predicted by theory on toy examples. Moreover, we provide empirical evidence that GAN training seems to converge to a stable stationary point which is a saddle point for the generator loss, not a minimum, while still achieving excellent performance.""","""This is an interesting contribution that sheds some light on a well-studied but still poorly understood problem. I think it might be of interest to the community."""
1831,"""The Logical Expressiveness of Graph Neural Networks""","['Graph Neural Networks', 'First Order Logic', 'Expressiveness']","""The ability of graph neural networks (GNNs) for distinguishing nodes in graphs has been recently characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of which Boolean node classifiers (i.e., functions classifying nodes in graphs as true or false) can be expressed by GNNs. We tackle this problem by focusing on Boolean classifiers expressible as formulas in the logic FOC2, a well-studied fragment of first order logic. FOC2 is tightly related to the WL test, and hence to GNNs. We start by studying a popular class of GNNs, which we call AC-GNNs, in which the features of each node in the graph are updated, in successive layers, only in terms of the features of its neighbors. We show that this class of GNNs is too weak to capture all FOC2 classifiers, and provide a syntactic characterization of the largest subclass of FOC2 classifiers that can be captured by AC-GNNs. This subclass coincides with a logic heavily used by the knowledge representation community. We then look at what needs to be added to AC-GNNs for capturing all FOC2 classifiers. We show that it suffices to add readout functions, which allow to update the features of a node not only in terms of its neighbors, but also in terms of a global attribute vector. We call GNNs of this kind ACR-GNNs. We experimentally validate our findings showing that, on synthetic data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.""","""The paper focuses on characterizing the expressiveness of graph neural networks. The reviewers were satisfied that the authors answered their questions suffciiently and uniformly agree that this is a strong paper that should be accepted."""
1832,"""On Understanding Knowledge Graph Representation""","['knowledge graphs', 'word embedding', 'representation learning']","""Many methods have been developed to represent knowledge graph data, which implicitly exploit low-rank latent structure in the data to encode known information and enable unknown facts to be inferred. To predict whether a relationship holds between entities, their embeddings are typically compared in the latent space following a relation-specific mapping. Whilst link prediction has steadily improved, the latent structure, and hence why such models capture semantic information, remains unexplained. We build on recent theoretical interpretation of word embeddings as a basis to consider an explicit structure for representations of relations between entities. For identifiable relation types, we are able to predict properties and justify the relative performance of leading knowledge graph representation methods, including their often overlooked ability to make independent predictions.""","""The paper proposes a set of conditions that enable a mapping from word embeddings to relation embeddings in knowledge graphs. Then, using recent results about pointwise mutual information word embeddings, the paper provides insights to the latent space of relations, enabling a categorization of relations of entities in a knowledge graph. Empirical experiments on recent knowledge graph models (TransE, DistMult, TuckER and MuRE) are interpreted in light of the predictions coming from the proposed set of conditions. The authors responded to reviewer comments well, providing significant updates during the discussion period. Unfortunately, the reviewers did not engage further after their original reviews, and so it is hard to tell whether they agreed that the changes resolved all their questions. Overall, the paper provides much needed analysis for understanding of the latent space of relations on knowledge graphs. Unfortunately, the original submission did not clearly present the ideas, and it is unclear whether the updated version addresses all the concerns. The paper in its current state is therefore not yet suitable for publication at ICLR."""
1833,"""Cross Domain Imitation Learning""","['Imitation Learning', 'Domain Adaptation', 'Reinforcement Learning', 'Zeroshot Learning', 'Machine Learning', 'Artificial Intelligence']","""We study the question of how to imitate tasks across domains with discrepancies such as embodiment and viewpoint mismatch. Many prior works require paired, aligned demonstrations and an additional RL procedure for the task. However, paired, aligned demonstrations are seldom obtainable and RL procedures are expensive. In this work, we formalize the Cross Domain Imitation Learning (CDIL) problem, which encompasses imitation learning in the presence of viewpoint and embodiment mismatch. Informally, CDIL is the process of learning how to perform a task optimally, given demonstrations of the task in a distinct domain. We propose a two step approach to CDIL: alignment followed by adaptation. In the alignment step we execute a novel unsupervised MDP alignment algorithm, Generative Adversarial MDP Alignment (GAMA), to learn state and action correspondences from unpaired, unaligned demonstrations. In the adaptation step we leverage the correspondences to zero-shot imitate tasks across domains. To describe when CDIL is feasible via alignment and adaptation, we introduce a theory of MDP alignability. We experimentally evaluate GAMA against baselines in both embodiment and viewpoint mismatch scenarios where aligned demonstrations dont exist and show the effectiveness of our approach.""","""The authors propose a novel approach for imitation learning in settings where demonstrations are unaligned with the task (e.g., differ in terms of state and action space). The proposed approach consists of alignment and adaptation steps and theoretical insights are provided on whether given MDPs can be aligned. Reviewers were positive about the ideas presented in the paper, and several requests for clarification were well addressed by the authors during the rebuttal phase. Key evaluation issues remained unresolved. In particular, it was unclear to what degree performance differences were purely caused by issues in alignment, and reviewers did not see sufficient evidence to support claims about performance on the full cross domain learning setting."""
1834,"""Neural Epitome Search for Architecture-Agnostic Network Compression""","['Network Compression', 'Classification', 'Deep Learning', 'Weights Sharing']","""Traditional compression methods including network pruning, quantization, low rank factorization and knowledge distillation all assume that network architectures and parameters should be hardwired. In this work, we propose a new perspective on network compression, i.e., network parameters can be disentangled from the architectures. From this viewpoint, we present the Neural Epitome Search (NES), a new neural network compression approach that learns to find compact yet expressive epitomes for weight parameters of a specified network architecture end-to-end. The complete network to compress can be generated from the learned epitome via a novel transformation method that adaptively transforms the epitomes to match shapes of the given architecture. Compared with existing compression methods, NES allows the weight tensors to be independent of the architecture design and hence can achieve a good trade-off between model compression rate and performance given a specific model size constraint. Experiments demonstrate that, on ImageNet, when taking MobileNetV2 as backbone, our approach improves the full-model baseline by 1.47% in top-1 accuracy with 25% MAdd reduction and AutoML for Model Compression (AMC) by 2.5% with nearly the same compression ratio. Moreover, taking EfficientNet-B0 as baseline, our NES yields an improvement of 1.2% but are with 10% less MAdd. In particular, our method achieves a new state-of-the-art results of 77.5% under mobile settings (<350M MAdd). Code will be made publicly available.""","""The paper proposed a novel way to compress arbitrary networks by learning epitiomes and corresponding transformations of them to reconstruct the original weight tensors. The idea is very interesting and the paper presented good experimental validations of the proposed method on state-of-the-art models and showed good MAdd reduction. The authors also put a lot of efforts addressing the concerns of all the reviewers by improving the presentation of the paper, which although can still be further improved, and adding more explanations and validations on the proposed method. Although there's still concerns on whether the reduction of MAdd really transforms to computation reduction, all the reviewers agreed the paper is interesting and useful and further development of such work would be useful too. """
1835,"""Stiffness: A New Perspective on Generalization in Neural Networks""","['stiffness', 'gradient alignment', 'critical scale']","""We investigate neural network training and generalization using the concept of stiffness. We measure how stiff a network is by looking at how a small gradient step on one example affects the loss on another example. In particular, we study how stiffness depends on 1) class membership, 2) distance between data points in the input space, 3) training iteration, and 4) learning rate. We experiment on MNIST, FASHION MNIST, and CIFAR-10 using fully-connected and convolutional neural networks. Our results demonstrate that stiffness is a useful concept for diagnosing and characterizing generalization. We observe that small learning rates reliably lead to higher stiffness at a given epoch as well as at a given training loss. In addition, we measure how stiffness between two data points depends on their mutual input-space distance, and establish the concept of a dynamical critical length that characterizes the distance over which datapoints react similarly to gradient updates. The dynamical critical length decreases with training and the higher the learning rate, the smaller the critical length.""","""While there was some support for the ideas presented, the majority of reviewers felt that this submission is not ready for publication at ICLR in its present form. Concerns raised include lack of sufficient motivation for the approach, and problems with clarity of the exposition."""
1836,"""Are Few-shot Learning Benchmarks Too Simple ?""","['few-shot', 'classification', 'meta-learning', 'benchmark', 'omniglot', 'miniimagenet', 'meta-dataset', 'learning to cluster', 'learning', 'cluster', 'unsupervised']","""We argue that the widely used Omniglot and miniImageNet benchmarks are too simple because their class semantics do not vary across episodes, which defeats their intended purpose of evaluating few-shot classification methods. The class semantics of Omniglot is invariably characters and the class semantics of miniImageNet, object category. Because the class semantics are so similar, we propose a new method called Centroid Networks which can achieve surprisingly high accuracies on Omniglot and miniImageNet without using any labels at metaevaluation time. Our results suggest that those benchmarks are not adapted for supervised few-shot classification since the supervision itself is not necessary during meta-evaluation. The Meta-Dataset, a collection of 10 datasets, was recently proposed as a harder few-shot classification benchmark. Using our method, we derive a new metric, the Class Semantics Consistency Criterion, and use it to quantify the difficulty of Meta-Dataset. Finally, under some restrictive assumptions, we show that Centroid Networks is faster and more accurate than a state-of-the-art learning-to-cluster method (Hsu et al., 2018). ""","""The paper is interested in assessing the difficulty of popular few-shot classification benchmarks (Omniglot and miniImageNet). A clustering-based meta-learning method is proposed (called Centroid Network), on which a metric is built (gap between the performance of Prototypical Networks and Centroid Networks). As noted by several reviewers, the proposed metric (critical for the paper) is however not motivated enough, nor convincing enough - after discussion, the logic in the metric reasoning seems to remain flawed. """
1837,"""BEYOND SUPERVISED LEARNING: RECOGNIZING UNSEEN ATTRIBUTE-OBJECT PAIRS WITH VISION-LANGUAGE FUSION AND ATTRACTOR NETWORKS""",['image understanding'],"""This paper handles a challenging problem, unseen attribute-object pair recognition, which asks a model to simultaneously recognize the attribute type and the object type of a given image while this attribute-object pair is not included in the training set. In the past years, the conventional classifier-based methods, which recognize unseen attribute-object pairs by composing separately-trained attribute classifiers and object classifiers, are strongly frustrated. Different from conventional methods, we propose a generative model with a visual pathway and a linguistic pathway. In each pathway, the attractor network is involved to learn the intrinsic feature representation to explore the inner relationship between the attribute and the object. With the learned features in both pathways, the unseen attribute-object pair is recognized by finding out the pair whose linguistic feature closely matches the visual feature of the given image. On two public datasets, our model achieves impressive experiment results, notably outperforming the state-of-the-art methods.""","""The paper focuses on attribute-object pairs image recognition, leveraging some novel ""attractor network"". At this stage, all reviewers agree the paper needs a lot of improvements in the writing. There are also concerns regarding (i) novelty: the proposed approach being two encoder-decoder networks; (ii) lack of motivation for such architecture (iii) possible flow in the approach (are the authors using test labels?) and (iv) weak experiments."""
1838,"""Using Logical Specifications of Objectives in Multi-Objective Reinforcement Learning""","['reinforcement learning', 'multi-objective', 'multi-task', 'propositional logic']","""In the multi-objective reinforcement learning (MORL) paradigm, the relative importance of each environment objective is often unknown prior to training, so agents must learn to specialize their behavior to optimize different combinations of environment objectives that are specified post-training. These are typically linear combinations, so the agent is effectively parameterized by a weight vector that describes how to balance competing environment objectives. However, many real world behaviors require non-linear combinations of objectives. Additionally, the conversion between desired behavior and weightings is often unclear. In this work, we explore the use of a language based on propositional logic with quantitative semantics--in place of weight vectors--for specifying non-linear behaviors in an interpretable way. We use a recurrent encoder to encode logical combinations of objectives, and train a MORL agent to generalize over these encodings. We test our agent in several grid worlds with various objectives and show that our agent can generalize to many never-before-seen specifications with performance comparable to single policy baseline agents. We also demonstrate our agent's ability to generate meaningful policies when presented with novel specifications and quickly specialize to novel specifications.""","""The reviewers generally agreed that the technical novelty of the work was limited, and the experimental evaluation was insufficient to make up for this, evaluating the method only on relatively simple toy tasks. As much, I do not think that the paper is ready for publication at this time."""
1839,"""TSInsight: A local-global attribution framework for interpretability in time-series data""","['Deep Learning', 'Representation Learning', 'Convolutional Neural Networks', 'Time-Series Analysis', 'Feature Importance', 'Visualization', 'Demystification']","""With the rise in employment of deep learning methods in safety-critical scenarios, interpretability is more essential than ever before. Although many different directions regarding interpretability have been explored for visual modalities, time-series data has been neglected with only a handful of methods tested due to their poor intelligibility. We approach the problem of interpretability in a novel way by proposing TSInsight where we attach an auto-encoder with a sparsity-inducing norm on its output to the classifier and fine-tune it based on the gradients from the classifier and a reconstruction penalty. The auto-encoder learns to preserve features that are important for the prediction by the classifier and suppresses the ones that are irrelevant i.e. serves as a feature attribution method to boost interpretability. In other words, we ask the network to only reconstruct parts which are useful for the classifier i.e. are correlated or causal for the prediction. In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations. We evaluated TSInsight along with other commonly used attribution methods on a range of different time-series datasets to validate its efficacy. Furthermore, we analyzed the set of properties that TSInsight achieves out of the box including adversarial robustness and output space contraction. The obtained results advocate that TSInsight can be an effective tool for the interpretability of deep time-series models.""","""Main content: Blind review #2 summarizes it well: The aim of this work is to improve interpretability in time series prediction. To do so, they propose to use a relatively post-hoc procedure which learns a sparse representation informed by gradients of the prediction objective under a trained model. In particular, given a trained next-step classifier, they propose to train a sparse autoencoder with a combined objective of reconstruction and classification performance (while keeping the classifier fixed), so as to expose which features are useful for time series prediction. Sparsity, and sparse auto-encoders, have been widely used for the end of interpretability. In this sense, the crux of the approach is very well motivated by the literature. -- Discussion: All reviews had difficulties understanding the significance and novelty, which appears to have in large part arisen from the original submission not having sufficiently contextualized the motivation and strengths of the approach (especially for readers not already specialized in this exact subarea). -- Recommendation and justification: The reviews are uniformly low, probably due to the above factors, and while the authors' revisions during the rebuttal period have improved the objections, there are so many strong submissions that it would be difficult to justify override the very low reviewer scores."""
1840,"""DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames""","['autonomous navigation', 'habitat', 'embodied AI', 'pointgoal navigation', 'reinforcement learning']","""We present Decentralized Distributed Proximal Policy Optimization (DD-PPO), a method for distributed reinforcement learning in resource-intensive simulated environments. DD-PPO is distributed (uses multiple machines), decentralized (lacks a centralized server), and synchronous (no computation is ever ""stale""), making it conceptually simple and easy to implement. In our experiments on training virtual robots to navigate in Habitat-Sim, DD-PPO exhibits near-linear scaling -- achieving a speedup of 107x on 128 GPUs over a serial implementation. We leverage this scaling to train an agent for 2.5 Billion steps of experience (the equivalent of 80 years of human experience) -- over 6 months of GPU-time training in under 3 days of wall-clock time with 64 GPUs. This massive-scale training not only sets the state of art on Habitat Autonomous Navigation Challenge 2019, but essentially ""solves"" the task -- near-perfect autonomous navigation in an unseen environment without access to a map, directly from an RGB-D camera and a GPS+Compass sensor. Fortuitously, error vs computation exhibits a power-law-like distribution; thus, 90% of peak performance is obtained relatively early (at 100 million steps) and relatively cheaply (under 1 day with 8 GPUs). Finally, we show that the scene understanding and navigation policies learned can be transferred to other navigation tasks -- the analog of ""ImageNet pre-training + task-specific fine-tuning"" for embodied AI. Our model outperforms ImageNet pre-trained CNNs on these transfer tasks and can serve as a universal resource (all models and code are publicly available). ""","""The authors present and implement a synchronous, distributed RL called Decentralized Distributed Proximal Policy Optimization. The proposed technique was validated for pointgoal visual navigation task on recently introduced Habitat challenge 2019 and got the state of art performance. Two reviews recommend this paper for acceptance with only some minor comments, such as revising the title. The Blind Review #2 has several major concerns about the implementation details. In the rebuttal, the authors provided the source code to make the results reproducible. Overall, the paper is well written with promising experimental results. I also recommend it for acceptance. """
1841,"""Black-Box Adversarial Attack with Transferable Model-based Embedding""","['adversarial examples', 'black-box attack', 'embedding']","""We present a new method for black-box adversarial attack. Unlike previous methods that combined transfer-based and scored-based methods by using the gradient or initialization of a surrogate white-box model, this new method tries to learn a low-dimensional embedding using a pretrained model, and then performs efficient search within the embedding space to attack an unknown target network. The method produces adversarial perturbations with high level semantic patterns that are easily transferable. We show that this approach can greatly improve the query efficiency of black-box adversarial attack across different target network architectures. We evaluate our approach on MNIST, ImageNet and Google Cloud Vision API, resulting in a significant reduction on the number of queries. We also attack adversarially defended networks on CIFAR10 and ImageNet, where our method not only reduces the number of queries, but also improves the attack success rate.""","""This paper proposes a new black-box adversarial attack approach which learns a low-dimensional embedding using a pretrained model and then performs efficient search in the embedding space to attack target networks. The proposed approach can produce perturbation with semantic patterns that are easily transferable and improve the query efficiency in black-box attacks. All reviewers are in support of the paper after author response. I am very happy to recommend accept. """
1842,"""Adversarial training with perturbation generator networks""","['Adversarial training', 'Generative model', 'Adaptive perturbation generator', 'Robust optimization']","""Despite the remarkable development of recent deep learning techniques, neural networks are still vulnerable to adversarial attacks, i.e., methods that fool the neural networks with perturbations that are too small for human eyes to perceive. Many adversarial training methods were introduced as to solve this problem, using adversarial examples as a training data. However, these adversarial attack methods used in these techniques are fixed, making the model stronger only to attacks used in training, which is widely known as an overfitting problem. In this paper, we suggest a novel adversarial training approach. In addition to the classifier, our method adds another neural network that generates the most effective adversarial perturbation by finding the weakness of the classifier. This perturbation generator network is trained to produce perturbations that maximize the loss function of the classifier, and these adversarial examples train the classifier with a true label. In short, the two networks compete with each other, performing a minimax game. In this scenario, attack patterns created by the generator network are adaptively altered to the classifier, mitigating the overfitting problem mentioned above. We theoretically proved that our minimax optimization problem is equivalent to minimizing the adversarial loss after all. Beyond this, we proposed an evaluation method that could accurately compare a wide-range of adversarial algorithms. Experiments with various datasets show that our method outperforms conventional adversarial algorithms. ""","""This paper proposes to use the GAN (i.e., minimax) framework for adversarial training, where another neural network was introduced to generate the most effective adversarial perturbation by finding the weakness of the classifier. The rebuttal was not fully convincing on why the proposed method should be superior to existing attacks."""
1843,"""BRIDGING ADVERSARIAL SAMPLES AND ADVERSARIAL NETWORKS""","['ADVERSARIAL SAMPLES', 'ADVERSARIAL NETWORKS']","""Generative adversarial networks have achieved remarkable performance on various tasks but suffer from sensitivity to hyper-parameters, training instability, and mode collapse. We find that this is partly due to gradient given by non-robust discriminator containing non-informative adversarial noise, which can hinder generator from catching the pattern of real samples. Inspired by defense against adversarial samples, we introduce adversarial training of discriminator on real samples that does not exist in classic GANs framework to make adversarial training symmetric, which can balance min-max game and make discriminator more robust. Robust discriminator can give more informative gradient with less adversarial noise, which can stabilize training and accelerate convergence. We validate the proposed method on image generation tasks with varied network architectures quantitatively. Experiments show that training stability, perceptual quality, and diversity of generated samples are consistently improved with small additional training computation cost.""","""This paper proposes incorporating adversarial training on real images to improve the stability of GAN training. The key idea relies on the observation that GAN training already implicitly does a form of adversarial training on the generated images and so this work proposes adding adversarial training on real images as well. In practice, adversarial training on real images is performed using FGSM and experiments are conducted on CelebA, CiFAR10, and LSUN reporting using standard generative metrics like FID. Initially all reviewers were in agreement that this work should not be accepted. However, in response to the discussion with the authors Reviewer 2 updated their score from weak reject to weak accept. The other reviewers recommendation remained unchanged. The core concerns of reviewers 3 and 1 is limited technical contribution and unconvincing experimental evidence. In particular, concerns were raised about the overlap with [1] from CVPR 2019. The authors argue that their work is different due to the focus on the unsupervised setting, however, this application distinction is minor and doesnt result in any major algorithmic changes. With respect to experiments, the authors do provide performance across multiple datasets and architectures which is encouraging, however, to distinguish this work it would have been helpful to provide further study and analysis into the aspects unique to this work -- such as the settings and type of adversarial attack (as mentioned by R3) and stability across GAN variants. After considering all reviewer and author comments, the AC does not recommend this work for publication in its current form and recommends the authors consider both additional experiments and text description to clarify and solidify their contributions over prior work. [1] Liu, X., & Hsieh, C. J. (2019). Rob-gan: Generator, discriminator, and adversarial attacker. CVPR 2019. """
1844,"""Learning to solve the credit assignment problem""","['biologically plausible deep learning', 'node perturbation', 'REINFORCE', 'synthetic gradients', 'feedback alignment']","""Backpropagation is driving today's artificial neural networks (ANNs). However, despite extensive research, it remains unclear if the brain implements this algorithm. Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative: neurons can randomly introduce change, and use unspecific feedback signals to observe their effect on the cost and thus approximate their gradient. However, the convergence rate of such learning scales poorly with the number of involved neurons. Here we propose a hybrid learning approach. Each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide. We provide proof that our approach converges to the true gradient for certain classes of networks. In both feedforward and convolutional networks, we empirically show that our approach learns to approximate the gradient, and can match the performance of gradient-based learning. Learning feedback weights provides a biologically plausible mechanism of achieving good performance, without the need for precise, pre-specified learning rules.""","""Initial reviews of this paper cited some concerns about a lack of comparison to SOTA and baselines, and also some debate over claims of what is (or is not) ""biologically plausible."" However, after extensive back-and-forth between the authors and reviewers these issues have been addressed and the paper has been improved. There is now consensus among authors that this paper should be accepted. I would like to thank the reviewers and authors for taking the time to thoroughly discuss this paper."""
1845,"""UNIVERSAL MODAL EMBEDDING OF DYNAMICS IN VIDEOS AND ITS APPLICATIONS""","['Non-linear dynamics', 'Convolutional Autoencoder', 'Foreground modeling', 'Video classification', 'Dynamic mode decomposition']","""Extracting underlying dynamics of objects in image sequences is one of the challenging problems in computer vision. On the other hand, dynamic mode decomposition (DMD) has recently attracted attention as a way of obtaining modal representations of nonlinear dynamics from (general multivariate time-series) data without explicit prior knowledge about the dynamics. In this paper, we propose a convolutional autoencoder based DMD (CAE-DMD) that is an extended DMD (EDMD) approach, to extract underlying dynamics in videos. To this end, we develop a modified CAE model by incorporating DMD on the encoder, which gives a more meaningful compressed representation of input image sequences. On the reconstruction side, a decoder is used to minimize the reconstruction error after applying the DMD, which in result gives an accurate reconstruction of inputs. We empirically investigated the performance of CAE-DMD in two applications: background/foreground extraction and video classification, on publicly available datasets.""","""The paper focuses on extracting the underlying dynamics of objects in video frames, for background/foreground extraction and video classification. Generally speaking, the presentation of the paper should be improved. Novelty should be clarified, contrasting the proposed approach with existing literature. All reviewers also agree the experimental section is also too weak in its current form. """
1846,"""Plan2Vec: Unsupervised Representation Learning by Latent Plans""","['Unsupervised Learning', 'Reinforcement Learning', 'Manifold Learning']","""Creating a useful representation of the world takes more than just rote memorization of individual data samples. This is because fundamentally, we use our internal representation to plan, to solve problems, and to navigate the world. For a representation to be amenable to planning, it is critical for it to embody some notion of optimality. A representation learning objective that explicitly considers some form of planning should generate representations which are more computationally valuable than those that memorize samples. In this paper, we introduce \textbf{Plan2Vec}, an unsupervised representation learning objective inspired by value-based reinforcement learning methods. By abstracting away low-level control with a learned local metric, we show that it is possible to learn plannable representations that inform long-range structures, entirely passively from high-dimensional sequential datasets without supervision. A latent space is learned by playing an ``Imagined Planning Game"" on the graph formed by the data points, using a local metric function trained contrastively from context. We show that the global metric on this learned embedding can be used to plan with O(1) complexity by linear interpolation. This exponential speed-up is critical for planning with a learned representation on any problem containing non-trivial global topology. We demonstrate the effectiveness of Plan2Vec on simulated toy tasks from both proprioceptive and image states, as well as two real-world image datasets, showing that Plan2Vec can effectively plan using learned representations. Additional results and videos can be found at \url{pseudo-url}.""","""The paper proposes a representation learning objective that makes it amenable to planning, The initial submission contained clear holes, such as missing related work and only containing very simplistic baselines. The authors have substantially updated the paper based on this feedback, resulting in a clear improvement. Nevertheless, while the new version is a good step in the right direction, there is some additional work needed to fully address the reviewers' complaints. For example, the improved baselines are only evaluated in the most simple domain, while the more complex domains still only contain simplistic baselines that are destined to fail. There are also some unaddressed questions regarding the correctness of Eq. 4. Finally, the substantial rewrites have given the paper a less-than-polished feel. In short, while the work is interesting, it still needs a few iterations before it's ready for publication."""
1847,"""The Gambler's Problem and Beyond""","[""the gambler's problem"", 'reinforcement learning', 'fractal', 'self-similarity', 'Bellman equation']","""We analyze the Gambler's problem, a simple reinforcement learning problem where the gambler has the chance to double or lose their bets until the target is reached. This is an early example introduced in the reinforcement learning textbook by Sutton and Barto (2018), where they mention an interesting pattern of the optimal value function with high-frequency components and repeating non-smooth points. It is however without further investigation. We provide the exact formula for the optimal value function for both the discrete and the continuous cases. Though simple as it might seem, the value function is pathological: fractal, self-similar, derivative taking either zero or infinity, not smooth on any interval, and not written as elementary functions. It is in fact one of the generalized Cantor functions, where it holds a complexity that has been uncharted thus far. Our analyses could lead insights into improving value function approximation, gradient-based algorithms, and Q-learning, in real applications and implementations.""","""This paper studies the optimal value function for the gambler's problem, and presents some interesting characterizations thereof. The paper is well written and should be accepted."""
1848,"""Adversarially learned anomaly detection for time series data""","['anomaly detection', 'gan']","""Anomaly detection in time series data is an important topic in many domains. However, time series are known to be particular hard to analyze. Based on the recent developments in adversarially learned models, we propose a new approach for anomaly detection in time series data. We build upon the idea to use a combination of a reconstruction error and the output of a Critic network. To this end we propose a cycle-consistent GAN architecture for sequential data and a new way of measuring the reconstruction error. We then show in a detailed evaluation how the different parts of our model contribute to the final anomaly score and demonstrate how the method improves the results on several data sets. We also compare our model to other baseline anomaly detection methods to verify its performance.""","""The paper proposes a cycle-consistent GAN architecture with measuring the reconstruction error of time series for anomaly detection. The paper aims to address an important problem, but the current version is not ready for publication. We suggest the authors consider the following aspects for improving the paper: 1. The novelty of the proposed model: motivate the design choices and compare them with state-of-art methods 2. Evaluation: formalize the target anomalies and identify datasets/examples where the proposed model can significantly outperform existing solutions. """
1849,"""IMPACT: Importance Weighted Asynchronous Architectures with Clipped Target Networks""","['Reinforcement Learning', 'Artificial Intelligence', 'Distributed Computing', 'Neural Networks']","""The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time. To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process. However, modern methods for scalable reinforcement learning (RL) often tradeoff between the throughput of samples that an RL agent can learn from (sample throughput) and the quality of learning from each sample (sample efficiency). In these scalable RL architectures, as one increases sample throughput (i.e. increasing parallelization in IMPALA (Espeholt et al., 2018)), sample efficiency drops significantly. To address this, we propose a new distributed reinforcement learning algorithm, IMPACT. IMPACT extends PPO with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling. In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to 30% decrease in training wall-time than that of IMPALA. For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO.""","""The authors propose a novel distributed reinforcement learning algorithm that includes 3 new components: a target network for the policy for stability, a circular buffer, and truncated importance sampling. The authors demonstrate that this improves performance while decreasing wall clock training time. Initially, reviewers were concerned about the fairness of hyper parameter tuning, the baseline implementation of algorithms, and the limited set of experiments done on the Atari games. After the author response, reviewers were satisfied with all 3 of those issues. I may have missed it, but I did not see that code was being released with this paper. I think it would greatly increase the impact of the paper at the authors release source code, so I strongly encourage them to do so. Generally, all the reviewers were in consensus that this is an interesting paper and I recommend acceptance."""
1850,"""A Mention-Pair Model of Annotation with Nonparametric User Communities""","['model of annotation', 'coreference resolution', 'anaphoric annotation', 'mention pair model', 'bayesian nonparametrics']","""The availability of large datasets is essential for progress in coreference and other areas of NLP. Crowdsourcing has proven a viable alternative to expert annotation, offering similar quality for better scalability. However, crowdsourcing require adjudication, and most models of annotation focus on classification tasks where the set of classes is predetermined. This restriction does not apply to anaphoric annotation, where coders relate markables to coreference chains whose number cannot be predefined. This gap was recently covered with the introduction of a mention pair model of anaphoric annotation (MPA). In this work we extend MPA to alleviate the effects of sparsity inherent in some crowdsourcing environments. Specifically, we use a nonparametric partially pooled structure (based on a stick breaking process), fitting jointly with the ability of the annotators hierarchical community profiles. The individual estimates can thus be improved using information about the community when the data is scarce. We show, using a recently published large-scale crowdsourced anaphora dataset, that the proposed model performs better than its unpooled counterpart in conditions of sparsity, and on par when enough observations are available. The model is thus more resilient to different crowdsourcing setups, and, further provides insights into the community of workers. The model is also flexible enough to be used in standard annotation tasks for classification where it registers on par performance with the state of the art.""","""Thanks to the reviewers and the authors for an interesting discussion. The reviewers are mixed, learning toward positive, but a few shortcomings were left unaddressed: (i) Turning the task into a mention-pair classification problem ignores the mention detection step, and synergies from joint modeling are lost. (ii) Lee et al. (2018) has been surpassed by some margin by BERT and spanBERT, models ignored in this paper. (iii) Several approaches to aggregating structured annotations have already been introduced, e.g., for sequence labelling tasks. [0] Overall, the limited novelty, the missing baselines, and the missing related work lead me to not favor acceptance at this point. [0]pseudo-url"""
1851,"""RATE-DISTORTION OPTIMIZATION GUIDED AUTOENCODER FOR GENERATIVE APPROACH""","['Autoencoder', 'Rate-distortion optimization', 'Generative model', 'Unsupervised learning', 'Jacobian']","""In the generative model approach of machine learning, it is essential to acquire an accurate probabilistic model and compress the dimension of data for easy treatment. However, in the conventional deep-autoencoder based generative model such as VAE, the probability of the real space cannot be obtained correctly from that of in the latent space, because the scaling between both spaces is not controlled. This has also been an obstacle to quantifying the impact of the variation of latent variables on data. In this paper, we propose a method to learn parametric probability distribution and autoencoder simultaneously based on Rate-Distortion Optimization to support scaling control. It is proved theoretically and experimentally that (i) the probability distribution of the latent space obtained by this model is proportional to the probability distribution of the real space because Jacobian between two spaces is constant: (ii) our model behaves as non-linear PCA, which enables to evaluate the influence of latent variables on data. Furthermore, to verify the usefulness on the practical application, we evaluate its performance in unsupervised anomaly detection and outperform current state-of-the-art methods.""","""Agreement by the reviewers: although the idea is good, the paper is very hard to read and not accurately enough formulated to merit publication. This can be repaired, and the authors should try again after a thorough revision and rewrite."""
1852,"""Selective Brain Damage: Measuring the Disparate Impact of Model Pruning""",['machine learning'],"""Neural network pruning techniques have demonstrated it is possible to remove the majority of weights in a network with surprisingly little degradation to top-1 test set accuracy. However, this measure of performance conceals significant differences in how different classes and images are impacted by pruning. We find that certain individual data points, which we term pruning identified exemplars (PIEs), and classes are systematically more impacted by the introduction of sparsity. Removing PIE images from the test-set greatly improves top-1 accuracy for both sparse and non-sparse models. These hard-to-generalize-to images tend to be of lower image quality, mislabelled, entail abstract representations, require fine-grained classification or depict atypical class examples.""","""This work investigates neural network pruning through the lens of its influence over specific exemplars (which are found to often be lower quality or mislabelled images) and how removing them greatly helps metrics. The insight from the paper is interesting, as recognized by reviewers. However, experiments do not suggest that the findings shown in the paper would generalize to more pruning methods. Nor do the authors give directions for tackling the ""hard exemplar"" problem. Authors' response did provide justifications and clarifications, however the core of the concern remains. Therefore, we recommend rejection."""
1853,"""UNITER: Learning UNiversal Image-TExt Representations""","['Self-supervised Representation Learning', 'Large-scale Pre-training', 'Vision and Language']","""Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are jointly processed for visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design three pre-training tasks: Masked Language Modeling (MLM), Image-Text Matching (ITM), and Masked Region Modeling (MRM, with three variants). Different from concurrent work on multimodal pre-training that apply joint random masking to both modalities, we use Conditioned Masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). Comprehensive analysis shows that conditioned masking yields better performance than unconditioned masking. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks for UNITER. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks over nine datasets, including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR2. ""","""This submission proposes an approach to pre-train general-purpose image and text representations that can be effective on target tasks requiring embeddings for both modes. The authors propose several pre-training tasks beyond masked language modelling that are more suitable for the cross-modal context being addressed, and also investigate which dataset/pretraining task combinations are effective for given target tasks. All reviewers agree that the empirical results that were achieved were impressive. Shared points of concern were: - the novelty of the proposed pre-training schemes. - the lack of insight into the results that were obtained. These concerns were insufficiently addressed after the discussion period, particularly the limited novelty. Given the remaining concerns and the number of strong submissions to ICLR, this submission, while promising, does not meet the bar for acceptance. """
1854,"""Discriminator Based Corpus Generation for General Code Synthesis""","['Code Synthesis', 'Neural Code Synthesis']","""Current work on neural code synthesis consists of increasingly sophisticated architectures being trained on highly simplified domain-specific languages, using uniform sampling across program space of those languages for training. By comparison, program space for a C-like language is vast, and extremely sparsely populated in terms of `useful' functionalities; this requires a far more intelligent approach to corpus generation for effective training. We use a genetic programming approach using an iteratively retrained discriminator to produce a population suitable as labelled training data for a neural code synthesis architecture. We demonstrate that use of a discriminator-based training corpus generator, trained using only unlabelled problem specifications in classic Programming-by-Example format, greatly improves network performance compared to current uniform sampling techniques.""","""This paper proposes a method to automatically generate corpora for training program synthesis systems. The reviewers did seem to appreciate the core idea of the paper, but pointed out a number of problems with experimental design that preclude the publication of the paper at this time. The reviewers gave a number of good comments, so I hope that the authors can improve the paper for publication at a different venue in the future."""
1855,"""Cross-Dimensional Self-Attention for Multivariate, Geo-tagged Time Series Imputation""","['self-attention', 'cross-dimensional', 'multivariate time series', 'imputation']","""Many real-world applications involve multivariate, geo-tagged time series data: at each location, multiple sensors record corresponding measurements. For example, air quality monitoring system records PM2.5, CO, etc. The resulting time-series data often has missing values due to device outages or communication errors. In order to impute the missing values, state-of-the-art methods are built on Recurrent Neural Networks (RNN), which process each time stamp sequentially, prohibiting the direct modeling of the relationship between distant time stamps. Recently, the self-attention mechanism has been proposed for sequence modeling tasks such as machine translation, significantly outperforming RNN because the relationship between each two time stamps can be modeled explicitly. In this paper, we are the first to adapt the self-attention mechanism for multivariate, geo-tagged time series data. In order to jointly capture the self-attention across different dimensions (i.e. time, location and sensor measurements) while keep the size of attention maps reasonable, we propose a novel approach called Cross-Dimensional Self-Attention (CDSA) to process each dimension sequentially, yet in an order-independent manner. On three real-world datasets, including one our newly collected NYC-traffic dataset, extensive experiments demonstrate the superiority of our approach compared to state-of-the-art methods for both imputation and forecasting tasks. ""","""The paper proposes a solution based on self-attention RNN to addressing the missing value in spatiotemporal data. I myself read through the paper, followed by a discussion with the reviewers. We agree that the model is reasonable, and the results are promising. However, there is still some room for improvement: 1. The self-attention mechanism is not new. The specific way proposed in the paper is an interesting tweak of existing models, but not brand new per se. Most importantly, it is unclear if the proposed way is the optimal one and where the performance improvement comes from. As the reviewer suggested, more thorough empirical analysis should be performed for deeper insights of the model. 2. The datasets were adopted from existing work, but most of them do not have such complex models as the one proposed in the paper. Therefore, the suggestion for bigger datasets is valid. Given the considerations above, we agree that while the paper has a lot of good materials, the current version is not ready yet. Addressing the issues above could lead to a good publication in the future. """
1856,"""Alleviating Privacy Attacks via Causal Learning""","['Causal learning', 'Membership Inference Attacks', 'Differential Privacy']","""Machine learning models, especially deep neural networks have been shown to reveal membership information of inputs in the training data. Such membership inference attacks are a serious privacy concern, for example, patients providing medical records to build a model that detects HIV would not want their identity to be leaked. Further, we show that the attack accuracy amplifies when the model is used to predict samples that come from a different distribution than the training set, which is often the case in real world applications. Therefore, we propose the use of causal learning approaches where a model learns the causal relationship between the input features and the outcome. Causal models are known to be invariant to the training distribution and hence generalize well to shifts between samples from the same distribution and across different distributions. First, we prove that models learned using causal structure provide stronger differential privacy guarantees than associational models under reasonable assumptions. Next, we show that causal models trained on sufficiently large samples are robust to membership inference attacks across different distributions of datasets and those trained on smaller sample sizes always have lower attack accuracy than corresponding associational models. Finally, we confirm our theoretical claims with experimental evaluation on 4 datasets with moderately complex Bayesian networks. We observe that neural network-based associational models exhibit upto 80% attack accuracy under different test distributions and sample sizes whereas causal models exhibit attack accuracy close to a random guess. Our results confirm the value of the generalizability of causal models in reducing susceptibility to privacy attacks.""","""Maintaining the privacy of membership information contained within the data used to train machine learning models is paramount across many application domains. Moreover, this risk can be more acute when the model is used to make predictions using out-of-sample data. This paper applies a causal learning framework to mitigate this problem, motivated by the fact that causal models can be invariant to the training distribution and therefore potentially more resistant to certain privacy attacks. Both theoretical and empirical results are provided in support of this application of causal modeling. Overall, during the rebuttal period there was no strong support for this paper, and one reviewer in particular mentioned lingering unresolved yet non-trivial concerns. For example, to avoid counter-examples raised the reviewer, a deterministic labeling function must be introduced, which trivializes the distribution p(Y|X) and leads to a problematic training and testing scenario from a practical standpoint. Similarly the theoretical treatment involving Markov blankets was deemed confusing and/or misleading even after careful inspection of all author response details. At the very least, this suggests that another round of review is required to clarify these issues before publication, and hence the decision to reject at this time."""
1857,"""Searching to Exploit Memorization Effect in Learning from Corrupted Labels""","['Noisy Label', 'Deep Learning', 'Automated Machine Learning']","""Sample-selection approaches, which attempt to pick up clean instances from the training data set, have become one promising direction to robust learning from corrupted labels. These methods all build on the memorization effect, which means deep networks learn easy patterns first and then gradually over-fit the training data set. In this paper, we show how to properly select instances so that the training process can benefit the most from the memorization effect is a hard problem. Specifically, memorization can heavily depend on many factors, e.g., data set and network architecture. Nonetheless, there still exists general patterns of how memorization can occur. These facts motivate us to exploit memorization by automated machine learning (AutoML) techniques. First, we designed an expressive but compact search space based on observed general patterns. Then, we propose to use the natural gradient-based search algorithm to efficiently search through space. Finally, extensive experiments on both synthetic data sets and benchmark data sets demonstrate that the proposed method can not only be much efficient than existing AutoML algorithms but can also achieve much better performance than the state-of-the-art approaches for learning from corrupted labels.""","""This paper develops a method for sample selection that exploits the memorization effect. While the paper has been substantially improved from its original form, the paper still does not meet the quality bar of ICLR in terms of presentation of the results and experimental validation. The paper will benefit from a revision and resubmission to another venue."""
1858,"""Hallucinative Topological Memory for Zero-Shot Visual Planning""","['Visual Planning', 'Model-Based RL', 'Representation Learning']","""In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline, e.g., images obtained from self-supervised robot interaction. VP algorithms essentially combine data-driven perception and planning, and are important for robotic manipulation and navigation domains, among others. A recent and promising approach to VP is the semi-parametric topological memory (SPTM) method, where image samples are treated as nodes in a graph, and the connectivity in the graph is learned using deep image classification. Thus, the learned graph represents the topological connectivity of the data, and planning can be performed using conventional graph search methods. However, training SPTM necessitates a suitable loss function for the connectivity classifier, which requires non-trivial manual tuning. More importantly, SPTM is constricted in its ability to generalize to changes in the domain, as its graph is constructed from direct observations and thus requires collecting new samples for planning. In this paper, we propose Hallucinative Topological Memory (HTM), which overcomes these shortcomings. In HTM, instead of training a discriminative classifier we train an energy function using contrastive predictive coding. In addition, we learn a conditional VAE model that generates samples given a context image of the domain, and use these hallucinated samples for building the connectivity graph, allowing for zero-shot generalization to domain changes. In simulated domains, HTM outperforms conventional SPTM and visual foresight methods in terms of both plan quality and success in long-horizon planning. ""","""The submission presents an approach to visual planning. The work builds on semi-parametric topological memory (SPTM) and introduces ideas that facilitate zero-shot generalization to new environments. The reviews are split. While the ideas are generally perceived as interesting, there are significant concerns about presentation and experimental evaluation. In particular, the work is evaluated in extremely simple environments and scenarios that do not match the experimental settings of other comparable works in this area. The paper was discussed and all reviewers expressed their views following the authors' responses and revision. In particular, R1 posted a detailed justification of their recommendation to reject the paper. The AC agrees that the paper is not ready for publication in a first-tier venue. The AC recommends that the authors seriously consider R1's recommendations."""
1859,"""Improving Visual Relation Detection using Depth Maps""","['Visual Relation Detection', 'Scene Graph Generation']","""State of the art visual relation detection methods mostly rely on object information extracted from RGB images such as predicted class probabilities, 2D bounding boxes and feature maps. In this paper, we argue that the 3D positions of objects in space can provide additional valuable information about object relations. This information helps not only to detect spatial relations, such as \textit{standing behind}, but also non-spatial relations, such as \textit{holding}. Since 3D information of a scene is not easily accessible, we propose incorporating a pre-trained RGB-to-Depth model within visual relation detection frameworks. We discuss different feature extraction strategies from depth maps and show their critical role in relation detection. Our experiments confirm that the performance of state-of-the-art visual relation detection approaches can significantly be improved by utilizing depth map information.""","""The paper proposes to improve visual relation prediction by using depth maps. Since existing RGB images do not contain depth informations, the authors use a monocular depth estimation method to predict depth maps. The authors show that using depths maps, they are able to improve prediction of relations between ground truth object bounding boxes and labels. The paper got relatively low scores (with 3 initial weak rejects). After the revision and suggested improvements, one of the reviewers updated their score so the paper now has 2 weak rejects and 1 weak accept. The paper had the following weaknesses: 1. The paper has limited technical novelty as it combines off the shelf components. The components also used different backbones (ResNet at some places, VGGNet at others) that were directly from prior work. Was there any attempt to have an unified architecture? As the main novelty of the work is not in the model aspect, the paper needs to have stronger experiments and analysis. 2. More analysis on the quality of the depth estimation is needed. Ideally, the work should provide some insight into whether some of the errors is due to having bad depth estimation? The depth estimation method used is from 2016, there are newer depth estimation methods now. Would having better depth estimation give improved results? Experiments that illustrates that method works well with predicted bounding boxes instead of ground truth bounding boxes will also strengthen the paper. 3. There was the question of whether the related Yang et al. 2018 workshop paper should be included as basis for comparison. In the AC's opinion, Yang et al. 2018 is not concurrent work and should be treated as prior work. However, it is not clear whether it is feasible to compare against that work. The authors should attempt to do so and if infeasible, clearly articulate why that is the case. 4. As pointed out by R3, once there is a depth map available, it is also possible to compare against 3D methods (such as those that operate on point clouds) Overall the paper had a nice insight by proposing the simple but effective idea of using depth information to help with visual relation prediction. Still the work is somewhat borderline in quality. In the AC's opinion, the main contribution and insight of the paper is of limited interest to the ICLR community, and it would be more appreciated in a computer vision conference. 
The authors are encouraged to improve the paper with stronger experiments and analysis, incorporate various suggestions from the reviewers, and resubmit to a vision conference. """
1860,"""Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection""","['Feature Interaction', 'Interpretability', 'Black Box', 'AutoML']","""Recommendation is a prevalent application of machine learning that affects many users; therefore, it is important for recommender models to be accurate and interpretable. In this work, we propose a method to both interpret and augment the predictions of black-box recommender systems. In particular, we propose to interpret feature interactions from a source recommender model and explicitly encode these interactions in a target recommender model, where both source and target models are black-boxes. By not assuming the structure of the recommender system, our approach can be used in general settings. In our experiments, we focus on a prominent use of machine learning recommendation: ad-click prediction. We found that our interaction interpretations are both informative and predictive, e.g., significantly outperforming existing recommender models. What's more, the same approach to interpret interactions can provide new insights into domains even beyond recommendation, such as text and image classification.""","""The paper extracts feature interactions in recommender systems and studies the effect of these interactions on the recommendations. While the focus is on recommender systems the authors claim that the ideas can be generalised to other domains also. All reviewers found the empirical results and analysis thereof to be very interesting and useful. This paper saw a healthy discussion between the authors and reviewers and all reviewers agreed that this paper makes a useful contribution. I recommend that the authors address all the concerns of the reviewers in the final version of the paper. """
1861,"""Sparse Skill Coding: Learning Behavioral Hierarchies with Sparse Codes""","['hierarchical reinforcement learning', 'unsupervised learning', 'compression']","""Many approaches to hierarchical reinforcement learning aim to identify sub-goal structure in tasks. We consider an alternative perspective based on identifying behavioral `motifs'---repeated action sequences that can be compressed to yield a compact code of action trajectories. We present a method for iteratively compressing action trajectories to learn nested behavioral hierarchies of arbitrary depth, with actions of arbitrary length. The learned temporally extended actions provide new action primitives that can participate in deeper hierarchies as the agent learns. We demonstrate the relevance of this approach for tasks with non-trivial hierarchical structure and show that the approach can be used to accelerate learning in recursively more complex tasks through transfer.""","""The paper proposes an interesting idea of identifying repeated action sequences, or behavioral motifs, in the context of hierarchical reinforcement learning, using sparsity/compression. While this is a fresh and useful idea, it appears that the paper requires more work, both in terms of presentation/clarity and in terms of stronger empirical results. """
1862,"""Likelihood Contribution based Multi-scale Architecture for Generative Flows""","['Generative Flow', 'Normalizing Flow', 'Multi-scale Architecture', 'RealNVP', 'Dimension Factorization']","""Deep generative modeling using flows has gained popularity owing to the tractable exact log-likelihood estimation with efficient training and synthesis process. However, flow models suffer from the challenge of having high dimensional latent space, same in dimension as the input space. An effective solution to the above challenge as proposed by Dinh et al. (2016) is a multi-scale architecture, which is based on iterative early factorization of a part of the total dimensions at regular intervals. Prior works on generative flows involving a multi-scale architecture perform the dimension factorization based on a static masking. We propose a novel multi-scale architecture that performs data dependent factorization to decide which dimensions should pass through more flow layers. To facilitate the same, we introduce a heuristic based on the contribution of each dimension to the total log-likelihood which encodes the importance of the dimensions. Our proposed heuristic is readily obtained as part of the flow training process, enabling versatile implementation of our likelihood contribution based multi-scale architecture for generic flow models. We present such an implementation for the original flow introduced in Dinh et al. (2016), and demonstrate improvements in log-likelihood score and sampling quality on standard image benchmarks. We also conduct ablation studies to compare proposed method with other options for dimension factorization.""","""The authors propose a multi-scale architecture for generative flows that can learn which dimensions to pass through more flow layers based on a heuristic that judges the contribution to the likelihood. The authors compare the technique to some other flow based approaches. The reviewers asked for more experiments, which the authors delivered. However, the reviewers noted that a comparison to the SOTA for CIFAR in this setting was missing. Several reviewers raised their scores, but none were willing to argue for acceptance. """
1863,"""Clustered Reinforcement Learning""",[],"""Exploration strategy design is one of the challenging problems in reinforcement learning~(RL), especially when the environment contains a large state space or sparse rewards. During exploration, the agent tries to discover novel areas or high reward~(quality) areas. In most existing methods, the novelty and quality in the neighboring area of the current state are not well utilized to guide the exploration of the agent. To tackle this problem, we propose a novel RL framework, called \underline{c}lustered \underline{r}einforcement \underline{l}earning~(CRL), for efficient exploration in RL. CRL adopts clustering to divide the collected states into several clusters, based on which a bonus reward reflecting both novelty and quality in the neighboring area~(cluster) of the current state is given to the agent. Experiments on several continuous control tasks and several Atari-2600 games show that CRL can outperform other state-of-the-art methods to achieve the best performance in most cases.""","""The paper discusses a simple but apparently effective clustering technique to improve exploration. There are no theoretical results, hence the reader relies fully on the experiments to evaluate the method. Unfortunately, an in-dept analysis of the results is missing making it hard to properly evaluate the strength and weaknesses. Furthermore, the authors have not provided any rebuttal to the reviewers' concerns."""
1864,"""Infinite-Horizon Differentiable Model Predictive Control""","['Model Predictive Control', 'Riccati Equation', 'Imitation Learning', 'Safe Learning']","""This paper proposes a differentiable linear quadratic Model Predictive Control (MPC) framework for safe imitation learning. The infinite-horizon cost is enforced using a terminal cost function obtained from the discrete-time algebraic Riccati equation (DARE), so that the learned controller can be proven to be stabilizing in closed-loop. A central contribution is the derivation of the analytical derivative of the solution of the DARE, thereby allowing the use of differentiation-based learning methods. A further contribution is the structure of the MPC optimization problem: an augmented Lagrangian method ensures that the MPC optimization is feasible throughout training whilst enforcing hard constraints on state and input, and a pre-stabilizing controller ensures that the MPC solution and derivatives are accurate at each iteration. The learning capabilities of the framework are demonstrated in a set of numerical studies. ""","""This paper develops a linear quadratic model predictive control approach for safe imitation learning. The main contribution is an analytic solution for the derivative of the discrete-time algebraic Riccati equation (DARE). This allows an infinite horizon optimality objective to be used with differentiation-based learning methods. An additional contribution is the problem reformulation with a pre-stabilizing controller and the support of state constraints throughout the learning process. The method is tested on a damped-spring system and a vehicle platooning problem. The reviewers and the author response covered several topics. The reviewers appreciated the research direction and theoretical contributions of this work. The reviewers main concern was the experimental evaluation, which was originally limited to a damped spring system. The authors added another experiment for a substantially more complex continuous control domain. In response to the reviewers, the authors also described how this work relates to non-linear control problems. The authors also clarified the ability of the proposed method to handle state-based constraints that are not handled by earlier methods. The reviewers were largely satisfied with these changes. This paper should be accepted as the reviewers are satisfied that the paper has useful contributions."""
1865,"""A Generalized Training Approach for Multiagent Learning""","['multiagent learning', 'game theory', 'training', 'games']","""This paper investigates a population-based training regime based on game-theoretic principles called Policy-Spaced Response Oracles (PSRO). PSRO is general in the sense that it (1) encompasses well-known algorithms such as fictitious play and double oracle as special cases, and (2) in principle applies to general-sum, many-player games. Despite this, prior studies of PSRO have been focused on two-player zero-sum games, a regime where in Nash equilibria are tractably computable. In moving from two-player zero-sum games to more general settings, computation of Nash equilibria quickly becomes infeasible. Here, we extend the theoretical underpinnings of PSRO by considering an alternative solution concept, -Rank, which is unique (thus faces no equilibrium selection issues, unlike Nash) and applies readily to general-sum, many-player settings. We establish convergence guarantees in several games classes, and identify links between Nash equilibria and -Rank. We demonstrate the competitive performance of -Rank-based PSRO against an exact Nash solver-based PSRO in 2-player Kuhn and Leduc Poker. We then go beyond the reach of prior PSRO applications by considering 3- to 5-player poker games, yielding instances where -Rank achieves faster convergence than approximate Nash solvers, thus establishing it as a favorable general games solver. We also carry out an initial empirical validation in MuJoCo soccer, illustrating the feasibility of the proposed approach in another complex domain.""","""This paper analyzes and extends learning methods based on Policy-Spaced Response Oracles (PSRO) through the application of alpha-rank. In doing so, the paper explores connections with Nash equilibria, establishes convergence guarantees in multiple settings, and presents promising empirical results on (among other things) 3-to-5 player poker games. Although this paper originally received mixed scores, after the rebuttal period all reviewers converged to a consensus. A revised version also includes new experiments from the MuJoCo soccer domain, and new poker results as well. Overall, this paper provides a nice balance of theoretical support and practical relevance that should be of high impact to the RL community. """
1866,"""Programmable Neural Network Trojan for Pre-trained Feature Extractor""","['Neural Network', 'Trojan', 'Security']","""Neural network (NN) trojaning attack is an emerging and important attack that can broadly damage the system deployed with NN models. Different from adversarial attack, it hides malicious functionality in the weight parameters of NN models. Existing studies have explored NN trojaning attacks in some small datasets for specific domains, with limited numbers of fixed target classes. In this paper, we propose a more powerful trojaning attack method for large models, which outperforms existing studies in capability, generality, and stealthiness. First, the attack is programmable that the malicious misclassification target is not fixed and can be generated on demand even after the victim's deployment. Second, our trojaning attack is not limited in a small domain; one trojaned model on a large-scale dataset can affect applications of different domains that reuses its general features. Third, our trojan shows no biased behavior for different target classes, which makes it more difficult to defend.""","""This paper proposes a general framework for constructing Trojan/Backdoor attacks on deep neural networks. The authors argue that the proposed method can support dynamic and out-of-scope target classes, which are particularly applicable to backdoor attacks in the transfer learning setting. This paper has been very carefully discussed. While the idea is interesting and could be of interest to the broader community, all reviewers agree that it lacks of experimental comparison with existing methods for backdoor attacks on benchmark problems. The paper needs to be significantly revised before publication. I encourage the authors to improve this paper and resubmit to future conference."""
1867,"""SPECTRA: Sparse Entity-centric Transitions""","['representation learning', 'slot-structured representations', 'sparse slot-structured transitions', 'entity-centric representation', 'unsupervised learning', 'object-centric']","""Learning an agent that interacts with objects is ubiquituous in many RL tasks. In most of them the agent's actions have sparse effects : only a small subset of objects in the visual scene will be affected by the action taken. We introduce SPECTRA, a model for learning slot-structured transitions from raw visual observations that embodies this sparsity assumption. Our model is composed of a perception module that decomposes the visual scene into a set of latent objects representations (i.e. slot-structured) and a transition module that predicts the next latent set slot-wise and in a sparse way. We show that learning a perception module jointly with a sparse slot-structured transition model not only biases the model towards more entity-centric perceptual groupings but also enables intrinsic exploration strategy that aims at maximizing the number of objects changed in the agents trajectory.""","""This paper introduces a model that learns a slot-based representation and its transition model to predict the representation changes over time. While all the reviewers agree that this paper is focusing on an important problem, they expressed multiple concerns regarding the novelty of the approach as well as lacking experiments. It certainly is missing multiple important relevant works, thereby overclaiming at a few places. The authors provided a short general response to compare their approach with some of the previous works and conduct stronger experiments for a future submission. We believe this paper is not at the stage to be published at this point."""
1868,"""Walking on the Edge: Fast, Low-Distortion Adversarial Examples""","['Deep learning', 'adversarial attack']","""Adversarial examples of deep neural networks are receiving ever increasing attention because they help in understanding and reducing the sensitivity to their input. This is natural given the increasing applications of deep neural networks in our everyday lives. When white-box attacks are almost always successful, it is typically only the distortion of the perturbations that matters in their evaluation. In this work, we argue that speed is important as well, especially when considering that fast attacks are required by adversarial training. Given more time, iterative methods can always find better solutions. We investigate this speed-distortion trade-off in some depth and introduce a new attack called boundary projection BP that improves upon existing methods by a large margin. Our key idea is that the classification boundary is a manifold in the image space: we therefore quickly reach the boundary and then optimize distortion on this manifold.""","""In this paper the authors highlight the role of time in adversarial training and study various speed-distortion trade-offs. They introduce an attack called boundary projection BP which relies on utilizing the classification boundary. The reviewers agree that searching on the class boundary manifold, is interesting and promising but raise important concerns about evaluations on state of the art data sets. Some of the reviewers also express concern about the quality of presentation and lack of detail. While the authors have addressed some of these issues in the response, the reviewers continue to have some concerns. Overall I agree with the assessment of the reviewers and do not recommend acceptance at this time."""
1869,"""Learning Good Policies By Learning Good Perceptual Models""","['visual representation learning', 'reinforcement learning', 'curiosity']",""" Reinforcement learning (RL) has led to increasingly complex looking behavior in recent years. However, such complexity can be misleading and hides over-fitting. We find that visual representations may be a useful metric of complexity, and both correlates well objective optimization and causally effects reward optimization. We then propose curious representation learning (CRL) which allows us to use better visual representation learning algorithms to correspondingly increase visual representation in policy through an intrinsic objective on both simulated environments and transfer to real images. Finally, we show better visual representations induced by CRL allows us to obtain better performance on Atari without any reward than other curiosity objectives.""","""This paper investigates using ""curiosity"" to improve representation learning. This paper is not ready for publication. The main issues was the reviewers found the paper did not support the claim contributions in terms of (1) evaluating the new representations and improvement due to the representation, and (2) the novelty of the method compared to the long literature in this area. In general the reviewers found the empirical evidence unconvincing, and the too many missing details. The results in this paper have many issues: claims of performance based on three runs; undefined error measures; bolding entries in tables which appear not significantly better without explanation; unclear/informal meta-parameter tuning. Finally, there are some terminology issues in this paper. I suggest an excellent paper on the topic: pseudo-url """
1870,"""Identifying through Flows for Recovering Latent Representations""","['Representation learning', 'identifiable generative models', 'nonlinear-ICA']","""Identifiability, or recovery of the true latent representations from which the observed data originates, is de facto a fundamental goal of representation learning. Yet, most deep generative models do not address the question of identifiability, and thus fail to deliver on the promise of the recovery of the true latent sources that generate the observations. Recent work proposed identifiable generative modelling using variational autoencoders (iVAE) with a theory of identifiability. Due to the intractablity of KL divergence between variational approximate posterior and the true posterior, however, iVAE has to maximize the evidence lower bound (ELBO) of the marginal likelihood, leading to suboptimal solutions in both theory and practice. In contrast, we propose an identifiable framework for estimating latent representations using a flow-based model (iFlow). Our approach directly maximizes the marginal likelihood, allowing for theoretical guarantees on identifiability, thereby dispensing with variational approximations. We derive its optimization objective in analytical form, making it possible to train iFlow in an end-to-end manner. Simulations on synthetic data validate the correctness and effectiveness of our proposed method and demonstrate its practical advantages over other existing methods.""","""Main content: Blind review #1 summarizes it well: This paper is about learning an identifiable generative model, iFlow, that builds upon a recent result on nonlinear ICA. The key idea is providing side information to identify the latent representation, i.e., essentially a prior conditioned on extra information such as labels and restricting the mapping to flows for being able to compute the likelihood. As the loglikelihood of a flow model is readily available, a direct approach can be used for learning that optimizes both the prior and the observation model. -- Discussion: Reviewer questions were mostly about clarification, which the authors addressed during the rebuttal period. -- Recommendation and justification: All reviewers agree the paper is a weak accept based on degree of depth, novelty, and impact."""
1871,"""Disentangling Improves VAEs' Robustness to Adversarial Attacks""",[],"""This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as -TCVAE (Chen et al., 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks (Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al.(2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.""","""This work a ""Seatbelt-VAE"" algorithm to improve the robustness of VAE against adversarial attacks. The proposed method is promising but the paper appears to be hastily written and leave many places to improve and clarify. This paper can be turned into an excellent paper with another round of throughout modification. """
1872,"""BackPACK: Packing more into Backprop""",[],"""Automatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient. Yet, other quantities such as the variance of the mini-batch gradients or many approximations to the Hessian can, in theory, be computed efficiently, and at the same time as the gradient. While these quantities are of great interest to researchers and practitioners, current deep learning software does not support their automatic calculation. Manually implementing them is burdensome, inefficient if done naively, and the resulting code is rarely shared. This hampers progress in deep learning, and unnecessarily narrows research to focus on gradient descent and its variants; it also complicates replication studies and comparisons between newly developed methods that require those quantities, to the point of impossibility. To address this problem, we introduce BackPACK, an efficient framework built on top of PyTorch, that extends the backpropagation algorithm to extract additional information from first-and second-order derivatives. Its capabilities are illustrated by benchmark reports for computing additional quantities on deep neural networks, and an example application by testing several recent curvature approximations for optimization.""","""The paper efficiently computes quantities, such as variance estimates of the gradient or various Hessian approximations, jointly with the gradient, and the paper also provides a software package for this. All reviewers agree that this is a very good paper and should be accepted."""
1873,"""Learning vector representation of local content and matrix representation of local motion, with implications for V1""","['Representation learning', 'V1', 'neuroscience']","""This paper proposes a representational model for image pair such as consecutive video frames that are related by local pixel displacements, in the hope that the model may shed light on motion perception in primary visual cortex (V1). The model couples the following two components. (1) The vector representations of local contents of images. (2) The matrix representations of local pixel displacements caused by the relative motions between the agent and the objects in the 3D scene. When the image frame undergoes changes due to local pixel displacements, the vectors are multiplied by the matrices that represent the local displacements. Our experiments show that our model can learn to infer local motions. Moreover, the model can learn Gabor-like filter pairs of quadrature phases.""","""The paper received mixed reviews. On one hand, there is interesting novelty in relation to biological vision systems. On the other hand, there are some serious experimental issues with the machine learning model. While reviewers initially raised concerns about the motivation of the work, the rebuttal addressed those concerns. However, concerns about experiments remained. """
1874,"""Unsupervised Learning of Node Embeddings by Detecting Communities""","['Unsupervised Learning', 'Graph Embedding', 'Community Detection', 'Mincut', 'Normalized cut', 'Deep Learning']","""We present Deep MinCut (DMC), an unsupervised approach to learn node embeddings for graph-structured data. It derives node representations based on their membership in communities. As such, the embeddings directly provide interesting insights into the graph structure, so that the separate node clustering step of existing methods is no longer needed. DMC learns both, node embeddings and communities, simultaneously by minimizing the mincut loss, which captures the number of connections between communities. Striving for high scalability, we also propose a training process for DMC based on minibatches. We provide empirical evidence that the communities learned by DMC are meaningful and that the node embeddings are competitive in different node classification benchmarks.""","""The authors present an approach to learn node embeddings by minimising the mincut loss which ensures that the network simultaneously learns node representations and communities. To ensure scalability, the authors also propose an iterative process using mini-batches. I think this is a good paper with interesting results. However, I would suggest that the authors try to make it more accessible to a larger audience (2 reviewers have indicated that they had difficulty in following the paper). For example, while Theorem 1 and Theorem 2 are interesting they could have been completely pushed to the Appendix and it would have sufficed to say that your work/results are grounded in well-proven theorems as mentioned in 1 and 2. I agree that the authors have done a good job of responding to reviewers' queries and addressed the main concerns. However, since the reviewers have unanimously given a low rating to this paper, I do not feel confident about overriding their rating and accepting this paper. Hence, at this point I will have to recommend that this paper cannot be accepted. This paper has good potential and the authors should submit it to another suitable venue soon."""
1875,"""RGBD-GAN: Unsupervised 3D Representation Learning From Natural Image Datasets via RGBD Image Synthesis""","['image generation', '3D vision', 'unsupervised representation learning']","""Understanding three-dimensional (3D) geometries from two-dimensional (2D) images without any labeled information is promising for understanding the real world without incurring annotation cost. We herein propose a novel generative model, RGBD-GAN, which achieves unsupervised 3D representation learning from 2D images. The proposed method enables camera parameter--conditional image generation and depth image generation without any 3D annotations, such as camera poses or depth. We use an explicit 3D consistency loss for two RGBD images generated from different camera parameters, in addition to the ordinal GAN objective. The loss is simple yet effective for any type of image generator such as DCGAN and StyleGAN to be conditioned on camera parameters. Through experiments, we demonstrated that the proposed method could learn 3D representations from 2D images with various generator architectures.""","""The paper has initially received mixed reviews, with two reviewers being weakly positive and one being negative. Following the author's revision, however, the negative reviewer was satisfied with the changes, and one of the positive reviewers increased the score as well. In general, the reviewers agree that the paper contains a simple and well-executed idea for recovering geometry in unsupervised way with generative modeling from a collection of 2D images, even though the results are a bit underwhelming. The authors are encouraged to expand the related work section in the revision and to follow our suggestion of the reviewers."""
1876,"""AdaGAN: Adaptive GAN for Many-to-Many Non-Parallel Voice Conversion""","['Voice Conversion', 'Deep Learning', 'Non parallel', 'GAN', 'AdaGAN', 'AdaIN']","""Voice Conversion (VC) is a task of converting perceived speaker identity from a source speaker to a particular target speaker. Earlier approaches in the literature primarily find a mapping between the given source-target speaker-pairs. Developing mapping techniques for many-to-many VC using non-parallel data, including zero-shot learning remains less explored areas in VC. Most of the many-to-many VC architectures require training data from all the target speakers for whom we want to convert the voices. In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in the training (i.e., case of zero-shot learning). In particular, propose Adaptive Generative Adversarial Network (AdaGAN), new architectural training procedure help in learning normalized speaker-independent latent representation, which will be used to generate speech with different speaking styles in the context of VC. We compare our results with the state-of-the-art StarGAN-VC architecture. In particular, the AdaGAN achieves 31.73%, and 10.37% relative improvement compared to the StarGAN in MOS tests for speech quality and speaker similarity, respectively. The key strength of the proposed architectures is that it yields these results with less computational complexity. AdaGAN is 88.6% less complex than StarGAN-VC in terms of FLoating Operation Per Second (FLOPS), and 85.46% less complex in terms of trainable parameters. ""","""The paper has major presentation issues. The rebuttal clarified some technical ones, but it is clear that the authors need to improve the reading substantially, ,so the paper is not acceptable in its current form."""
1877,"""CLAREL: classification via retrieval loss for zero-shot learning""","['zero-shot learning', 'representation learning', 'fine-grained classification']","""We address the problem of learning fine-grained cross-modal representations. We propose an instance-based deep metric learning approach in joint visual and textual space. The key novelty of this paper is that it shows that using per-image semantic supervision leads to substantial improvement in zero-shot performance over using class-only supervision. On top of that, we provide a probabilistic justification for a metric rescaling approach that solves a very common problem in the generalized zero-shot learning setting, i.e., classifying test images from unseen classes as one of the classes seen during training. We evaluate our approach on two fine-grained zero-shot learning datasets: CUB and FLOWERS. We find that on the generalized zero-shot classification task CLAREL consistently outperforms the existing approaches on both datasets.""","""This paper demonstrates that per-image semantic supervision, as opposed to class-only supervision, can benefit zero-shot learning performance in certain contexts. Evaluations are conducted using CUB and FLOWERS fine-grained zero-shot data sets. In terms of evaluation, the paper received mixed final scores (two reject, one accept). During the rebuttal period, both reject reviewers considered the author responses, but in the end did not find the counterarguments sufficiently convincing. For example, one reviewer maintained that in its present form, the paper appeared too shallow without additional experiments and analyses to justify the suitability of the contrastive loss used for obtaining embeddings applied to zero-shot learning. Another continued to believe post-rebuttal that reference Reed et al., (2016) undercut the novelty of the proposed approach. And consistent with these sentiments, even the reviewer who voted for acceptance alluded to the limited novelty of the proposed approach; however, the author response merely states that a future revision will clarify the novelty. But this then requires another round of reviewing to determine whether the contribution is sufficiently new, especially given that all reviewers raised this criticism in one way or another. Furthermore, the rebuttal also mentions the inclusion of some additional experiments, but again, we don't know how these will turn out. Based on these considerations then, the AC did not see sufficient justification for accepting a paper with aggregate scores that are otherwise well below the norm for successful ICLR submissions."""
1878,"""Learning to Reach Goals Without Reinforcement Learning""","['Reinforcement Learning', 'Goal Reaching', 'Imitation Learning']","""Imitation learning algorithms provide a simple and straightforward approach for training control policies via standard supervised learning methods. By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of reinforcement learning, at the cost of requiring an expert demonstrator -- typically a person -- to provide the demonstrations. In this paper, we ask: can we use imitation learning to train effective policies without any expert demonstrations? The key observation that makes this possible is that, in the multi-task setting, trajectories that are generated by a suboptimal policy can still serve as optimal examples for other tasks. In particular, in the setting where the tasks correspond to different goals, every trajectory is a successful demonstration for the state that it actually reaches. Informed by this observation, we propose a very simple algorithm for learning behaviors without any demonstrations, user-provided reward functions, or complex reinforcement learning methods. Our method simply maximizes the likelihood of actions the agent actually took in its own previous rollouts, conditioned on the goal being the state that it actually reached. Although related variants of this approach have been proposed previously in imitation learning settings with example demonstrations, we present the first instance of this approach as a method for learning goal-reaching policies entirely from scratch. We present a theoretical result linking self-supervised imitation learning and reinforcement learning, and empirical results showing that it performs competitively with more complex reinforcement learning methods on a range of challenging goal reaching problems.""","""The authors present an algorithm that utilizes ideas from imitation learning to improve on goal-conditioned policy learning methods that rely on RL, such as hindsight experience replay. Several issues of clarity and the correctness of the main theoretical result were addressed during the rebuttal period in way that satisfied the reviewers with respect to their concerns in these areas. However, after discussion, the reviewers still felt that there were some fundamental issues with the paper, namely that the applicability of this method to more general RL problems (complex reward functions rather than signle state goals, time ) is unclear. The basic idea seems interesting, but it needs further development, and non-trivial modifications, to be broadly applicable as an approach to problems that RL is typically used on. Thus, I recommend rejection of the paper at this time."""
1879,"""Generalized Zero-shot ICD Coding""","['Generalized Zero-shot Learning', 'ICD Coding', 'NLP', 'Generative Model', 'Deep Learning']","""The International Classification of Diseases (ICD) is a list of classification codes for the diagnoses. Automatic ICD coding is in high demand as the manual coding can be labor-intensive and error-prone. It is a multi-label text classification task with extremely long-tailed label distribution, making it difficult to perform fine-grained classification on both frequent and zero-shot codes at the same time. In this paper, we propose a latent feature generation framework for generalized zero-shot ICD coding, where we aim to improve the prediction on codes that have no labeled data without compromising the performance on seen codes. Our framework generates pseudo features conditioned on the ICD code descriptions and exploits the ICD code hierarchical structure. To guarantee the semantic consistency between the generated features and real features, we reconstruct the keywords in the input documents that are related to the conditioned ICD codes. To the best of our knowledge, this works represents the first one that proposes an adversarial generative model for the generalized zero-shot learning on multi-label text classification. Extensive experiments demonstrate the effectiveness of our approach. On the public MIMIC-III dataset, our methods improve the F1 score from nearly 0 to 20.91% for the zero-shot codes, and increase the AUC score by 3% (absolute improvement) from previous state of the art. We also show that the framework improves the performance on few-shot codes.""","""This paper proposes a method to do zero-shot ICD coding, which involves assigning natural language labels (ICD codes) to input text. This is an important practical problem in healthcare, and it is not straightforward to solve, because many ICD codes have none or very few training examples due to the long distribution tail. The authors adapt a GAN-based technique previously used in vision to solve this problem. All of the reviewers agree that the paper is well written and well executed, and that the results are good. However, the reviewers have expressed concerns about the novelty of the GAN adaptation step, and left this paper very much borderline based on the scores it received. Due to the capacity restrictions I therefore have to recommend rejection, however I hope that the authors resubmit elsewhere. """
1880,"""Option Discovery using Deep Skill Chaining""","['Hierarchical Reinforcement Learning', 'Reinforcement Learning', 'Skill Discovery', 'Deep Learning', 'Deep Reinforcement Learning']","""Autonomously discovering temporally extended actions, or skills, is a longstanding goal of hierarchical reinforcement learning. We propose a new algorithm that combines skill chaining with deep neural networks to autonomously discover skills in high-dimensional, continuous domains. The resulting algorithm, deep skill chaining, constructs skills with the property that executing one enables the agent to execute another. We demonstrate that deep skill chaining significantly outperforms both non-hierarchical agents and other state-of-the-art skill discovery techniques in challenging continuous control tasks.""","""This paper tackles the problem of autonomous skill discovery by recursively chaining skills backwards from the goal in a deep learning setting, taking the initial conditions of one skill to be the goal of the previous one. The approach is evaluated on several domains and compared against other state of the art algorithms. This is clearly a novel and interesting paper. Two minor outstanding issues are that the domains are all related to navigation, and it would be interesting to see the approach on other domains, and that the method involves a fair bit of engineering in piecing different methods together. Regardless, this paper should be accepted."""
1881,"""Regularization Matters in Policy Optimization""","['Regularization', 'Policy Optimization', 'Reinforcement Learning']","""Deep Reinforcement Learning (Deep RL) has been receiving increasingly more attention thanks to its encouraging performance on a variety of control tasks. Yet, conventional regularization techniques in training neural networks (e.g., pseudo-formula regularization, dropout) have been largely ignored in RL methods, possibly because agents are typically trained and evaluated in the same environment. In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks. Interestingly, we find conventional regularization techniques on the policy networks can often bring large improvement on the task performance, and the improvement is typically more significant when the task is more difficult. We also compare with the widely used entropy regularization and find pseudo-formula regularization is generally better. Our findings are further confirmed to be robust against the choice of training hyperparameters. We also study the effects of regularizing different components and find that only regularizing the policy network is typically enough. We hope our study provides guidance for future practices in regularizing policy optimization algorithms.""","""This paper proposes an analysis of regularization for policy optimization. While the multiple effects of regularization are well known in the statistics and optimization community, it is less the case in the RL community. This makes the novelty of the paper difficult to judge as it depends on the familiarity of RL researchers with the two aforementioned communities. Besides the novelty aspect, which is debatable, reviewers had doubts on the significance of the results, and in particular on the metrics chosen (based on the rank). While defining a ""best"" algorithm is notoriously difficult, and could be considered outside of the scope of this paper, the fact is that the conclusions reached are still sensitive to that difficulty. I thus regret to reject this paper as I feel not much more work is necessary to provide a compelling story. I encourage the authors to extend their choice of metrics to be more convincing in their conclusions."""
1882,"""Consistency-Based Semi-Supervised Active Learning: Towards Minimizing Labeling Budget""","['Active learning', 'semi-supervised learning']","""Active learning (AL) aims to integrate data labeling and model training in a unified way, and to minimize the labeling budget by prioritizing the selection of high value data that can best improve model performance. Readily-available unlabeled data are used to evaluate selection mechanisms, but are not used for model training in conventional pool-based AL. To minimize the labeling budget, we unify unlabeled sample selection and model training based on two principles. First, we exploit both labeled and unlabeled data using semi-supervised learning (SSL) to distill information from unlabeled data that improves representation learning and sample selection. Second, we propose a simple yet effective selection metric that is coherent with the training objective such that the selected samples are effective at improving model performance. Our experimental results demonstrate superior performance with our proposed principles for limited labeled data compared to alternative AL and SSL combinations. In addition, we study the AL phenomena of `cold start', which is becoming an increasingly more important factor to enable optimal unification of data labeling, model training and labeling budget minimization. We propose a measure that is found to be empirically correlated with the AL target loss. This measure can be used to assist in determining the proper start size.""","""The authors leverage advances in semi-supervised learning and data augmentation to propose a method for active learning. The AL method is based on the principle that a model should consistently label across perturbation/augmentations of examples, and thus propose to choose samples for active learning based on how much the estimated label distribution changes based on different perturbations of a given example. The method is intuitive and the experiments provide some evidence of efficacy. However, during discussion there was a lingering question of novelty that eventually swayed the group to reject this paper. """
1883,"""ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PERSPECTIVE""","['examples weighting', 'emphasis regularisation', 'gradient scaling', 'abnormal training examples']","""It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when semantically abnormal examples exist. Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused and how much more should they be emphasised to achieve robust learning? In this work, we study this question and propose gradient rescaling (GR) to solve it. GR modifies the magnitude of logit vectors gradient to emphasise on relatively easier training data points when noise becomes more severe, which functions as explicit emphasis regularisation to improve the generalisation performance of DNNs. Apart from regularisation, we connect GR to examples weighting and designing robust loss functions. We empirically demonstrate that GR is highly anomaly-robust and outperforms the state-of-the-art by a large margin, e.g., increasing 7% on CIFAR100 with 40% noisy labels. It is also significantly superior to standard regularisers in both clean and abnormal settings. Furthermore, we present comprehensive ablation studies to explore the behaviours of GR under different cases, which is informative for applying GR in real-world scenarios.""","""The paper proposes a gradient rescaling method to make deep neural network training more robust to label noise. The intuition of focusing more on easier examples is not particularly new, but empirical results are promising. On the weak side, no theoretical justification is provided, and the method introduces extra hyperparameters that need to be tuned. Finally, more discussions on recent SOTA methods (e.g., Lee et al. 2019) as well as further comprehensive evaluations on various cases, such as asymmetric label noise, semantic label noise, and open-set label noise, would be needed to justify and demonstrate the effectiveness of the proposed method. """
1884,"""Extreme Value k-means Clustering""","['unsupervised learning', 'clustering', 'k-means', 'Extreme Value Theory']","""Clustering is the central task in unsupervised learning and data mining. k-means is one of the most widely used clustering algorithms. Unfortunately, it is generally non-trivial to extend k-means to cluster data points beyond Gaussian distribution, particularly, the clusters with non-convex shapes (Beliakov & King, 2006). To this end, we, for the first time, introduce Extreme Value Theory (EVT) to improve the clustering ability of k-means. Particularly, the Euclidean space was transformed into a novel probability space denoted as extreme value space by EVT. We thus propose a novel algorithm called Extreme Value k-means (EV k-means), including GEV k-means and GPD k-means. In addition, we also introduce the tricks to accelerate Euclidean distance computation in improving the computational efficiency of classical k-means. Furthermore, our EV k-means is extended to an online version, i.e., online Extreme Value k-means, in utilizing the Mini Batch k-means to cluster streaming data. Extensive experiments are conducted to validate our EV k-means and online EV k-means on synthetic datasets and real datasets. Experimental results show that our algorithms significantly outperform competitors in most cases.""","""This paper explores extending k-means to allow to clusters with non-convex shapes. This paper introduces a new algorithm, relying on empirical comparisons to illustrate its contribution. The main issue with the paper is that the empirical claims do not support that the new method is indeed better. The paper claims the new method outperforms the competitors in most cases. However, the original submission reported median performance and when the authors provided mean performance and additional baseline methods (at the reviewers' request) there appear to be little evidence to support the claim. In addition there are no measures of significance provided. The authors provided no commentary to help the reviewers understand the new results. There might be some important speed gains at the cost of final performance, but on the evidence provided we are not able to evaluate the cost in final performance. The text changes size after section 5.3 and is 9% smaller. Watch out for this formatting issue in future submissions """
1885,"""A NEW POINTWISE CONVOLUTION IN DEEP NEURAL NETWORKS THROUGH EXTREMELY FAST AND NON PARAMETRIC TRANSFORMS""","['Pointwise Convolution', 'Discrete Walsh-Hadamard Transform', 'Discrete Cosine-Transform']",""" Some conventional transforms such as Discrete Walsh-Hadamard Transform (DWHT) and Discrete Cosine Transform (DCT) have been widely used as feature extractors in image processing but rarely applied in neural networks. However, we found that these conventional transforms have the ability to capture the cross-channel correlations without any learnable parameters in DNNs. This paper firstly proposes to apply conventional transforms on pointwise convolution, showing that such transforms significantly reduce the computational complexity of neural networks without accuracy performance degradation. Especially for DWHT, it requires no floating point multiplications but only additions and subtractions, which can considerably reduce computation overheads. In addition, its fast algorithm further reduces complexity of floating point addition from O(n^2) to O(nlog n). These non-parametric and low computational properties construct extremely efficient networks in the number parameters and operations, enjoying accuracy gain. Our proposed DWHT-based model gained 1.49% accuracy increase with 79.4% reduced parameters and 48.4% reduced FLOPs compared with its baseline model (MoblieNet-V1) on the CIFAR 100 dataset.""","""This paper presents an approach to utilize conventional frequency domain basis such as DWHT and DCT to replace the standard point-wise convolution, which can significantly reduce the computational complexity. The paper is generally well-written and easy to follow. However, the technical novelty seems limited as it is basically a simple combination of CNNs and traditional filters. Moreover, as reviewers suggested, it is our history and current consensus in the community that learned representations have significantly outperformed traditional pre-defined features or filters as the training data expands. I do understand the scientific value of revisiting and challenging that belief as commented by R1, but in order to provoke meaningful discussion, experiments on large-scale dataset like ImageNet are definitely necessary. For these reasons, I think the paper is not ready for publication at ICLR and would like to recommend rejection."""
1886,"""Learning Numeral Embedding""","['Natural Language Processing', 'Numeral Embedding', 'Word Embedding', 'Out-of-vocabulary Problem']","""Word embedding is an essential building block for deep learning methods for natural language processing. Although word embedding has been extensively studied over the years, the problem of how to effectively embed numerals, a special subset of words, is still underexplored. Existing word embedding methods do not learn numeral embeddings well because there are an infinite number of numerals and their individual appearances in training corpora are highly scarce. In this paper, we propose two novel numeral embedding methods that can handle the out-of-vocabulary (OOV) problem for numerals. We first induce a finite set of prototype numerals using either a self-organizing map or a Gaussian mixture model. We then represent the embedding of a numeral as a weighted average of the prototype number embeddings. Numeral embeddings represented in this manner can be plugged into existing word embedding learning approaches such as skip-gram for training. We evaluated our methods and showed its effectiveness on four intrinsic and extrinsic tasks: word similarity, embedding numeracy, numeral prediction, and sequence labeling. ""","""This paper proposes better methods to handle numerals within word embeddings. Overall, my impression is that this paper is solid, but not super-exciting. The scope is a little bit limited (to only numbers), and it is not by any means the first paper to handle understanding numbers within word embeddings. A more thorough theoretical and empirical comparison to other methods, e.g. Spithourakis & Riedel (2018) and Chen et al. (2019), could bring the paper a long way. I think this paper is somewhat borderline, but am recommending not to accept because I feel that the paper could be greatly improved by making the above-mentioned comparisons more complete, and thus this could find a better place as a better paper in a new venue."""
1887,"""Expected Tight Bounds for Robust Deep Neural Network Training""","['network robustness', 'network verification', 'interval bound propagation']","""Training Deep Neural Networks (DNNs) that are robust to norm bounded adversarial attacks remains an elusive problem. While verification based methods are generally too expensive to robustly train large networks, it was demonstrated by Gowal et. al. that bounded input intervals can be inexpensively propagated from layer to layer through deep networks. This interval bound propagation (IBP) approach led to high robustness and was the first to be employed on large networks. However, due to the very loose nature of the IBP bounds, particularly for large/deep networks, the required training procedure is complex and involved. In this paper, we closely examine the bounds of a block of layers composed of an affine layer, followed by a ReLU, followed by another affine layer. To this end, we propose \emph{expected} bounds (true bounds in expectation), which are provably tighter than IBP bounds in expectation. We then extend this result to deeper networks through blockwise propagation and show that we can achieve orders of magnitudes tighter bounds compared to IBP. Using these tight bounds, we demonstrate that a simple standard training procedure can achieve impressive robustness-accuracy trade-off across several architectures on both MNIST and CIFAR10.""","""The authors propose a new technique for training networks to be robust to adversarial perturbations. They do this by computing bounds on the impact of the worst case adversarial attack, but that only hold under strong assumptions on the distribution of the network weights. While these bounds are not rigorous, the authors show that they can produce networks that improve the robustness-accuracy tradeoff on image classification tasks. While the idea proposed by the authors is interesting, the reviewers had several concerns about this paper: 1) The assumptions required for the bounds to hold are unrealistic and unlikely to hold in practice, especially for convolutional neural networks. 2) The comparisons are not presented in a fair manner that allow the reader to interpret the difference between the nature of certificates computed by the authors and those computed in prior work. 3) The empirical gains are not substantial if one normalizes for the non-rigorous nature of the certificates computed (given that they only hold under hard-to-justify assumptions). The rebuttal phase clarified some issues in the paper, but the fundamental flaws with the approach remain unaddressed. Thus, I recommend rejection and suggest that the authors revisit the assumptions and develop more convincing arguments and/or experiments justifying them for practical deep learning scenarios."""
1888,"""Uncertainty-Aware Prediction for Graph Neural Networks""","['Uncertainty', 'Graph Neural Networks', 'Subjective Logic', 'Bayesian']","""Thanks to graph neural networks (GNNs), semi-supervised node classification has shown the state-of-the-art performance in graph data. However, GNNs do not consider any types of uncertainties associated with the class probabilities to minimize risk due to misclassification under uncertainty in real life. In this work, we propose a Bayesian deep learning framework reflecting various types of uncertainties for classification predictions by leveraging the powerful modeling and learning capabilities of GNNs. We considered multiple uncertainty types in both deep learning (DL) and belief/evidence theory domains. We treat the predictions of a Bayesian GNN (BGNN) as nodes' multinomial subjective opinions in a graph based on Dirichlet distributions where each belief mass is a belief probability of each class. By collecting evidence from the given labels of training nodes, the BGNN model is designed for accurately predicting probabilities of each class and detecting out-of-distribution. We validated the outperformance of the proposed BGNN, compared to the state-of-the-art counterparts in terms of the accuracy of node classification prediction and out-of-distribution detection based on six real network datasets.""","""The authors propose a way to produce uncertainty measures in graph neural networks. However, the reviewers find that the methods proposed lack novelty and are incremental additions to prior work."""
1889,"""Modeling question asking using neural program generation""","['question asking', 'language generation', 'program induction', 'reinforcement learning', 'density estimation', 'cognitive science']","""People ask questions that are far richer, more informative, and more creative than current AI systems. We propose a neural program generation framework for modeling human question asking, which represents questions as formal programs and generates programs with an encoder-decoder based deep neural network. From extensive experiments using an information-search game, we show that our method can ask optimal questions in synthetic settings, and predict which questions humans are likely to ask in unconstrained settings. We also propose a novel grammar-based question generation framework trained with reinforcement learning, which is able to generate creative questions without supervised data.""","""The authors explore different ways to generate questions about the current state of a Battleship game. Overall the reviewers feel that the problem setting is interesting, and the program generation part is also interesting. However, the proposed approach is evaluated in tangential tasks rather than learning to generate question to achieve the goal. Improving this part is essential to improve the quality of the work. """
1890,"""Self-Supervised Policy Adaptation""","['reinforcement learning', 'environment representation', 'representation learning', 'model mismatch']","""We consider the problem of adapting an existing policy when the environment representation changes. Upon a change of the encoding of the observations the agent can no longer make use of its policy as it cannot correctly interpret the new observations. This paper proposes Greedy State Representation Learning (GSRL) to transfer the original policy by translating the environment representation back into its original encoding. To achieve this GSRL samples observations from both the environment and a dynamics model trained from prior experience. This generates pairs of state encodings, i.e., a new representation from the environment and a (biased) old representation from the forward model, that allow us to bootstrap a neural network model for state translation. Although early translations are unsatisfactory (as expected), the agent eventually learns a valid translation as it minimizes the error between expected and observed environment dynamics. Our experiments show the efficiency of our approach and that it translates the policy in considerably less steps than it would take to retrain the policy.""","""The submission proposes to improve generalization in RL environments, by addressing the scenario where the observations change even though the underlying environment dynamics do not change. The authors address this by learning an adaptation function which maps back to the original representation. The approach is empirically evaluated on the Mountain Car domain. The reviewers were unanimously unimpressed with the experiments, the baselines, and the results. While they agree that the problem is well-motivated, they requested additional evidence that the method works as described and that a simpler approach such as fine-tuning would not be sufficient. The recommendation is to reject the paper at this time."""
1891,"""Semantics Preserving Adversarial Attacks""","['black-box adversarial attacks', 'stein variational inference', 'adversarial images and tex']","""While progress has been made in crafting visually imperceptible adversarial examples, constructing semantically meaningful ones remains a challenge. In this paper, we propose a framework to generate semantics preserving adversarial examples. First, we present a manifold learning method to capture the semantics of the inputs. The motivating principle is to learn the low-dimensional geometric summaries of the inputs via statistical inference. Then, we perturb the elements of the learned manifold using the Gram-Schmidt process to induce the perturbed elements to remain in the manifold. To produce adversarial examples, we propose an efficient algorithm whereby we leverage the semantics of the inputs as a source of knowledge upon which we impose adversarial constraints. We apply our approach on toy data, images and text, and show its effectiveness in producing semantics preserving adversarial examples which evade existing defenses against adversarial attacks.""","""This paper describes a method for generating adversarial examples from images and text such that they maintain the semantics of the input. The reviewers saw a lot of value in this work, but also some flaws. The review process seemed to help answer many questions, but a few remain: there are some questions about the strength of the empirical results on text after the author's updates. Wether the adversarial images stay on the manifold is questioned (are blurry or otherwise noisy images ""on manifold""?). One reviewer raises good questions about the soundness of the comparison to the Song paper. I think this review process has been very productive, and I hope the authors will agree. I hope this feedback helps them to improve their paper."""
1892,"""Contextual Inverse Reinforcement Learning""","['Contextual MDP', 'Inverse Reinforcement Learning', 'Reinforcement Learning', 'Mirror Descent']","""We consider the Inverse Reinforcement Learning problem in Contextual Markov Decision Processes. In this setting, the reward, which is unknown to the agent, is a function of a static parameter referred to as the context. There is also an expert who knows this mapping and acts according to the optimal policy for each context. The goal of the agent is to learn the experts mapping by observing demonstrations. We define an optimization problem for finding this mapping and show that when it is linear, the problem is convex. We present and analyze the sample complexity of three algorithms for solving this problem: the mirrored descent algorithm, evolution strategies, and the ellipsoid method. We also extend the first two methods to work with general reward functions, e.g., deep neural networks, but without the theoretical guarantees. Finally, we compare the different techniques empirically in driving simulation and a medical treatment regime.""","""The authors introduce a framework for inverse reinforcement learning tasks whose reward functions are dependent on context variables and provide a solution by formulating it as a convex optimization problem. Overall, the authors agreed that the method appears to be sound. However, after discussion there were lingering concerns about (1) in what situations this framework is useful or advantageous, (2) how it compares to existing, modern IRL algorithms that take context into account, and (3) if the theoretical and experimental results were truly useful in evaluating the algorithm. Given that these issues were not able to be fully resolved, I recommend that this paper be rejected at this time."""
1893,"""DeepSimplex: Reinforcement Learning of Pivot Rules Improves the Efficiency of Simplex Algorithm in Solving Linear Programming Problems""","['Simplex Algorithm', 'Pivoting Rules', 'Reinforcement Learning', 'Combinatorial Optimization', 'Supervised Learning', 'Travelling Salesman Problem']","""Linear Programs (LPs) are a fundamental class of optimization problems with a wide variety of applications. Fast algorithms for solving LPs are the workhorse of many combinatorial optimization algorithms, especially those involving integer programming. One popular method to solve LPs is the simplex method which, at each iteration, traverses the surface of the polyhedron of feasible solutions. At each vertex of the polyhedron, one of several heuristics chooses the next neighboring vertex, and these vary in accuracy and computational cost. We use deep value-based reinforcement learning to learn a pivoting strategy that at each iteration chooses between two of the most popular pivot rules -- Dantzig and steepest edge. Because the latter is typically more accurate and computationally costly than the former, we assign a higher wall time-based cost to steepest edge iterations than Dantzig iterations. We optimize this weighted cost on a neural net architecture designed for the simplex algorithm. We obtain between 20% to 50% reduction in the gap between weighted iterations of the individual pivoting rules, and the best possible omniscient policies for LP relaxations of randomly generated instances of five-city Traveling Salesman Problem. ""","""This paper present a learning method for speeding up of LP, and apply it to the TSP problem. Reviewers and AC agree that the idea is quite interesting and promising. However, I think the paper is far from being ready to publish in various aspects: (a) much more editorial efforts are necessary (b) the TPS application of small scale is not super appealing Hence, I recommend rejection."""
1894,"""Don't Use Large Mini-batches, Use Local SGD""",[],"""Mini-batch stochastic gradient methods (SGD) are state of the art for distributed training of deep neural networks. Drastic increases in the mini-batch sizes have lead to key efficiency and scalability gains in recent years. However, progress faces a major roadblock, as models trained with large batches often do not generalize well, i.e. they do not show good accuracy on new data. As a remedy, we propose a \emph{post-local} SGD and show that it significantly improves the generalization performance compared to large-batch training on standard benchmarks while enjoying the same efficiency (time-to-accuracy) and scalability. We further provide an extensive study of the communication efficiency vs. performance trade-offs associated with a host of \emph{local SGD} variants. ""","""The authors propose a simple modification of local SGD for parallel training, starting with standard SGD and then switching to local SGD. The resulting method provides good results and makes a practical contribution. Please carefully account for reviewer comments in future revisions."""
1895,"""Lite Transformer with Long-Short Range Attention""","['efficient model', 'transformer']","""Transformer has become ubiquitous in natural language processing (e.g., machine translation, question answering); however, it requires enormous amount of computations to achieve high performance, which makes it not suitable for mobile applications that are tightly constrained by the hardware resources and battery. In this paper, we present an efficient mobile NLP architecture, Lite Transformer to facilitate deploying mobile NLP applications on edge devices. The key primitive is the Long-Short Range Attention (LSRA), where one group of heads specializes in the local context modeling (by convolution) while another group specializes in the long-distance relationship modeling (by attention). Such specialization brings consistent improvement over the vanilla transformer on three well-established language tasks: machine translation, abstractive summarization, and language modeling. Under constrained resources (500M/100M MACs), Lite Transformer outperforms transformer on WMT'14 English-French by 1.2/1.7 BLEU, respectively. Lite Transformer reduces the computation of transformer base model by 2.5x with 0.3 BLEU score degradation. Combining with pruning and quantization, we further compressed the model size of Lite Transformer by 18.2x. For language modeling, Lite Transformer achieves 1.8 lower perplexity than the transformer at around 500M MACs. Notably, Lite Transformer outperforms the AutoML-based Evolved Transformer by 0.5 higher BLEU for the mobile NLP setting without the costly architecture search that requires more than 250 GPU years. Code has been made available at pseudo-url.""","""This paper presents an efficient architecture of Transformer to facilitate implementations on mobile settings. The core idea is to decompose the self-attention layers to focus on local and global information separately. In the experiments on machine translation, it is shown to outperform baseline Transformer as well as the Evolved Transformer obtained by a costly architecture search. While all reviewers admitted the practical impact of the results in terms of engineering, the main concerns in the initial paper were the clarification of the mobile settings and scientific contributions. Through the discussion, reviewers are fairly satisfied with the authors response and are now all positive to the acceptance. Although we are still curious how it works on other tasks (as the title says mobile applications), I think the paper provides enough insights valuable to the community, so Id like to recommend acceptance. """
1896,"""Robustness Verification for Transformers""","['Robustness', 'Verification', 'Transformers']","""Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding model behavior and obtaining safety guarantees. However, previous methods can usually only handle neural networks with relatively simple architectures. In this paper, we consider the robustness verification problem for Transformers. Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous works. We resolve these challenges and develop the first robustness verification algorithm for Transformers. The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation. These bounds also shed light on interpreting Transformers as they consistently reflect the importance of different words in sentiment analysis.""","""A robustness verification method for transformers is presented. While robustness verification has previously been attempted for other types of neural networks, this is the first method for transformers. Reviewers are generally happy with the work done, but there were complaints about not comparing with and citing previous work, and only analyzing a simple one-layer version of transformers. The authors convincingly respond to these complaints. I think that the paper can be accepted, given that the reviewers' complaints have been addressed and the paper seems to be sufficiently novel and have practical importance for understanding transformers."""
1897,"""Deep Nonlinear Stochastic Optimal Control for Systems with Multiplicative Uncertainties""","['Deep Learning', 'Stochastic Optimal Control', 'Robotics', 'Biomechanics', 'LSTM']","""We present a deep recurrent neural network architecture to solve a class of stochastic optimal control problems described by fully nonlinear Hamilton Jacobi Bellman partial differential equations. Such PDEs arise when one considers stochastic dynamics characterized by uncertainties that are additive and control multiplicative. Stochastic models with the aforementioned characteristics have been used in computational neuroscience, biology, finance and aerospace systems and provide a more accurate representation of actuation than models with additive uncertainty. Previous literature has established the inadequacy of the linear HJB theory and instead rely on a non-linear Feynman-Kac lemma resulting in a second order forward-backward stochastic differential equations representation. However, the proposed solutions that use this representation suffer from compounding errors and computational complexity leading to lack of scalability. In this paper, we propose a deep learning based algorithm that leverages the second order Forward-Backward SDE representation and LSTM based recurrent neural networks to not only solve such Stochastic Optimal Control problems but also overcome the problems faced by previous approaches and scales well to high dimensional systems. The resulting control algorithm is tested on non-linear systems in robotics and biomechanics to demonstrate feasibility and out-performance against previous methods. ""","""A nice paper, but quite some unclarities; it's unclear in particular if the paper improves w.r.t. SOTA. Esp. scaling is an issue here. Also, the understandability is below par and more work can make this into an acceptable submission."""
1898,"""Probability Calibration for Knowledge Graph Embedding Models""","['knowledge graph embeddings', 'probability calibration', 'calibration', 'graph representation learning', 'knowledge graphs']","""Knowledge graph embedding research has overlooked the problem of probability calibration. We show popular embedding models are indeed uncalibrated. That means probability estimates associated to predicted triples are unreliable. We present a novel method to calibrate a model when ground truth negatives are not available, which is the usual case in knowledge graphs. We propose to use Platt scaling and isotonic regression alongside our method. Experiments on three datasets with ground truth negatives show our contribution leads to well calibrated models when compared to the gold standard of using negatives. We get significantly better results than the uncalibrated models from all calibration methods. We show isotonic regression offers the best the performance overall, not without trade-offs. We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.""","""The paper proposes a novel method to calibrate a knowledge graph embedding method when ground truth negatives are not available. Essentially, the method relies on generating corrupted triples as negative examples to be used by known approaches (Platt scaling and isotonic regression). This is claimed as the first approach of probability calibration for knowledge graph embedding models, which is considered to be very relevant for practitioners working on knowledge graph embedding (although this is a narrow audience). The paper does not propose a wholly novel method for probability calibration. Instead, the value in experimental insights provided. Some reviewers would have liked to see a more in-depth analysis, but reviewers appreciated the thoroughness of the results in the clear articulation of the findings and the fact that multiple datasets and models are studied. There was an animated discussion about this paper, but the paper seems a useful contribution to the ICLR community and I would like to recommend acceptance. """
1899,"""Retrieving Signals in the Frequency Domain with Deep Complex Extractors""","['Deep Complex Networks', 'Signal Extraction']","""Recent advances have made it possible to create deep complex-valued neural networks. Despite this progress, the potential power of fully complex intermediate computations and representations has not yet been explored for many challenging learning problems. Building on recent advances, we propose a novel mechanism for extracting signals in the frequency domain. As a case study, we perform audio source separation in the Fourier domain. Our extraction mechanism could be regarded as a local ensembling method that combines a complex-valued convolutional version of Feature-Wise Linear Modulation (FiLM) and a signal averaging operation. We also introduce a new explicit amplitude and phase-aware loss, which is scale and time invariant, taking into account the complex-valued components of the spectrogram. Using the Wall Street Journal Dataset, we compare our phase-aware loss to several others that operate both in the time and frequency domains and demonstrate the effectiveness of our proposed signal extraction method and proposed loss. When operating in the complex-valued frequency domain, our deep complex-valued network substantially outperforms its real-valued counterparts even with half the depth and a third of the parameters. Our proposed mechanism improves significantly deep complex-valued networks' performance and we demonstrate the usefulness of its regularizing effect.""","""The paper discusses audio source separation with complex NNs. The approach is good and may increase an area of research. But the experimental section is very weak and needs to be improved to merit publication."""
1900,"""Evaluating and Calibrating Uncertainty Prediction in Regression Tasks""","['Uncertainty Estimation', 'Regression', 'Deep learning']","""Predicting not only the target but also an accurate measure of uncertainty is important for many applications and in particular safety-critical ones. In this work we study the calibration of uncertainty prediction for regression tasks which often arise in real-world systems. We show that the existing definition for calibration of a regression uncertainty [Kuleshov et al. 2018] has severe limitations in distinguishing informative from non-informative uncertainty predictions. We propose a new definition that escapes this caveat and an evaluation method using a simple histogram-based approach inspired by reliability diagrams used in classification tasks. Our method clusters examples with similar uncertainty prediction and compares the prediction with the empirical uncertainty on these examples. We also propose a simple scaling-based calibration that preforms well in our experimental tests. We show results on both a synthetic, controlled problem and on the object detection bounding-box regression task using the COCO and KITTI datasets.""","""The paper investigates calibration for regression problems. The paper identifies a shortcoming of previous work by Kuleshov et al. 2018 and proposes an alternative. All the reviewers agreed that while this is an interesting direction, the paper requires more work before it can be accepted. In particular, the reviewers raised several concerns about motivation, clarity of the presentation and lack of in-depth empirical evaluation. I encourage the authors to revise the draft based on the reviewers feedback and resubmit to a different venue. """
1901,"""Adversarial Policies: Attacking Deep Reinforcement Learning""","['deep RL', 'adversarial examples', 'security', 'multi-agent']","""Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers. However, an attacker is not usually able to directly modify another agent's observations. This might lead one to wonder: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial? We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots with proprioceptive observations, against state-of-the-art victims trained via self-play to be robust to opponents. The adversarial policies reliably win against the victims but generate seemingly random and uncoordinated behavior. We find that these policies are more successful in high-dimensional environments, and induce substantially different activations in the victim policy network than when the victim plays against a normal opponent. Videos are available at pseudo-url.""","""This paper demonstrates that for deep RL problems one can construct adversarial examples where the examples don't really need to be even better than the best opponent. Surprisingly, sometimes, the adversarial opponent is less capable than normal opponents which the victim plays successfully against, yet they can disrupt the policies. The authors present a physically realistic threat model and demonstrate that adversarial policies can exist in this threat model. The reviewers agree with this paper presents results (proof of concept) that is ""timely"" and the RL community will benefit from this result. Based on reviewers comment, I recommend to accept this paper. """
1902,"""Improving Gradient Estimation in Evolutionary Strategies With Past Descent Directions""","['Evolutionary Strategies', 'Surrogate Gradients']","""We propose a novel method to optimally incorporate surrogate gradient information. Our approach, unlike previous work, needs no information about the quality of the surrogate gradients and is always guaranteed to find a descent direction that is better than the surrogate gradient. This allows to iteratively use the previous gradient estimate as surrogate gradient for the current search point. We theoretically prove that this yields fast convergence to the true gradient for linear functions and show under simplifying assumptions that it significantly improves gradient estimates for general functions. Finally, we evaluate our approach empirically on MNIST and reinforcement learning tasks and show that it considerably improves the gradient estimation of ES at no extra computational cost.""","""The authors propose a novel approach to using surrogate gradient information in ES. Unlike previous approaches, their method always finds a descent direction that is better than the surrogate gradient. This allows them to use previous gradient estimates as the surrogate gradient. They prove results for the linear case and under simplifying assumptions that it extends beyond the linear case. Finally, they evaluate on MNIST and RL tasks and show improvements over ES. After the revisions, reviewers were concerned about: * The strong (and potentially unrealistic) assumptions for the theorems. They felt that these assumptions trivialized the theorems. * Limited experiments demonstrating advantages in situations where other more effective methods could be used. The performance on the RL tasks shows small gains compared to a vanilla ES approach. Thus, the usefulness of the approach is not clearly demonstrated. I think that the paper has the potential to be a strong submission if the authors can extend their experiments to more complex problems and demonstrate gains. At this time however, I recommend rejection."""
1903,"""Efficient High-Dimensional Data Representation Learning via Semi-Stochastic Block Coordinate Descent Methods""","['Sparse learning', 'Hard thresholding', 'High-dimensional regression']","""With the increase of data volume and data dimension, sparse representation learning attracts more and more attention. For high-dimensional data, randomized block coordinate descent methods perform well because they do not need to calculate the gradient along the whole dimension. Existing hard thresholding algorithms evaluate gradients followed by a hard thresholding operation to update the model parameter, which leads to slow convergence. To address this issue, we propose a novel hard thresholding algorithm, called Semi-stochastic Block Coordinate Descent Hard Thresholding Pursuit (SBCD-HTP). Moreover, we present its sparse and asynchronous parallel variants. We theoretically analyze the convergence properties of our algorithms, which show that they have a significantly lower hard thresholding complexity than existing algorithms. Our empirical evaluations on real-world datasets and face recognition tasks demonstrate the superior performance of our algorithms for sparsity-constrained optimization problems.""","""All the reviewers reach a consensus to reject the current submission. In addition, there are two assumptions in the proof which seemed never included in Theorem conditions or verified in typical cases. 1) Between Eq (16) and (17), the authors assumed the 'extended restricted strong convexity given by the un-numbered equation. 2) In Eq. (25), the authors assume the existence of \sigma making the inequality true. However those assumptions are neither explicitly stated in theorem conditions, nor verified for typical cases in applications, e.g. even the square or logistic loss. The authors need to address these assumptions explicitly rather than using them from nowhere."""
1904,"""Goten: GPU-Outsourcing Trusted Execution of Neural Network Training and Prediction""","['machine learning', 'security', 'privacy', 'TEE', 'trusted processors', 'Intel SGX', 'GPU', 'high-performance']","""Before we saw worldwide collaborative efforts in training machine-learning models or widespread deployments of prediction-as-a-service, we need to devise an efcient privacy-preserving mechanism which guarantees the privacy of all stakeholders (data contributors, model owner, and queriers). Slaom (ICLR 19) preserves privacy only for prediction by leveraging both trusted environment (e.g., Intel SGX) and untrusted GPU. The challenges for enabling private training are explicitly left open its pre-computation technique does not hide the model weights and fails to support dynamic quantization corresponding to the large changes in weight magnitudes during training. Moreover, it is not a truly outsourcing solution since (ofine) pre-computation for a job takes as much time as computing the job locally by SGX, i.e., it only works before all pre-computations are exhausted. We propose Goten, a privacy-preserving framework supporting both training and prediction. We tackle all the above challenges by proposing a secure outsourcing protocol which 1) supports dynamic quantization, 2) hides the model weight from GPU, and 3) performs better than a pure-SGX solution even if we perform the precomputation online. Our solution leverages a non-colluding assumption which is often employed by cryptographic solutions aiming for practical efciency (IEEE SP 13, Usenix Security 17, PoPETs 19). We use three servers, which can be reduced to two if the pre-computation is done ofine. Furthermore, we implement our tailor-made memory-aware measures for minimizing the overhead when the SGX memory limit is exceeded (cf., EuroSys 17, Usenix ATC 19). Compared to a pure-SGX solution, our experiments show that Goten can speed up linear-layer computations in VGG up to 40, and overall speed up by 8.64 on VGG11.""","""This paper proposes a framework for privacy-preserving training of neural networks within a Trusted Execution Environment (TEE) such as Intel SGX. The reviewers found that this is a valuable research directions, but found that there were significant flaws in the experimental setup that need to be addressed. In particular, the paper does not run all the experiments in the same setup, which leads to the use of scaling factor in some cases. The reviewers found that this made it difficult to make sense of the results. The writing of this paper should be streamlined, along with the experiments before resubmission."""
1905,"""A Simple Recurrent Unit with Reduced Tensor Product Representations""","['RNNs', 'TPRs']","""Widely used recurrent units, including Long-short Term Memory (LSTM) and Gated Recurrent Unit (GRU), perform well on natural language tasks, but their ability to learn structured representations is still questionable. Exploiting reduced Tensor Product Representations (TPRs) --- distributed representations of symbolic structure in which vector-embedded symbols are bound to vector-embedded structural positions --- we propose the TPRU, a simple recurrent unit that, at each time step, explicitly executes structural-role binding and unbinding operations to incorporate structural information into learning. The gradient analysis of our proposed TPRU is conducted to support our model design, and its performance on multiple datasets shows the effectiveness of it. Furthermore, observations on linguistically grounded study demonstrate the interpretability of our TPRU.""","""This paper has been reviewed by three reviewers and received scores such as 3/3/6. The reviewers took into account the rebuttal in their final verdict. The major criticism concerned the somewhat ad-hoc notion of interpretability, the analysis of vanishing/exploding gradients in TPRU is experimental lacking theory. Finally, all reviewers noted the paper is difficult to read and contains grammar issues etc. which does not help. On balance, we regret that this paper cannot be accepted to ICLR2020. """
1906,"""Deep End-to-end Unsupervised Anomaly Detection """,[],"""This paper proposes a novel method to detect anomalies in large datasets under a fully unsupervised setting. The key idea behind our algorithm is to learn the representation underlying normal data. To this end, we leverage the latest clustering technique suitable for handling high dimensional data. This hypothesis provides a reliable starting point for normal data selection. We train an autoencoder from the normal data subset, and iterate between hypothesizing normal candidate subset based on clustering and representation learning. The reconstruction error from the learned autoencoder serves as a scoring function to assess the normality of the data. Experimental results on several public benchmark datasets show that the proposed method outperforms state-of-the-art unsupervised techniques and is comparable to semi-supervised techniques in most cases.""","""The authors propose an approach for anomaly detection in the setting where the training data includes both normal and anomalous data. Their approach is a fairly straightforward extension of existing ideas, in which they iterate between clustering the data into normal vs. anomalous and learning an autoencoder representation of normal data that is then used to score normality of new data. The results are promising, but the experiments are fairly limited. The authors argue that their experimental settings follow those of prior work, but I think that for such an incremental contribution, more empirical work should be done, regardless of the limitations of particular prior work."""
1907,"""Redundancy-Free Computation Graphs for Graph Neural Networks""","['Graph Neural Networks', 'Runtime Performance']","""Graph Neural Networks (GNNs) are based on repeated aggregations of information across nodes' neighbors in a graph. However, because common neighbors are shared between different nodes, this leads to repeated and inefficient computations. We propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN graph representation that explicitly avoids redundancy by managing intermediate aggregation results hierarchically, and eliminating repeated computations and unnecessary data transfers in GNN training and inference. We introduce an accurate cost function to quantitatively evaluate the runtime performance of different HAGs and use a novel search algorithm to find optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN graph representation by increasing the end-to-end training throughput by up to 2.8x and reducing the aggregations and data transfers in GNN training by up to 6.3x and 5.6x, respectively. Meanwhile, HAGs improve runtime performance while preserving the GNN computation, and maintain the original model accuracy for arbitrary GNNs.""","""This paper proposes a new graph hierarchy representation (HAG) which eliminates redundancy during the aggregation stage and improves computation efficiency. It achieves good speedups and also provides theoretical analysis. There have been several concerns from the reviewers; the authors' response addressed them partially. Despite this, due to the large number of strong papers, we cannot accept the paper at this time. We encourage the authors to further improve the work for a future version. """
1908,"""Environmental drivers of systematicity and generalization in a situated agent""","['systematicity', 'systematic', 'generalization', 'combinatorial', 'agent', 'policy', 'language', 'compositionality']","""The question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI. Here, we consider tests of out-of-sample generalisation that require an agent to respond to never-seen-before instructions by manipulating and positioning objects in a 3D Unity simulated room. We first describe a comparatively generic agent architecture that exhibits strong performance on these tests. We then identify three aspects of the training regime and environment that make a significant difference to its performance: (a) the number of object/word experiences in the training set; (b) the visual invariances afforded by the agent's perspective, or frame of reference; and (c) the variety of visual input inherent in the agent's perception. Our findings indicate that the degree of generalisation that networks exhibit can depend critically on particulars of the environment in which a given task is instantiated. They further suggest that the propensity for neural networks to generalise in systematic ways may increase if, like human children, those networks have access to many frames of richly varying, multi-modal observations as they learn.""","""The paper studies tests of out-of-sample generalisation that require an agent to respond to never-seen-before instructions by manipulating and positioning objects in a 3D Unity simulated room, and analyzes factors which promote combinatorial generalization in such an environment. The paper is a very thought-provoking work, and would make a valuable contribution to the line of work on systematic generalization in embodied agents. The draft has been improved significantly after the rebuttal. After the discussion, we agree that it is worthwhile presenting at ICLR. """
1909,"""Analysis of Video Feature Learning in Two-Stream CNNs on the Example of Zebrafish Swim Bout Classification""","['convolutional neural networks', 'neural network transparency', 'AI explainability', 'deep Taylor decomposition', 'supervised classification', 'zebrafish', 'transparency', 'behavioral research', 'optical flow']","""Semmelhack et al. (2014) have achieved high classification accuracy in distinguishing swim bouts of zebrafish using a Support Vector Machine (SVM). Convolutional Neural Networks (CNNs) have reached superior performance in various image recognition tasks over SVMs, but these powerful networks remain a black box. Reaching better transparency helps to build trust in their classifications and makes learned features interpretable to experts. Using a recently developed technique called Deep Taylor Decomposition, we generated heatmaps to highlight input regions of high relevance for predictions. We find that our CNN makes predictions by analyzing the steadiness of the tail's trunk, which markedly differs from the manually extracted features used by Semmelhack et al. (2014). We further uncovered that the network paid attention to experimental artifacts. Removing these artifacts ensured the validity of predictions. After correction, our best CNN beats the SVM by 6.12%, achieving a classification accuracy of 96.32%. Our work thus demonstrates the utility of AI explainability for CNNs.""","""This paper presents a case study of training a video classifier and subsequently analyzing the features to reduce reliance on spurious artifacts. The supervised learning task is zebrafish bout classification which is relevant for biological experiments. The paper analyzed the image support for the learned neural net features using a previously developed technique called Deep Taylor Decomposition. This analysis showed that the CNNs when applied to the raw video were relying on artifacts of the data collection process, which spuriously increased classification accuracies by a ""clever Hans"" mechanism. By identifying and removing these artifacts, a retrained CNN classifier was able to outperform an older SVM classifier. More importantly, the analysis of the network features enabled the researchers to isolate which parts of the zebrafish motion were relevant for the classification. The reviewers found the paper to be well-written and the experiments to be well-designed. The reviewers suggested some changes to the phrasing in the document, which the authors adopted. In response to the reviewers, the authors also clarified their use of ImageNet for pre-training and examined alternative approaches for building saliency maps. This paper should be published as the reviewers found the paper to be a good case study of how model interpretability can be useful in practice. """
1910,"""Implementing Inductive bias for different navigation tasks through diverse RNN attractors""","['navigation', 'Recurrent Neural Networks', 'dynamics', 'inductive bias', 'pre-training', 'reinforcement learning']","""Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map. The precise form of this representation is often considered to be a metric representation of space. An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks. Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment. To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q-learning stage that controls the network's output. We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors, or a disordered state. These structures induce a bias on the Q-learning phase, leading to a performance pattern across the tasks corresponding to metric and topological regularities. Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes -- which can be shaped by pre-training and analyzed using dynamical systems methods. Furthermore, we demonstrate that non-metric representations are useful for navigation tasks. ""","""Navigation is learned in a two-stage process, where the (recurrent) network is first pre-trained in a task-agnostic stage and then fine-tuned using Q-learning. The analysis of the learned network confirms that what has been learned in the task-agnostic pre-training stage takes the form of attractors. The reviewers generally liked this work, but complained about a lack of comparison studies/baselines. The authors then carried out such studies and did a major update of the paper. Given that the extensive update of the paper seems to have addressed the reviewers' complaints, I think this paper can be accepted."""
1911,"""Learning representations for binary-classification without backpropagation""","['feedback alignment', 'alternatives to backpropagation', 'biologically motivated learning algorithms']","""The family of feedback alignment (FA) algorithms aims to provide a more biologically motivated alternative to backpropagation (BP), by substituting computations that are unrealistic to implement in physical brains. While FA algorithms have been shown to work well in practice, there is a lack of rigorous theory proving their learning capabilities. Here we introduce the first feedback alignment algorithm with provable learning guarantees. In contrast to existing work, we do not require any assumption about the size or depth of the network except that it has a single output neuron, as in binary classification tasks. We show that our FA algorithm can deliver on its theoretical promises in practice, surpassing the learning performance of existing FA methods and matching backpropagation in binary classification tasks. Finally, we demonstrate the limits of our FA variant when the number of output neurons grows beyond a certain quantity.""","""This paper provides a rigorous analysis of feedback alignment under two restrictions: 1) all layers except the first are constrained to realize monotone functions, and 2) the task is binary classification. Overall, all reviewers agree that this is an interesting submission providing important results on the topic, and as such all agree that it should feature in the ICLR program. Thus, I recommend acceptance. However, I ask the authors to take into account the reviewers' concerns and include a discussion about the limitations (and general applicability) of this work."""
1912,"""Implementation Matters in Deep RL: A Case Study on PPO and TRPO""","['deep policy gradient methods', 'deep reinforcement learning', 'trpo', 'ppo']","""We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO). Specifically, we investigate the consequences of ""code-level optimizations:"" algorithm augmentations found only in implementations or described as auxiliary details to the core algorithm. Seemingly of secondary importance, such optimizations turn out to have a major impact on agent behavior. Our results show that they (a) are responsible for most of PPO's gain in cumulative reward over TRPO, and (b) fundamentally change how RL methods function. These insights show the difficulty, and importance, of attributing performance gains in deep reinforcement learning. ""","""This paper provides a careful and well-executed evaluation of the code-level details of two leading policy search algorithms, which are typically considered implementation details and therefore often unstated or brushed aside in papers. These are revealed to have major implications for the performance of both algorithms. The reviewers are all in agreement that this paper has important reproducibility and evaluation implications for the field, and adds substantially to our body of knowledge on policy gradient algorithms. I therefore recommend it be accepted. However, a serious limitation is that only 3 random seeds were used to get average performance in the first, key experiment. Experiments are expensive, but that result is not meaningful without more runs, and arguably could be misleading rather than informative. The authors should increase the number of runs as much as possible, at least to 10 but ideally more."""
1913,"""Fourier networks for uncertainty estimates and out-of-distribution detection""","['Fourier network', 'out-of-distribution detection', 'large initialization', 'uncertainty', 'ensembles']","""A simple method for obtaining uncertainty estimates for Neural Network classifiers (e.g. for out-of-distribution detection) is to use an ensemble of independently trained networks and average the softmax outputs. While this method works, its results are still very far from human performance on standard data sets. We investigate how this method works and observe three fundamental limitations: ""Unreasonable"" extrapolation, ""unreasonable"" agreement between the networks in an ensemble, and the filtering out of features that distinguish the training distribution from some out-of-distribution inputs, but do not contribute to the classification. To mitigate these problems we suggest ""large"" initializations in the first layers and changing the activation function to sin(x) in the last hidden layer. We show that this combines the out-of-distribution behavior from nearest neighbor methods with the generalization capabilities of neural networks, and achieves greatly improved out-of-distribution detection on standard data sets (MNIST/fashionMNIST/notMNIST, SVHN/CIFAR10).""","""This paper presents a new method for detecting out-of-distribution (OOD) samples. A reviewer pointed out that the paper presents an interesting finding and that the addressed problem is important. On the other hand, other reviewers pointed out that the theoretical/empirical justifications are limited. In particular, I think that experimental support for why the proposed method is superior to existing ones is limited. I encourage the authors to consider more scenarios of OOD detection (e.g., datasets and architectures) and more baselines, as the problem of measuring the confidence of neural networks or detecting outliers has a rich literature. This would provide a more comprehensive understanding of the proposed method. Hence, I recommend rejection. """
1914,"""Logic and the 2-Simplicial Transformer""","['transformer', 'logic', 'reinforcement learning', 'reasoning']","""We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning. ""","""This paper extends the Transformer, implementing higher-dimensional attention generalizing the dot-product attention. The AC agrees with Reviewer3's comments that generalizing attention from 2nd- to 3rd-order relations is an important upgrade, that the mathematical context is insightful, and that this could lead to further development. The readability of the paper still remains an issue, and it needs to be addressed in the final version of the paper."""
1915,"""LEARNING TO IMPUTE: A GENERAL FRAMEWORK FOR SEMI-SUPERVISED LEARNING""","['Semi-supervised Learning', 'Meta-Learning', 'Learning to label']","""Recent semi-supervised learning methods have been shown to achieve comparable results to their supervised counterparts while using only a small portion of labels in image classification tasks, thanks to their regularization strategies. In this paper, we take a more direct approach to semi-supervised learning and propose learning to impute the labels of unlabeled samples such that a network achieves better generalization when it is trained on these labels. We pose the problem in a learning-to-learn formulation which can easily be incorporated into state-of-the-art semi-supervised techniques and boosts their performance, especially when the labels are limited. We demonstrate that our method is applicable to both classification and regression problems, including image classification and facial landmark detection tasks.""","""There is insufficient support to recommend accepting this paper. The reviewers unanimously criticize the quality of the exposition, noting that many key elements in the main development and experimental setup are not clear. The significance of the contribution could be made stronger with some form of theoretical analysis. The current paper lacks depth and provides insufficient justification for the proposed approach. The submitted comments should be able to help the authors improve the paper."""
1916,"""Evaluating Semantic Representations of Source Code""","['embeddings', 'representation', 'source code', 'identifiers']","""Learned representations of source code enable various software developer tools, e.g., to detect bugs or to predict program properties. At the core of code representations often are word embeddings of identifier names in source code, because identifiers account for the majority of source code vocabulary and convey important semantic information. Unfortunately, there currently is no generally accepted way of evaluating the quality of word embeddings of identifiers, and current evaluations are biased toward specific downstream tasks. This paper presents IdBench, the first benchmark for evaluating to what extent word embeddings of identifiers represent semantic relatedness and similarity. The benchmark is based on thousands of ratings gathered by surveying 500 software developers. We use IdBench to evaluate state-of-the-art embedding techniques proposed for natural language, an embedding technique specifically designed for source code, and lexical string distance functions, as these are often used in current developer tools. Our results show that the effectiveness of embeddings varies significantly across different embedding techniques and that the best available embeddings successfully represent semantic relatedness. On the downside, no existing embedding provides a satisfactory representation of semantic similarities, e.g., because embeddings consider identifiers with opposing meanings as similar, which may lead to fatal mistakes in downstream developer tools. IdBench provides a gold standard to guide the development of novel embeddings that address the current limitations. ""","""This paper presents a dataset to evaluate the quality of embeddings learnt for source code. The dataset consists of three different subtasks: relatedness, similarity, and contextual similarity. The main contribution of the paper is the construction of these datasets, which should be useful to the community. However, there are valid concerns raised about the size of the datasets (which is pretty small) and the baselines used to evaluate the embeddings -- there should be a baseline using a contextual embedding model like BERT, which could have been fine-tuned on the source code data. If these comments are addressed, the paper can be a good contribution at an NLP conference. As of now, I recommend a Rejection."""
1917,"""Not All Features Are Equal: Feature Leveling Deep Neural Networks for Better Interpretation""",[],"""Self-explaining models are models that reveal decision making parameters in an interpretable manner so that the model reasoning process can be directly understood by human beings. General Linear Models (GLMs) are self-explaining because the model weights directly show how each feature contributes to the output value. However, deep neural networks (DNNs) are in general not self-explaining due to the non-linearity of the activation functions, complex architectures, and an obscure feature extraction and transformation process. In this work, we illustrate the fact that existing deep architectures are hard to interpret because each hidden layer carries a mix of low level features and high level features. As a solution, we propose a novel feature leveling architecture that isolates low level features from high level features on a per-layer basis to better utilize the GLM layer in the proposed architecture for interpretation. Experimental results show that our modified models are able to achieve competitive results compared to mainstream architectures on standard datasets while being more self-explainable. Our implementations and configurations are publicly available for reproduction.""","""This paper proposes to learn self-explaining neural networks using a feature leveling idea. Unfortunately, the reviewers have raised several concerns about the paper, including insufficient novelty and weak experiments. The authors did not provide a rebuttal. We hope the authors can improve the paper in a future submission based on the comments. """
1918,"""Sparse and Structured Visual Attention""","['Sparsity', 'attention', 'structured attention', 'total variation', 'fused lasso', 'image captioning']","""Visual attention mechanisms have been widely used in image captioning models. In this paper, to better link the image structure with the generated text, we replace the traditional softmax attention mechanism with two alternative sparsity-promoting transformations: sparsemax and Total-Variation Sparse Attention (TVmax). With sparsemax, we obtain sparse attention weights, selecting relevant features. In order to promote sparsity and encourage the fusing of related adjacent spatial locations, we propose TVmax. By selecting relevant groups of features, the TVmax transformation improves interpretability. We present results on the Microsoft COCO and Flickr30k datasets, obtaining gains in comparison to softmax. TVmax outperforms the other compared attention mechanisms in terms of human-rated caption quality and attention relevance.""","""This paper presents sparse attention mechanisms for image captioning. In addition to the recent sparsemax-based method, the authors propose to extend it by incorporating structural constraints of 2D images, which is called TVmax. The proposed methods are shown to improve the quality of captioning, particularly in terms of fewer erroneous repetitions, and obtain better human evaluation scores. Through the reviewer discussion, one reviewer updated their score to rejection. A major concern raised by the reviewers is that the motivation for introducing sparse attention is not clear, and the reason why it improves quality (particularly, why it can reduce repetition) is not convincing. While we understand it is plausible for long sequences as in the text domain, we are not convinced that it is really necessary for image captioning problems. Although the authors seem to have some ideas, we cannot see how they will be reflected in the paper, so I'd like to recommend rejection. I recommend the authors polish the paper with a clearer description of the motivation and a high-level analysis of the method, as well as testing on other visual tasks to show its generality. """
1919,"""Projection-Based Constrained Policy Optimization""","['Reinforcement learning with constraints', 'Safe reinforcement learning']","""We consider the problem of learning control policies that optimize a reward function while satisfying constraints due to considerations of safety, fairness, or other costs. We propose a new algorithm - Projection-Based Constrained Policy Optimization (PCPO), an iterative method for optimizing policies in a two-step process - the first step performs an unconstrained update while the second step reconciles the constraint violation by projecting the policy back onto the constraint set. We theoretically analyze PCPO and provide a lower bound on reward improvement, as well as an upper bound on constraint violation for each policy update. We further characterize the convergence of PCPO with projection based on two different metrics - L2 norm and Kullback-Leibler divergence. Our empirical results over several control tasks demonstrate that our algorithm achieves superior performance, averaging more than 3.5 times less constraint violation and around 15% higher reward compared to state-of-the-art methods.""","""The paper proposes a new algorithm for solving constrained MDPs called Projection-Based Constrained Policy Optimization. Compared to CPO, it projects the solution back to the feasible region after each step, which results in improvements on some of the tasks considered. The problem addressed is relevant, as many tasks could have important constraints, e.g., concerning fairness or safety. The method is supported through theory and empirical results. It is great to have theoretical bounds on the policy improvement and constraint violation of the algorithm, although they only apply to the intractable version of the algorithm (another approximate algorithm is proposed that is used in practice). The experimental evidence is a bit mixed, with the best of the proposed projections (based on the KL approach) sometimes beating CPO but also sometimes being beaten by it, both on the obtained reward and on constraint satisfaction. The method only considers a single constraint. I'm not sure how trivial it would be to add more than one constraint. The reviewers also mention that the paper does not implement TRPO as in the original paper, where the step size in the direction of the natural gradient is refined with a line search if the original step size (calculated using the quadratic expansion of the expected KL) violates the original constraints. (A line search on the constraint, as mentioned by the authors, would be a different issue.) Furthermore, the quadratic expansion of the KL is symmetric around the current policy in parameter space. This means that, starting from a feasible solution, the trust region should always overlap with the constraint set when feasibility is maintained, going somewhat against the argument for PCPO as opposed to CPO brought up by the authors in the discussion with R2. I would also show this symmetry in illustrations such as Fig 1 to aid understanding. """
1920,"""Neural-Guided Symbolic Regression with Asymptotic Constraints""","['symbolic regression', 'program synthesis', 'monte carlo tree search']","""Symbolic regression is a type of discrete optimization problem that involves searching for expressions that fit given data points. In many cases, other mathematical constraints about the unknown expression not only provide more information beyond just values at some inputs, but also effectively constrain the search space. We identify the asymptotic constraints of leading polynomial powers as the function approaches 0 and infinity as useful constraints and create a system to use them for symbolic regression. The first part of the system is a conditional expression generating neural network which preferentially generates expressions with the desired leading powers, producing novel expressions outside the training domain. The second part, which we call Neural-Guided Monte Carlo Tree Search, uses the network during a search to find an expression that conforms to a set of data points and desired leading powers. Lastly, we provide an extensive experimental validation on thousands of target expressions showing the efficacy of our system compared to existing methods for finding unknown functions outside of the training set.""","""This paper proposes 1) using neural-guided Monte-Carlo Tree Search to search for expressions that match a dataset and 2) augmenting the loss to match the asymptotics of the true function when these are given. The use of MCTS sounds more sensible than standard evolutionary search. The augmented loss could make sense but seems extremely niche, requiring specific side information about the problem being solved. Overall, the task is so niche that I don't think it'll be of wide interest. It's not clear that it's solving a real problem."""
1921,"""Deep Interaction Processes for Time-Evolving Graphs""","['deep temporal point process', 'multiple time resolutions', 'dynamic continuous time-evolving graph', 'anti-fraud detection']","""Time-evolving graphs are ubiquitous, such as online transactions on an e-commerce platform and user interactions on social networks. While neural approaches have been proposed for graph modeling, most of them focus on static graphs. In this paper we present a principled deep neural approach that models continuous time-evolving graphs at multiple time resolutions based on a temporal point process framework. To model the dependency between latent dynamic representations of each node, we define a mixture of temporal cascades in which a node's neural representation depends on not only this node's previous representations but also the previous representations of related nodes that have interacted with this node. We generalize the LSTM to this temporal cascade mixture and introduce novel time gates to model time intervals between interactions. Furthermore, we introduce a selection mechanism that gives important nodes large influence in both pseudo-formula hop subgraphs of nodes in an interaction. To capture temporal dependency at multiple time resolutions, we stack our neural representations in several layers and fuse them based on attention. Based on the temporal point process framework, our approach can naturally handle growth (and shrinkage) of graph nodes and interactions, making it inductive. Experimental results on interaction prediction and classification tasks -- including a real-world financial application -- illustrate the effectiveness of the time gate, the selection and attention mechanisms of our approach, as well as its superior performance over the alternative approaches.""","""All reviewers rated this paper as a weak reject. The author response was just not enough to sway any of the reviewers to revise their assessment. The AC recommends rejection."""
1922,"""Learning to Discretize: Solving 1D Scalar Conservation Laws via Deep Reinforcement Learning""","['Numerical Methods', 'Conservation Laws', 'Reinforcement Learning']","""Conservation laws are considered to be fundamental laws of nature. They have broad applications in many fields, including physics, chemistry, biology, geology, and engineering. Solving the differential equations associated with conservation laws is a major branch in computational mathematics. The recent success of machine learning, especially deep learning, in areas such as computer vision and natural language processing, has attracted a lot of attention from the community of computational mathematics and inspired many intriguing works in combining machine learning with traditional methods. In this paper, we are the first to explore the possibility and benefit of solving nonlinear conservation laws using deep reinforcement learning. As a proof of concept, we focus on 1-dimensional scalar conservation laws. We deploy the machinery of deep reinforcement learning to train a policy network that can decide on how the numerical solutions should be approximated in a sequential and spatial-temporal adaptive manner. We will show that the problem of solving conservation laws can be naturally viewed as a sequential decision making process and the numerical schemes learned in such a way can easily enforce long-term accuracy. Furthermore, the learned policy network is carefully designed to determine a good local discrete approximation based on the current state of the solution, which essentially makes the proposed method a meta-learning approach. In other words, the proposed method is capable of learning how to discretize for a given situation mimicking human experts. Finally, we will provide details on how the policy network is trained, how well it performs compared with some state-of-the-art numerical solvers such as WENO schemes, and how well it generalizes. Our code is released anonymously at \url{pseudo-url}.""","""This paper proposes using RL to solve PDEs, with application to solving conservation laws. It is quite borderline, with one reviewer weakly recommending acceptance, one finding the paper interesting but the application not sufficiently novel, and one confessing they have not understood the paper. I concur with R2 that this is a difficult subject matter, but the other reviewers seem satisfied with the clarity of the presentation. R3 seems to believe the paper sufficiently proves the concept to warrant publication. I confess I do not understand R1's argument for lack of novelty, despite my pushing for further detail. I see this as a novel application of RL methods, and R1 admits this will be seen as novel for a PDE conference. I am in favour of a certain degree of interdisciplinarity at ICLR, and believe this paper could bring a bit of subject-matter diversity to the programme. However, due to the number of high-quality submissions in my area, I'm afraid this one must be rejected due to limited space. The authors are encouraged to submit to another ML conference after addressing (or having addressed) some of the action items from the more sympathetic reviewers."""
1923,"""Disentangled GANs for Controllable Generation of High-Resolution Images""","['Disentangled GANs', 'controllable generation', 'high-resolution image synthesis', 'semantic manipulation', 'fine-grained factors']","""Generative adversarial networks (GANs) have achieved great success at generating realistic samples. However, achieving disentangled and controllable generation still remains challenging for GANs, especially in the high-resolution image domain. Motivated by this, we introduce AC-StyleGAN, a combination of AC-GAN and StyleGAN, to demonstrate that the controllable generation of high-resolution images is possible with sufficient supervision. More importantly, using only 5% of the labelled data significantly improves the disentanglement quality. Inspired by the observed separation of fine and coarse styles in StyleGAN, we then extend AC-StyleGAN to a new image-to-image model called FC-StyleGAN for semantic manipulation of fine-grained factors in a high-resolution image. In experiments, we show that FC-StyleGAN performs well in controlling only fine-grained factors, with the use of instance normalization, and also demonstrate its good generalization ability to unseen images. Finally, we create two new datasets -- Falcor3D and Isaac3D -- with higher resolution, more photorealism, and richer variation, as compared to existing disentanglement datasets.""","""The paper presents a model combining AC-GAN and StyleGAN for semi-supervised learning of disentangled generative adversarial networks. It also proposes new datasets of 3D images as benchmarks. The main claim is that the proposed model can achieve strong disentanglement by using 1-5% of the annotations on the factors of variation. The technical contribution is moderate, and the architecture itself is not highly novel. While the proposed method seems to work for controlled/synthetic datasets, the overall technical contribution seems incremental, and it is unclear whether it can perform well on larger-scale, real datasets. The experimental results on CelebA don't look convincing enough. """
1924,"""Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML""","['nonconvex optimization', 'min-max optimization', 'robust optimization', 'adversarial attack']","""In this paper, we study the problem of constrained robust (min-max) optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values. We present a principled optimization framework, integrating a zeroth-order (ZO) gradient estimator with an alternating projected stochastic gradient descent-ascent method, where the former only requires a small number of function queries and the latter needs just a one-step descent/ascent update. We show that the proposed framework, referred to as ZO-Min-Max, has a sub-linear convergence rate under mild conditions and scales gracefully with problem size. From an application side, we explore a promising connection between black-box min-max optimization and black-box evasion and poisoning attacks in adversarial machine learning (ML). Our empirical evaluations on these use cases demonstrate the effectiveness of our approach and its scalability to dimensions that prohibit using recent black-box solvers.""","""This paper provides convergence results for zeroth-order optimization. One of the main complaints was that ZO has limited use in ML. I appreciate the authors' response that there are cases where gradients are not easily available, especially for black-box attacks. However, I find the limited applicability an issue for ICLR, and I encourage the authors to find a conference that is better suited to this work."""
1925,"""Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration""","['generative adversarial network', 'GAN', 'model compatibility', 'machine learning efficacy']","""Generative Adversarial Networks (GANs) are a powerful family of models that learn an underlying distribution to generate synthetic data. Many existing studies of GANs focus on improving the realness of the generated image data for visual applications, and few of them concern improving the quality of the generated data for training other classifiers---a task known as the model compatibility problem. As a consequence, existing GANs often prefer generating `easier' synthetic data that are far from the boundaries of the classifiers, and refrain from generating near-boundary data, which are known to play an important role in training the classifiers. To improve GANs in terms of model compatibility, we propose Boundary-Calibration GANs (BCGANs), which leverage the boundary information from a set of pre-trained classifiers using the original data. In particular, we introduce an auxiliary Boundary-Calibration loss (BC-loss) into the generator of a GAN to match the statistics between the posterior distributions of original data and generated data with respect to the boundaries of the pre-trained classifiers. The BC-loss is provably unbiased and can be easily coupled with different GAN variants to improve their model compatibility. Experimental results demonstrate that BCGANs not only generate realistic images like original GANs but also achieve superior model compatibility compared to the original GANs.""","""The paper presents a method for increasing the ""model compatibility"" of Generative Adversarial Networks by adding a term to the loss function relating to classification boundaries. The reviewers recognized the importance of the problem, but several concerns were raised about the clarity of the paper, as well as the significance of the experimental results."""
1926,"""Ensemble Distribution Distillation""","['Ensemble Distillation', 'Knowledge Distillation', 'Uncertainty Estimation', 'Density Estimation']","""Ensembles of models often yield improvements in system performance. These ensemble approaches have also been empirically shown to yield robust measures of uncertainty, and are capable of distinguishing between different forms of uncertainty. However, ensembles come at a computational and memory cost which may be prohibitive for many applications. There has been significant work done on the distillation of an ensemble into a single model. Such approaches decrease computational cost and allow a single model to achieve an accuracy comparable to that of an ensemble. However, information about the diversity of the ensemble, which can yield estimates of different forms of uncertainty, is lost. This work considers the novel task of Ensemble Distribution Distillation (EnD^2) - distilling the distribution of the predictions from an ensemble, rather than just the average prediction, into a single model. EnD^2 enables a single model to retain both the improved classification performance of ensemble distillation as well as information about the diversity of the ensemble, which is useful for uncertainty estimation. A solution for EnD^2 based on Prior Networks, a class of models which allow a single neural network to explicitly model a distribution over output distributions, is proposed in this work. The properties of EnD^2 are investigated on both an artificial dataset, and on the CIFAR-10, CIFAR-100 and TinyImageNet datasets, where it is shown that EnD^2 can approach the classification performance of an ensemble, and outperforms both standard DNNs and Ensemble Distillation on the tasks of misclassification and out-of-distribution input detection.""","""The paper investigates how to distill an ensemble effectively (using a prior network) in order to reap the benefits of uncertainty estimation provided by ensembling (in addition to the accuracy gains provided by ensembling). Overall, the paper is nicely written, and makes a valuable contribution. The authors also addressed most of the initial concerns raised by the reviewers. I recommend the paper for acceptance, and encourage the authors to take into account the reviewer feedback when preparing the final version."""
1927,"""Not All Features Are Equal: Feature Leveling Deep Neural Networks for Better Interpretation""",[],"""Self-explaining models are models that reveal decision making parameters in an interpretable manner so that the model reasoning process can be directly understood by human beings. General Linear Models (GLMs) are self-explaining because the model weights directly show how each feature contributes to the output value. However, deep neural networks (DNNs) are in general not self-explaining due to the non-linearity of the activation functions, complex architectures, and an obscure feature extraction and transformation process. In this work, we illustrate the fact that existing deep architectures are hard to interpret because each hidden layer carries a mix of low level features and high level features. As a solution, we propose a novel feature leveling architecture that isolates low level features from high level features on a per-layer basis to better utilize the GLM layer in the proposed architecture for interpretation. Experimental results show that our modified models are able to achieve competitive results compared to mainstream architectures on standard datasets while being more self-explainable. Our implementations and configurations are publicly available for reproduction.""","""This paper proposes to learn self-explaining neural networks using a feature leveling idea. Unfortunately, the reviewers have raised several concerns about the paper, including insufficient novelty and weak experiments. The authors did not provide a rebuttal. We hope the authors can improve the paper in a future submission based on the comments. """
1928,"""Guiding Program Synthesis by Learning to Generate Examples""","['program synthesis', 'programming by examples']","""A key challenge of existing program synthesizers is ensuring that the synthesized program generalizes well. This can be difficult to achieve as the specification provided by the end user is often limited, containing as few as one or two input-output examples. In this paper we address this challenge via an iterative approach that finds ambiguities in the provided specification and learns to resolve these by generating additional input-output examples. The main insight is to reduce the problem of selecting which program generalizes well to the simpler task of deciding which output is correct. As a result, to train our probabilistic models, we can take advantage of the large amounts of data in the form of program outputs, which are often much easier to obtain than the corresponding ground-truth programs.""","""The paper considers the problem of program induction from a small dataset of input-output pairs; the small amount of available data results in a large set of valid candidate programs. The authors propose to train a neural oracle by unsupervised learning on the given data, and to synthesize new pairs to augment the given data, thereby reducing the set of admissible programs. This is reminiscent of data augmentation schemes, e.g., elastic transforms for image data. The reviewers appreciate the simplicity and effectiveness of this approach, as demonstrated on an Android UI dataset. The authors successfully addressed most negative points raised by the reviewers in the rebuttal, except the lack of experimental validation on other datasets. I recommend accepting this paper, based on the reviews and my own reading. I think the manuscript could be further improved by more explicitly discussing (early in the paper) the intuition for why the authors think this approach is sensible: the additional information for more successfully inferring the correct program has to come from somewhere; as no new information is given by, e.g., a human oracle, it was injected by the choice of prior over neural oracles. It is essential that the paper discuss this. """
1929,"""Synthesizing Programmatic Policies that Inductively Generalize""","['Program synthesis', 'reinforcement learning', 'inductive generalization']","""Deep reinforcement learning has successfully solved a number of challenging control tasks. However, learned policies typically have difficulty generalizing to novel environments. We propose an algorithm for learning programmatic state machine policies that can capture repeating behaviors. By doing so, they have the ability to generalize to instances requiring an arbitrary number of repetitions, a property we call inductive generalization. However, state machine policies are hard to learn since they consist of a combination of continuous and discrete structures. We propose a learning framework called adaptive teaching, which learns a state machine policy by imitating a teacher; in contrast to traditional imitation learning, our teacher adaptively updates itself based on the structure of the student. We show that our algorithm can be used to learn policies that inductively generalize to novel environments, whereas traditional neural network policies fail to do so. ""","""The authors consider control tasks that require ""inductive generalization"", i.e., the ability to repeat certain primitive behaviors. They propose state-machine policies, which switch between low-level policies based on learned transition criteria. The approach is tested on multiple continuous control environments and compared to RL baselines as well as an ablation. The reviewers appreciated the general idea of the paper. During the rebuttal, the authors addressed most of the issues raised in the reviews, and hence the reviewers increased their scores. The paper is marginally above acceptance. On the positive side: learning structured policies is clearly desirable but difficult, and the paper proposes an interesting set of ideas to tackle this challenge. My main concern about this work is: the approach uses the true environment simulator, as the training relies on gradients of the reward function. This makes the tasks planning rather than RL problems; this needs to be highlighted, as it severely limits the applicability of the proposed approach. Furthermore, this also means that the comparison to the model-free PPO baselines is less meaningful. The authors should clearly mention this. Overall, however, I think there are enough good ideas presented here to warrant acceptance. """
1930,"""Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks""","['Low-precision training', 'numerics', 'deep learning']","""Training with larger numbers of parameters while keeping fast iterations is an increasingly adopted strategy and trend for developing better-performing Deep Neural Network (DNN) models. This necessitates an increased memory footprint and computational requirements for training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out of the box for representative models: ResNet50, Transformer and NCF. The method can maintain model accuracy without requiring fine-tuning loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors - shifted and squeezed factors - that are used to optimally adjust the range of the tensors in 8-bits, thus minimizing the loss in information due to quantization.""","""Main description: the paper focuses on training neural networks using 8-bit floating-point numbers (FP8). The goal is highly motivated: training neural networks faster, with a smaller memory footprint and energy consumption. Discussions: reviewer 3 gives a very short review and is not knowledgeable in this area (rating is weak accept); reviewer 4 finds the paper well written and convincing, with some minor technical flaws (not very knowledgeable); reviewer 1 finds the paper interesting but argues it is not very practical (not very knowledgeable); reviewer 2 gives the most thorough and knowledgeable review, and likes the scope of the paper and its interest to ICLR. Recommendation: going mainly by reviewer 2, I vote to accept this as a poster."""
1931,"""Iterative energy-based projection on a normal data manifold for anomaly localization""","['deep learning', 'visual inspection', 'unsupervised anomaly detection', 'anomaly localization', 'autoencoder', 'variational autoencoder', 'gradient descent', 'inpainting']","""Autoencoder reconstructions are widely used for the task of unsupervised anomaly localization. Indeed, an autoencoder trained on normal data is expected to only be able to reconstruct normal features of the data, allowing the segmentation of anomalous pixels in an image via a simple comparison between the image and its autoencoder reconstruction. In practice however, local defects added to a normal image can deteriorate the whole reconstruction, making this segmentation challenging. To tackle the issue, we propose in this paper a new approach for projecting anomalous data onto an autoencoder-learned normal data manifold, by using gradient descent on an energy derived from the autoencoder's loss function. This energy can be augmented with regularization terms that model priors on what constitutes the user-defined optimal projection. By iteratively updating the input of the autoencoder, we bypass the loss of high-frequency information caused by the autoencoder bottleneck. This allows us to produce images of higher quality than classic reconstructions. Our method achieves state-of-the-art results on various anomaly localization datasets. It also shows promising results on an inpainting task on the CelebA dataset.""","""This paper proposes an autoencoder-based approach for anomaly localization. The method shows promising results on the inpainting task compared with a traditional autoencoder. The first two reviewers recommend this paper for acceptance. The last reviewer has some concerns about the experimental design and whether the VAE is a suitable baseline. The authors provided a reasonable explanation in the rebuttal, and the reviewer did not give further comments. Overall, the paper proposes a promising approach for anomaly localization; thus, I recommend it for acceptance. """
1932,"""Automated curriculum generation through setter-solver interactions""","['Deep Reinforcement Learning', 'Automatic Curriculum']","""Reinforcement learning algorithms use correlations between policies and rewards to improve agent performance. But in dynamic or sparsely rewarding environments, these correlations are often too small, or rewarding events are too infrequent, to make learning feasible. Human education instead relies on curricula: the breakdown of tasks into simpler, static challenges with dense rewards that build up to complex behaviors. While curricula are also useful for artificial agents, hand-crafting them is time consuming. This has led researchers to explore automatic curriculum generation. Here we explore automatic curriculum generation in rich, dynamic environments. Using a setter-solver paradigm, we show the importance of considering goal validity, goal feasibility, and goal coverage to construct useful curricula. We demonstrate the success of our approach in rich but sparsely rewarding 2D and 3D environments, where an agent is tasked to achieve a single goal selected from a set of possible goals that varies between episodes, and identify challenges for future work. Finally, we demonstrate the value of a novel technique that guides agents towards a desired goal distribution. Altogether, these results represent a substantial step towards applying automatic task curricula to learn complex, otherwise unlearnable goals, and to our knowledge are the first to demonstrate automated curriculum generation for goal-conditioned agents in environments where the possible goals vary between episodes.""","""The authors introduce a method to automatically generate a learning curriculum (of goals) in a sparse reward RL setting, examining several criteria for goal setting to induce a useful curriculum. The reviewers agreed that this was an exciting research direction but also had concerns about baseline comparisons, clarity of some technical points, hyperparameter tuning (and the effect on the strength of empirical results), and computational tractability. After discussion, the reviewers felt most of these points were sufficiently addressed. Thus, I recommend acceptance at this time."""
1933,"""Mixed Precision Training With 8-bit Floating Point""","['8-bit training', '8-bit floating point', 'low precision training', 'deep learning']","""Reduced precision computation is one of the key areas addressing the widening compute gap, driven by an exponential growth in deep learning applications. In recent years, deep neural network training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges, because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18/34/50, GNMT, and Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point, to improve error propagation. We also examine the impact of quantization noise on generalization, and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to the full precision baseline.""","""This paper proposes a method to train DNNs using 8-bit floating point numbers, by using an enhanced loss scaling method and a stochastic rounding method. However, the proposed method lacks novelty, and both the paper presentation and the experiments need to be improved throughout. """
1934,"""Geometric Insights into the Convergence of Nonlinear TD Learning""","['TD', 'nonlinear', 'convergence', 'value estimation', 'reinforcement learning']","""While there are convergence guarantees for temporal difference (TD) learning when using linear function approximators, the situation for nonlinear models is far less understood, and divergent examples are known. Here we take a first step towards extending theoretical convergence guarantees to TD learning with nonlinear function approximation. More precisely, we consider the expected learning dynamics of the TD(0) algorithm for value estimation. As the step-size converges to zero, these dynamics are defined by a nonlinear ODE which depends on the geometry of the space of function approximators, the structure of the underlying Markov chain, and their interaction. We find a set of function approximators that includes ReLU networks and has geometry amenable to TD learning regardless of environment, so that the solution performs about as well as linear TD in the worst case. Then, we show how environments that are more reversible induce dynamics that are better for TD learning and prove global convergence to the true value function for well-conditioned function approximators. Finally, we generalize a divergent counterexample to a family of divergent problems to demonstrate how the interaction between approximator and environment can go wrong and to motivate the assumptions needed to prove convergence. ""","""This paper takes steps towards a theory of convergence for TD(0) with non-linear function approximation. The paper provides two theoretical results. One result bounds the error when training the sum of linear and homogenous parameterized functions. The second result shows global convergence when the environment dynamics are sufficiently reversible and the differentiable function approximation is sufficiently well-conditioned. The paper provides additional insight using a family of environments with partially reversible dynamics. The reviewers commented on several aspects of this work. The reviewers wrote that the presentation was clear and that the topic was relevant. The reviewers were satisfied with the correctness of the results. The reviewers liked the result that state value function estimation error is bounded when using homogeneous functions. They also noted that the deep networks in common use are not homogeneous so this result does not apply directly. The result showing global convergence of TD(0) with partial reversibility was also appreciated. Finally, the reviewers liked the family of examples. This paper is acceptable for publication as the presentation was clear, the results are solid, and the research direction could lead to additional insights."""
1935,"""A Fair Comparison of Graph Neural Networks for Graph Classification""","['graph neural networks', 'graph classification', 'reproducibility', 'graph representation learning']","""Experimental reproducibility and replicability are critical topics in machine learning. Authors have often raised concerns about their lack in scientific publications to improve the quality of the field. Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works. As such, several Graph Neural Network models have been developed to effectively tackle graph classification. However, experimental procedures often lack rigorousness and are hardly reproducible. Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art. To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks. Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet. We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.""","""The paper provides a careful, reproducible empirical comparison of 5 graph neural network models on 9 datasets for graph classification. The paper shows that baseline methods that use only node features (either counting node types, or summing node features) can be competitive. The authors also provide some guidelines for ways to improve reproducibility in empirical comparisons of graph classification. The authors responded well to the issued raised during review, and updated the paper during the discussion period. The reviewers improved their score, and while there were reservations about the comprehensiveness of the set of experiments, they all agreed that the paper provides a solid empirical contribution to the literature. As machine learning becomes increasingly popular, papers that perform a careful empirical survey of baselines provide an important sanity check that future work can be built upon. Therefore, this paper, while not covering all possible graph neural network questions, provides an excellent starting point for future work to extend. """
1936,"""Fast is better than free: Revisiting adversarial training""","['adversarial examples', 'adversarial training', 'fast gradient sign method']","""Adversarial training, a method for learning robust deep networks, is typically assumed to be more expensive than traditional training due to the necessity of constructing adversarial examples via a first-order method like projected gradient decent (PGD). In this paper, we make the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach that was previously believed to be ineffective, rendering the method no more costly than standard training in practice. Specifically, we show that adversarial training with the fast gradient sign method (FGSM), when combined with random initialization, is as effective as PGD-based training but has significantly lower cost. Furthermore we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR10 classifier with 45% robust accuracy at epsilon=8/255 in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at epsilon=2/255 in 12 hours, in comparison to past work based on ``free'' adversarial training which took 10 and 50 hours to reach the same respective thresholds. ""","""This paper provides a surprising result: that randomization and FGSM can produce robust models faster than previous methods given the right mix of cyclic learning rate, mixed precision, etc. This paper produced a fair bit of controversy among both the community and the reviewers to the point where there were suggestions of bugs, evaluation problems, and other issues leading to the results. In the end, the authors released the code (and made significant updates to the paper based on all the feedback). Multiple reviewers checked the code and were happy. There was an extensive author response, and all the reviewers indicated that their primary concerns were address, save concerns about the sensitivity of step-size and the impact of early stopping. Overall, the paper is well written and clear. The proposed approach is simple and well explained. The result is certainly interesting, and this paper will continue to generate fruitful debate. There are still things to address to improve the paper, listed above. I strongly encourage the authors to continue to improve the work and make a more concerted effort to carefully discuss the impacts of early stopping. """
1937,"""DivideMix: Learning with Noisy Labels as Semi-supervised Learning""","['label noise', 'semi-supervised learning']","""Deep neural networks are known to be annotation-hungry. Numerous efforts have been devoted to reducing the annotation cost when learning with deep networks. Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data. In this work, we propose DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques. In particular, DivideMix models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples, and trains the model on both the labeled and unlabeled data in a semi-supervised manner. To avoid confirmation bias, we simultaneously train two diverged networks where each network uses the dataset division from the other network. During the semi-supervised training phase, we improve the MixMatch strategy by performing label co-refinement and label co-guessing on labeled and unlabeled samples, respectively. Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods. Code is available at pseudo-url .""","""This paper proposes an algorithm for noisy labels by adopting an idea in the recent semi-supervised learning algorithm. As two problems of training noisy labels and semi-supervised ones are closely related, it is not surprising to expect such results as pointed out by reviewers. However, reported thorough experimental results are strong and I think this paper can be useful for practitioners and following works. Hence, I recommend acceptance."""
1938,"""Model Imitation for Model-Based Reinforcement Learning""",['Model-Based Reinforcement Learning'],"""Model-based reinforcement learning (MBRL) aims to learn a dynamic model to reduce the number of interactions with real-world environments. However, due to estimation error, rollouts in the learned model, especially those of long horizon, fail to match the ones in real-world environments. This mismatching has seriously impacted the sample complexity of MBRL. The phenomenon can be attributed to the fact that previous works employ supervised learning to learn the one-step transition models, which has inherent difficulty ensuring the matching of distributions from multi-step rollouts. Based on the claim, we propose to learn the synthesized model by matching the distributions of multi-step rollouts sampled from the synthesized model and the real ones via WGAN. We theoretically show that matching the two can minimize the difference of cumulative rewards between the real transition and the learned one. Our experiments also show that the proposed model imitation method outperforms the state-of-the-art in terms of sample complexity and average return.""","""This paper addresses challenges in offline model learning, i.e., in the setting where some trajectories are given and can be used for learning a model, which in turn serves to train an RL agent or plan action sequences in simulation. A key issue in this setting is that of compounding errors: as the simulated trajectory deviates from observed data, errors build up, leading to suboptimal performance in the target domain. The paper proposes a distribution matching approach that considers trajectory sequence information and provides theoretical guarantees as well as some promising empirical results. Several issues were raised by reviewers, including missing references, clarity issues, questions about limitations of the theoretical analysis, and limitations of the empirical validation. Many of the issues raised by reviewers were addressed by the authors during the rebuttal phase. At the same time, several issues remain. First, the authors committed to adding results for additional tasks (initially deemed too easy or too hard to show differences). Even if the tasks show little separation between methods, these would be important data points to include as they support additional comparisons with prior and future work. The AC has to assess the paper without taking promised additional results into account. Second, questions about the results for Ant are not sufficiently addressed. The plot shows no learning. The author response mentions initialization but this is not deemed a sufficient explanation. Given the remaining questions, my assessment is that the quality and contribution of the submission are not yet ready for publication at the current stage."""
1939,"""Mutual Information Maximization for Robust Plannable Representations""","['reinforcement learning', 'robust learning', 'model based', 'planning', 'representation learning']","""Extending the capabilities of robotics to real-world complex, unstructured environments requires the capability of developing better perception systems while maintaining low sample complexity. When dealing with high-dimensional state spaces, current methods are either model-free, or model-based with reconstruction based objectives. The sample inefficiency of the former constitutes a major barrier for applying them to the real-world. While the latter present low sample complexity, they learn latent spaces that need to reconstruct every single detail of the scene. Real-world environments are unstructured and cluttered with objects. Capturing all the variability on the latent representation harms its applicability to downstream tasks. In this work, we present mutual information maximization for robust plannable representations (MIRO), an information theoretic representational learning objective for model-based reinforcement learning. Our objective optimizes for a latent space that maximizes the mutual information with future observations and emphasizes the relevant aspects of the dynamics, which allows to capture all the information needed for planning. We show that our approach learns a latent representation that in cluttered scenes focuses on the task relevant features, ignoring the irrelevant aspects. At the same time, state-of-the-art methods with reconstruction objectives are unable to learn in such environments.""","""The manuscript concerns a mutual information maximization objective for dynamics model learning, with the aim of using this representation for planning / skill learning. The central claim is that this objective promotes robustness to visual distractors, compared with reconstruction-based objectives. The proposed method is evaluated on DeepMind Control Suite tasks from rendered pixel observations, modified to include simple visual distractors. Reviewers concurred that the problem under consideration is important, and (for the most part) that the presentation was clear, though one reviewer disagreed, remarking that the method is only introduced on the 5th page. A central sticking point was whether the method would reliably give rise to representations that ignore distractors and preferentially encode task information. (I would note that a very similar phenomenon to the behaviour they describe has been empirically demonstrated before in Warde-Farley et al 2018, also on DM Control Suite tasks, where the most predictable/controllable elements of a scene are reliably imitated by a goal-conditioned policy trained against a MI-based reward). The distractors evaluated were criticized as unrealistically stochastic, that fully deterministic distractors may confound the procedure; while a revised version of the manuscript experimented with *less* random distractors, these distractors were still unpredictable at the scale of more than a few frames. While the manuscript has improved considerably in several ways based on reviewer feedback, reviewers remain unconvinced by the empirical investigation, particularly the choice of distractors. I therefore recommend rejection at this time, while encouraging the authors to incorporate criticisms to strengthen a resubmission."""
1940,"""Distance-based Composable Representations with Neural Networks""","['Representation learning', 'Wasserstein distance', 'Composability', 'Templates']","""We introduce a new deep learning technique that builds individual and class representations based on distance estimates to randomly generated contextual dimensions for different modalities. Recent works have demonstrated advantages to creating representations from probability distributions over their contexts rather than single points in a low-dimensional Euclidean vector space. These methods, however, rely on pre-existing features and are limited to textual information. In this work, we obtain generic template representations that are vectors containing the average distance of a class to randomly generated contextual information. These representations have the benefit of being both interpretable and composable. They are initially learned by estimating the Wasserstein distance for different data subsets with deep neural networks. Individual samples or instances can then be compared to the generic class representations, which we call templates, to determine their similarity and thus class membership. We show that this technique, which we call WDVec, delivers good results for multi-label image classification. Additionally, we illustrate the benefit of templates and their composability by performing retrieval with complex queries where we modify the information content in the representations. Our method can be used in conjunction with any existing neural network and create theoretically infinitely large feature maps.""","""The paper proposes an approach for learning class-level and individual-level (token-level) representations based on Wasserstein distances between data subsets. The idea is appealing and seems to have applicability to multiple tasks. The reviewers voiced significant concerns with the unclear writing of the paper and with the limited experiments. The authors have improved the paper, but to my mind it still needs a good amount of work on both of these aspects. The choice of wording in many places is imprecise. The tasks are non-standard ones so they don't have existing published numbers to compare against; in such a situation I would expect to see more baselines, such as alternative class/instance representations that would show the benefit specifically of the Wasserstein distance-based approach. I cannot tell from the paper in its current form whether or when I would want to use the proposed approach. In short, despite a very interesting initial idea, I believe the paper is too preliminary for publication."""
1941,"""EgoMap: Projective mapping and structured egocentric memory for Deep RL""","['Reinforcement Learning', 'Deep Learning', 'Computer Vision', 'Robotics', 'Neural Memory']","""Tasks involving localization, memorization and planning in partially observable 3D environments are an ongoing challenge in Deep Reinforcement Learning. We present EgoMap, a spatially structured neural memory architecture. EgoMap augments a deep reinforcement learning agents performance in 3D environments on challenging tasks with multi-step objectives. The EgoMap architecture incorporates several inductive biases including a differentiable inverse projection of CNN feature vectors onto a top-down spatially structured map. The map is updated with ego-motion measurements through a differentiable affine transform. We show this architecture outperforms both standard recurrent agents and state of the art agents with structured memory. We demonstrate that incorporating these inductive biases into an agents architecture allows for stable training with reward alone, circumventing the expense of acquiring and labelling expert trajectories. A detailed ablation study demonstrates the impact of key aspects of the architecture and through extensive qualitative analysis, we show how the agent exploits its structured internal memory to achieve higher performance. ""","""This paper presents a spatially structured neural memory architecture that supports navigation tasks. The paper describes a complex neural architecture that integrates visual information, camera parameters, egocentric velocities, and a differentiable 2D map canvas. This structure is trained end-to-end with A2C in the VizDoom environment. The strong inductive priors captured by these geometric transformations is demonstrated to be effective on navigation-related tasks in the experiments in this environment. The reviewers found many strengths and a few weaknesses in this paper. One strength is that the paper pulls together many related ideas in the mapping literature and combines them in one integrated system. The reviewers liked the method's ability to leverage semantic reasoning and spatial computation. They liked the careful updating of the maps and the use of projective geometry. The reviewers were less convinced of the generality of this method. The lack of realism in these simulated environments left the reviewers unconvinced that the benefits observed from using projective geometry in this setting will continue to hold in more realistic environments. The use of fixed geometric transformations with RGBD inputs instead of learned transformations also makes this approach less general than a system that could handle RGB inputs. Finally, the reviewers noted that the contributions of this paper are not well aligned with the paper's claims. This paper is not yet ready for publication as the paper's claims and experiments were not sufficiently convincing to the reviewers. """
1942,"""Superbloom: Bloom filter meets Transformer""","['Bloom filter', 'Transformer', 'word pieces', 'contextual embeddings']","""We extend the idea of word pieces in natural language models to machine learning tasks on opaque ids. This is achieved by applying hash functions to map each id to multiple hash tokens in a much smaller space, similarly to a Bloom filter. We show that by applying a multi-layer Transformer to these Bloom filter digests, we are able to obtain models with high accuracy. They outperform models of a similar size without hashing and, to a large degree, models of a much larger size trained using sampled softmax with the same computational budget. Our key observation is that it is important to use a multi-layer Transformer for Bloom filter digests to remove ambiguity in the hashed input. We believe this provides an alternative method to solving problems with large vocabulary size.""","""This paper presents to integrate the codes based on multiple hashing functions with Transformer networks to reduce vocabulary sizes in input and output spaces. Compared to non-hashed models, it enables training more complex and powerful models with the same number of overall parameters, thus leads to better performance. Although the technical contribution is limited considering hash-based approach itself is rather well-known and straightforward, all reviewers agree that some findings in the experiments are interesting. On the cons side, two reviewers were concerned about unclear presentation regarding the details of the method. More importantly, the proposed method is only evaluated on non-standard tasks without comparison to other previous methods. Considering that the main contribution of the paper is in empirical side, I agree it is necessary to evaluate the method on more standard benchmarking tasks in NLP where there should be many other state-of-the-art methods of model compression. For these reasons, Id like to recommend rejection. """
1943,"""AN EFFICIENT HOMOTOPY TRAINING ALGORITHM FOR NEURAL NETWORKS""","['Homotopy training algorithm', 'Convergence analysis', 'Neural networks']","""We present a Homotopy Training Algorithm (HTA) to solve optimization problems arising from neural networks. The HTA starts with several decoupled systems with low dimensional structure and tracks the solution to the high dimensional coupled system. The decoupled systems are easy to solve due to the low dimensionality but can be connected to the original system via a continuous homotopy path guided by the HTA. We have proved the convergence of HTA for the non-convex case and existence of the homotopy solution path for the convex case. The HTA has provided a better accuracy on several examples including VGG models on CIFAR-10. Moreover, the HTA would be combined with the dropout technique to provide an alternative way to train the neural networks.""","""The work proposes to learn neural networks using a homotopy-based continuation method. Reviewers found the idea interesting, but the manuscript poorly written, and lacking in experimental results. With no response from the authors, I recommend rejecting the paper."""
1944,"""RaPP: Novelty Detection with Reconstruction along Projection Pathway""","['Novelty Detection', 'Anomaly Detection', 'Outlier Detection', 'Semi-supervised Learning']","""We propose RaPP, a new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder. Precisely, RaPP compares input and its autoencoder reconstruction not only in the input space but also in the hidden spaces. We show that if we feed a reconstructed input to the same autoencoder again, its activated values in a hidden space are equivalent to the corresponding reconstruction in that hidden space given the original input. In order to aggregate the hidden space activation values, we propose two metrics, which enhance the novelty detection performance. Through extensive experiments using diverse datasets, we validate that RaPP improves novelty detection performances of autoencoder-based approaches. Besides, we show that RaPP outperforms recent novelty detection methods evaluated on popular benchmarks. ""","""The paper proposes to extend the autoencoder loss in a deep generative model to include per-latent-layer loss terms. Two variants are proposed: SAP (simple aggregation along pathway) and NAP (normalized aggregation along pathway). SAP is simply the sum of the squared norm, while NAP performs decorrelation and normalization of the magnitude. This was viewed as novel by the reviewers, and the experiments supported the proposed approach. In the post rebuttal phase, the inclusion of an ablation study has led to an upgrade in the reviewer recommendation. As a result, there was a unanimous opinion that the paper is suitable for publication at ICLR."""
1945,"""Duration-of-Stay Storage Assignment under Uncertainty""","['Storage Assignment', 'Deep Learning', 'Duration-of-Stay', 'Application', 'Natural Language Processing', 'Parallel Network']","""Storage assignment, the act of choosing what goods are placed in what locations in a warehouse, is a central problem of supply chain logistics. Past literature has shown that the optimal method to assign pallets is to arrange them in increasing duration of stay in the warehouse (the Duration-of-Stay, or DoS, method), but the methodology requires perfect prior knowledge of DoS for each pallet, which is unknown and uncertain under realistic conditions. Attempts to predict DoS have largely been unfruitful due to the multi-valuedness nature (every shipment contains multiple identical pallets with different DoS) and data sparsity induced by lack of matching historical conditions. In this paper, we introduce a new framework for storage assignment that provides a solution to the DoS prediction problem through a distributional reformulation and a novel neural network, ParallelNet. Through collaboration with a world-leading cold storage company, we show that the system is able to predict DoS with a MAPE of 29%, a decrease of ~30% compared to a CNN-LSTM model, and suffers less performance decay into the future. The framework is then integrated into a first-of-its-kind Storage Assignment system, which is being deployed in warehouses across United States, with initial results showing up to 21% in labor savings. We also release the first publicly available set of warehousing records to facilitate research into this central problem.""","""Thanks to the authors for the submission and the active discussion. The paper applies deep learning to the duration-of-stay estimation problem in the warehouse storage application. The authors provide problem formulation and describe the pipeline of their solutions, including datasets preparation and loss functions design. The reviewers agree that this is a good application paper that showcases how deep learning can be useful for a real-world problem. The release of the dataset can also be a nice contribution. A major debate during the discussion is whether this paper is in scope of ICLR given that it is mostly a straightforward application existing techniques. After several rounds of discussion, reviewers think that this should fit under the category ""applications in vision, ... , computational biology, and others."" Overall, this paper can be a good example of applying deep learning to real-world problems. """
1946,"""Topic Models with Survival Supervision: Archetypal Analysis and Neural Approaches""",[],"""We introduce two approaches to topic modeling supervised by survival analysis. Both approaches predict time-to-event outcomes while simultaneously learning topics over features that help prediction. The high-level idea is to represent each data point as a distribution over topics using some underlying topic model. Then each data point's distribution over topics is fed as input to a survival model. The topic and survival models are jointly learned. The two approaches we propose differ in the generality of topic models they can learn. The first approach finds topics via archetypal analysis, a nonnegative matrix factorization method that optimizes over a wide class of topic models encompassing latent Dirichlet allocation (LDA), correlated topic models, and topic models based on the ``anchor word'' assumption; the resulting survival-supervised variant solves an alternating minimization problem. Our second approach builds on recent work that approximates LDA in a neural net framework. We add a survival loss layer to this neural net to form an approximation to survival-supervised LDA. Both of our approaches can be combined with a variety of survival models. We demonstrate our approach on two survival datasets, showing that survival-supervised topic models can achieve competitive time-to-event prediction accuracy while outputting clinically interpretable topics.""","""The paper proposes two approaches to topic modeling supervised by survival analysis. The reviewers find some problems in novelty, algorithm and experiments, which is not ready for publish."""
1947,"""Scheduling the Learning Rate Via Hypergradients: New Insights and a New Algorithm""","['automl', 'hyperparameter optimization', 'learning rate', 'deep learning']","""We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization. This allows us to explicitly search for schedules that achieve good generalization. We describe the structure of the gradient of a validation error w.r.t. the learning rates, the hypergradient, and based on this we introduce a novel online algorithm. Our method adaptively interpolates between two recently proposed techniques (Franceschi et al., 2017; Baydin et al.,2018), featuring increased stability and faster convergence. We show empirically that the proposed technique compares favorably with baselines and related methodsin terms of final test accuracy.""","""First, I'd like to apologize once again for failing to secure a third reviewer for this paper. To compensate, I checked the paper more thoroughly than standard. The area of online adaptation of the learning rate is of great importance and I appreciate the authors' effort in that direction. The authors carefully abundantly cite the research on gradient-based hyperparameter optimization but I would have appreciated to also see past works on stochastic line search (for instance ""A stochastic line-search method with convergence rate"") or statistical methods (""Using Statistics to Automate Stochastic Optimization""). The issue with these methods is that, despite usually very positive claims in the paper, they are not that competitive against a carefully tuned fixed schedule and end up not being used in practice. Hence, it is critical to develop a convincing experimental section to assuage doubts. Unfortunately, the experimental section of this work is a bit lacking, as pointed by both reviewers. I would like to comment on two points specifically: - First, no plot uses wall-clock time as the x-axis. Since the authors state that it can be up to 4 times as slow per iteration, the gains compared to a carefully tuned schedule are unclear. - Second, the use of a single (albeit two variants) dataset also leads to skepticism. Datasets have vastly different optimization properties and, by not using a wide range of them, one can miss the true sensitivity of the proposed algorithm. While I do not think that the paper is ready for publication, I feel like there is a clear path to an improved version that could be submitted to a later conference."""
1948,"""On Incorporating Semantic Prior Knowlegde in Deep Learning Through Embedding-Space Constraints""","['regularizers', 'vision', 'language', 'vqa', 'visual question answering']","""The knowledge that humans hold about a problem often extends far beyond a set of training data and output labels. While the success of deep learning mostly relies on supervised training, important properties cannot be inferred efficiently from end-to-end annotations alone, for example causal relations or domain-specific invariances. We present a general technique to supplement supervised training with prior knowledge expressed as relations between training instances. We illustrate the method on the task of visual question answering to exploit various auxiliary annotations, including relations of equivalence and of logical entailment between questions. Existing methods to use these annotations, including auxiliary losses and data augmentation, cannot guarantee the strict inclusion of these relations into the model since they require a careful balancing against the end-to-end objective. Our method uses these relations to shape the embedding space of the model, and treats them as strict constraints on its learned representations. %The resulting model encodes relations that better generalize across instances. In the context of VQA, this approach brings significant improvements in accuracy and robustness, in particular over the common practice of incorporating the constraints as a soft regularizer. We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently from the amount of supervised data used. It demonstrates the value of an additional training signal that is otherwise difficult to extract from end-to-end annotations alone.""","""The paper proposes a technique for incorporating prior knowledge as relations between training instances. The reviewers had a mixed set of concerns, with one common one being an insufficient comparison with / discussion of related work. Some reviewers also found the clarity lacking, but were satisfied with the revision. One reviewer found the claim of the approach being general but only tested and valid for the VQA dataset problematic. Following the discussion, I recommend rejection at this time, but encourage the authors to take the feedback into account and resubmit to another venue."""
1949,"""Meta-Learning without Memorization""","['meta-learning', 'memorization', 'regularization', 'overfitting', 'mutually-exclusive']","""The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods. Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks. However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once. For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels. If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task identifying information from the inputs. In some domains, this makes meta-learning entirely inapplicable. In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation. This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input. By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks. We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult. Our approach substantially outperforms standard meta-learning algorithms in these settings.""","""The paper introduces the concept of overfitting in meta learning and proposes some solutions to address this problem. Overall, this is a good paper. It would be good if the authors could relate this work to meta learning approaches, which are based on hierarchical (Bayesian) modeling for learning a task embedding. [1] Hausman et al. (ICLR 2018): Learning an Embedding Space for Transferable Robot Skills pseudo-url [2] Saemundsson et al. (UAI 2018): Meta Reinforcement Learning with Latent Variable Gaussian Processes pseudo-url """
1950,"""Learning Structured Communication for Multi-agent Reinforcement Learning""","['Learning to communicate', 'Multi-agent reinforcement learning', 'Hierarchical communication network']","""Learning to cooperate is crucial for many practical large-scale multi-agent applications. In this work, we consider an important collaborative task, in which agents learn to efciently communicate with each other under a multi-agent reinforcement learning (MARL) setting. Despite the fact that there has been a number of existing works along this line, achieving global cooperation at scale is still challenging. In particular, most of the existing algorithms suffer from issues such as scalability and high communication complexity, in the sense that when the agent population is large, it can be difcult to extract effective information for high-performance MARL. In contrast, the proposed algorithmic framework, termed Learning Structured Communication (LSC), is not only scalable but also communication high-qualitative (learning efcient). The key idea is to allow the agents to dynamically learn a hierarchical communication structure, while under such a structure the graph neural network (GNN) is used to efciently extract useful information to be exchanged between the neighboring agents. A number of new techniques are proposed to tightly integrate the communication structure learning, GNN optimization and MARL tasks. Extensive experiments are performed to demonstrate that, the proposed LSC framework enjoys high communication efciency, scalability and global cooperation capability.""","""The paper focuses on large-scale multi-agent reinforcement learning and proposes Learning Structured Communication (LSC) to deal issues of scale and learn sample efficiently. Reviewers are positive about the presented ideas, but note remaining limitations. In particular, the empirical validation does not lead to sufficiently novel insights, and additional analysis is needed to round out the paper."""
1951,"""Adaptive Structural Fingerprints for Graph Attention Networks""","['Graph attention networks', 'graph neural networks', 'node classification']","""Graph attention network (GAT) is a promising framework to perform convolution and massage passing on graphs. Yet, how to fully exploit rich structural information in the attention mechanism remains a challenge. In the current version, GAT calculates attention scores mainly using node features and among one-hop neighbors, while increasing the attention range to higher-order neighbors can negatively affect its performance, reflecting the over-smoothing risk of GAT (or graph neural networks in general), and the ineffectiveness in exploiting graph structural details. In this paper, we propose an ``""adaptive structural fingerprint"" (ADSF) model to fully exploit graph topological details in graph attention network. The key idea is to contextualize each node with a weighted, learnable receptive field encoding rich and diverse local graph structures. By doing this, structural interactions between the nodes can be inferred accurately, thus significantly improving subsequent attention layer as well as the convergence of learning. Furthermore, our model provides a useful platform for different subspaces of node features and various scales of graph structures to ``cross-talk'' with each other through the learning of multi-head attention, being particularly useful in handling complex real-world data. Empirical results demonstrate the power of our approach in exploiting rich structural information in GAT and in alleviating the intrinsic oversmoothing problem in graph neural networks.""","""This paper is consistently supported by all three reviewers during initial review and discussions. Thus an accept is recommended."""
1952,"""Semi-Supervised Few-Shot Learning with a Controlled Degree of Task-Adaptive Conditioning""","['few-shot learning', 'meta-learning', 'semi-supervised learning', 'task-adaptive clustering', 'task-adaptive projection space']","""Few-shot learning aims to handle previously unseen tasks using only a small amount of new training data. In preparing (or meta-training) a few-shot learner, however, massive labeled data are necessary. In the real world, unfortunately, labeled data are expensive and/or scarce. In this work, we propose a few-shot learner that can work well under the semi-supervised setting where a large portion of training data is unlabeled. Our method employs explicit task-conditioning in which unlabeled sample clustering for the current task takes place in a new projection space different from the embedding feature space. The conditioned clustering space is linearly constructed so as to quickly close the gap between the class centroids for the current task and the independent per-class reference vectors meta-trained across tasks. In a more general setting, our method introduces a concept of controlling the degree of task-conditioning for meta-learning: the amount of task-conditioning varies with the number of repetitive updates for the clustering space. During each update, the soft labels of the unlabeled samples estimated in the conditioned clustering space are used to update the class averages in the original embedded space, which in turn are used to reconstruct the clustering space. Extensive simulation results based on the miniImageNet and tieredImageNet datasets show state-of-the-art semi-supervised few-shot classification performance of the proposed method. Simulation results also indicate that the proposed task-adaptive clustering shows graceful degradation with a growing number of distractor samples, i.e., unlabeled samples coming from outside the candidate classes.""","""This paper proposes an approach to semi-supervised few-shot learning. In a discussion after the rebuttal phase, the reviewers were somewhat split on this paper, appreciating the advantages of the algorithm such as increased robustness to distractors and the ability to adapt with additional iterations, but were concerned that the contributions over Ren et al were not significant. Overall, the contributions of this paper don't quite warrant publication at ICLR."""
1953,"""Multiagent Reinforcement Learning in Games with an Iterated Dominance Solution""","['multiagent', 'reinforcement learning', 'iterated dominance', 'mechanism design', 'Nash equilibrium']","""Multiagent reinforcement learning (MARL) attempts to optimize policies of intelligent agents interacting in the same environment. However, it may fail to converge to a Nash equilibrium in some games. We study independent MARL under the more demanding solution concept of iterated elimination of strictly dominated strategies. In dominance solvable games, if players iteratively eliminate strictly dominated strategies until no further strategies can be eliminated, we obtain a single strategy profile. We show that convergence to the iterated dominance solution is guaranteed for several reinforcement learning algorithms (for multiple independent learners). We illustrate an application of our results by studying mechanism design for principal-agent problems, where a principal wishes to incentivize agents to exert costly effort in a joint project when it can only observe whether the project succeeded, but not whether agents actually exerted effort. We show that MARL converges to the desired outcome if the rewards are designed so that exerting effort is the iterated dominance solution, but fails if it is merely a Nash equilibrium.""","""The paper proofs that reinforcement learning (using two different algorithms) converge to iterative dominance solutions for a class of multi-player games (dominance solvable games). There was a lively discussion around the paper. However, two of the reviewers remain unconvinced of the novelty of the approach, pointing to [1] and [2], with [1] only pertaining to supermodular games. The exact contribution over such existing results is currently not addressed in the manuscript. There were also concerns about the scaling and applicability of the results, as dominance solvable games are limited. [1] pseudo-url [2] Friedman, James W., and Claudio Mezzetti. ""Learning in games by random sampling."" Journal of Economic Theory 98.1 (2001): 55-84."""
1954,"""Stable Rank Normalization for Improved Generalization in Neural Networks and GANs""","['Generelization', 'regularization', 'empirical lipschitz']","""Exciting new work on generalization bounds for neural networks (NN) given by Bartlett et al. (2017); Neyshabur et al. (2018) closely depend on two parameter- dependant quantities: the Lipschitz constant upper bound and the stable rank (a softer version of rank). Even though these bounds typically have minimal practical utility, they facilitate questions on whether controlling such quantities together could improve the generalization behaviour of NNs in practice. To this end, we propose stable rank normalization (SRN), a novel, provably optimal, and computationally efficient weight-normalization scheme which minimizes the stable rank of a linear operator. Surprisingly we find that SRN, despite being non-convex, can be shown to have a unique optimal solution. We provide extensive analyses across a wide variety of NNs (DenseNet, WideResNet, ResNet, Alexnet, VGG), where applying SRN to their linear layers leads to improved classification accuracy, while simultaneously showing improvements in genealization, evaluated empirically using(a) shattering experiments (Zhang et al., 2016); and (b) three measures of sample complexity by Bartlett et al. (2017), Neyshabur et al. (2018), & Wei & Ma. Additionally, we show that, when applied to the discriminator of GANs, it improves Inception, FID, and Neural divergence scores, while learning mappings with low empirical Lipschitz constant.""","""The authors propose stable rank normalization, which minimizes the stable rank of a linear operator and apply this to neural network training. The authors present techniques for performing the normalization efficiently and evaluate it empirically in a range of situations. The only issues raised by reviewers related to the empirical evaluation. The authors addressed these in their revisions. """
1955,"""Certified Defenses for Adversarial Patches""","['certified defenses', 'patch attack', 'adversarial robustness', 'sparse defense']","""Adversarial patch attacks are among one of the most practical threat models against real-world computer vision systems. This paper studies certified and empirical defenses against patch attacks. We begin with a set of experiments showing that most existing defenses, which work by pre-processing input images to mitigate adversarial patches, are easily broken by simple white-box adversaries. Motivated by this finding, we propose the first certified defense against patch attacks, and propose faster methods for its training. Furthermore, we experiment with different patch shapes for testing, obtaining surprisingly good robustness transfer across shapes, and present preliminary results on certified defense against sparse attacks. Our complete implementation can be found on: pseudo-url.""","""This paper presents a certified defense method for adversarial patch attacks. The proposed approach provides certifiable guarantees to the attacks, and the reviewers particularly find its experiments results interesting and promising. The added new experiments during the rebuttal phase strengthened the paper. There still is a remaining concern that is novelty is limited as this paper could be viewed as the application of the original IBP to patch attacks, but the reviewers believe in that its empirical results are important."""
1956,"""Data-Driven Approach to Encoding and Decoding 3-D Crystal Structures""",[],"""Generative models have achieved impressive results in many domains including image and text generation. In the natural sciences, generative models have lead to rapid progress in automated drug discovery. Many of the current methods focus on either 1-D or 2-D representations of typically small, drug-like molecules. However, many molecules require 3-D descriptors and exceed the chemical complexity of commonly used dataset. We present a method to encode and decode the position of atoms in 3-D molecules along with a dataset of nearly 50,000 stable crystal unit cells that vary from containing 1 to over 100 atoms. We construct a smooth and continuous 3-D density representation of each crystal based on the positions of different atoms. Two different neural networks were trained on a dataset of over 120,000 three-dimensional samples of single and repeating crystal structures. The first, an Encoder-Decoder pair, constructs a compressed latent space representation of each molecule and then decodes this description into an accurate reconstruction of the input. The second network segments the resulting output into atoms and assigns each atom an atomic number. By generating compressed, continuous latent spaces representations of molecules we are able to decode random samples, interpolate between two molecules, and alter known molecules.""","""This paper presents an encoder-decoder based approach to construct a compressed latent space representation of each molecule. Then a second neural network segments the output and assigns an atomic number. Unlike previous works using 1D or 2D representations, the proposed method focuses on the 3D representations. The reviewers have several major concerns. Firstly, the novelty of the paper seems to be limited as the proposed method mainly use the existing techniques. Secondly, there is no clear baseline to compare with. Finally, there is no clear quantitative results to measure the proposed method. The rebuttal did not well address these problems. Overall, this paper did not meet the standard of ICLR and I choose to reject the paper. """
1957,"""Universal approximations of permutation invariant/equivariant functions by deep neural networks""","['finite group', 'invariant', 'equivariant', 'neural networks']","""In this paper, we develop a theory about the relationship between pseudo-formula -invariant/equivariant functions and deep neural networks for finite group pseudo-formula . Especially, for a given pseudo-formula -invariant/equivariant function, we construct its universal approximator by deep neural network whose layers equip pseudo-formula -actions and each affine transformations are pseudo-formula -equivariant/invariant. Due to representation theory, we can show that this approximator has exponentially fewer free parameters than usual models. ""","""The article studies universal approximation for the restricted class of equivariant functions, which can have a smaller number of free parameters. The reviewers found the topic important and also that the approach has merits. However, they pointed out that the article is very hard to read and that more intuitions, a clearer comparison with existing work, and connections to practice would be important. The responses did clarify some of the differences to previous works. However, there was no revision addressing the main concerns. """
1958,"""Attacking Graph Convolutional Networks via Rewiring""","['Graph Neural Networks', 'Rewiring', 'Adversarial Attacks']","""Graph Neural Networks (GNNs) have boosted the performance of many graph related tasks such as node classification and graph classification. Recent researches show that graph neural networks are vulnerable to adversarial attacks, which deliberately add carefully created unnoticeable perturbation to the graph structure. The perturbation is usually created by adding/deleting a few edges, which might be noticeable even when the number of edges modified is small. In this paper, we propose a graph rewiring operation which affects the graph in a less noticeable way compared to adding/deleting edges. We then use reinforcement learning to learn the attack strategy based on the proposed rewiring operation. Experiments on real world graphs demonstrate the effectiveness of the proposed framework. To understand the proposed framework, we further analyze how its generated perturbation to the graph structure affects the output of the target model.""","""This paper proposes a method for attacking graph convolutional networks, where a graph rewiring operation was introduced that affects the graph in a less noticeable way compared to adding/deleting edges. Reinforcement learning is applied to learn the attack strategy based on the proposed rewiring operation. The paper should be improved by acknowledging/comparing with previous work in a more proper way. In particular, I view the major innovation is on the rewiring operation and its analysis. The reinforcement learning formulation is similar to Dai et al (2018). This connection should be made more clear in the technical part. One issue that needs to be discussed on is that if you directly consider the triples as actions, the space will be huge. Do you apply some hierarchical treatment as suggested by Dai et al. (2018)? The review comments should be considered to further improve too."""
1959,"""Long-term planning, short-term adjustments""","['Deep Reinforcement Learning', 'Control']","""Deep Reinforcement Learning (RL) algorithms can learn complex policies to optimize agent operation over time. RL algorithms have shown promising results in solving complicated problems in recent years. However, their application on real-world physical systems remains limited. Despite the advancements in RL algorithms, the industries often prefer traditional control strategies. Traditional methods are simple, computationally efficient and easy to adjust. In this paper, we propose a new Q-learning algorithm for continuous action space, which can bridge the control and RL algorithms and bring us the best of both worlds. Our method can learn complex policies to achieve long-term goals and at the same time it can be easily adjusted to address short-term requirements without retraining. We achieve this by modeling both short-term and long-term prediction models. The short-term prediction model represents the estimation of the system dynamic while the long-term prediction model represents the Q-value. The case studies demonstrate that our proposed method can achieve short-term and long-term goals without complex reward functions.""","""This paper proposes a reinforcement learning algorithm for continuous action domains that combines a short-horizon model-based objective and a long-horizon value estimate. The stated benefits to this approach are the ability to modify the model-based objective without extensive retraining. This model-based objective can also capture custom constraints and implements a linear dynamics model that is used in conventional control theory. The proposed method was evaluated on two domains (the mountain car domain and a custom crane domain) and compared to a continuous action space method (DDPG). This discussion of this paper highlighted both strengths and weaknesses. The reviewers said the presentation was clear. The reviewers also appreciated the relevance of the problem. The primary weakness was the evaluation of the method. One repeated concern from the reviewers was having only one standard domain for evaluation (mountain car). Another concern was the absence of other model-based algorithms, which was addressed by the author response. This paper is not yet ready to be published, despite its possible benefits, due to the lack of evidence for this method on more continuous action problems. """
1960,"""Putting Machine Translation in Context with the Noisy Channel Model""","['machine translation', 'context-aware machine translation', 'bayes rule']","""We show that Bayes' rule provides a compelling mechanism for controlling unconditional document language models, using the long-standing challenge of effectively leveraging document context in machine translation. In our formulation, we estimate the probability of a candidate translation as the product of the unconditional probability of the candidate output document and the ``reverse translation probability'' of translating the candidate output back into the input source language document---the so-called ``noisy channel'' decomposition. A particular advantage of our model is that it requires only parallel sentences to train, rather than parallel documents, which are not always available. Using a new beam search reranking approximation to solve the decoding problem, we find that document language models outperform language models that assume independence between sentences, and that using either a document or sentence language model outperform comparable models that directly estimate the translation probability. We obtain the best-published results on the NIST Chinese--English translation task, a standard task for evaluating document translation. Our model also outperforms the benchmark Transformer model by approximately 2.5 BLEU on the WMT19 Chinese--English translation task.""","""The authors propose using a noisy channel formulation which allows them to combine a sentence level target-source translation model with a language model trained over target side document-level information. They use reranking of a 50-best list generated by a standard Transformer model for forward translation and show reasonably strong results. The reviewers were concerned about the efficiency of this approach and the limited novelty as compared to the sentence-level noisy channel research Yu et al. 2017. The authors responded in depth, adding results with another baseline which includes backtranslated data. I feel that although this paper is interesting, it is not compelling enough for inclusion in ICLR. """
1961,"""GraphMix: Regularized Training of Graph Neural Networks for Semi-Supervised Learning""","['Regularization', 'Graph Neural Networks', 'Mixup', 'Manifold Mixup', 'Semi-supervised Object Classification over graph Data']","""We present GraphMix, a regularization technique for Graph Neural Network based semi-supervised object classification, leveraging the recent advances in the regularization of classical deep neural networks. Specifically, we propose a unified approach in which we train a fully-connected network jointly with the graph neural network via parameter sharing, interpolation-based regularization and self-predicted-targets. Our proposed method is architecture agnostic in the sense that it can be applied to any variant of graph neural networks which applies a parametric transformation to the features of the graph nodes. Despite its simplicity, with GraphMix we can consistently improve results and achieve or closely match state-of-the-art performance using even simpler architectures such as Graph Convolutional Networks, across three established graph benchmarks: Cora, Citeseer and Pubmed citation network datasets, as well as three newly proposed datasets : Cora-Full, Co-author-CS and Co-author-Physics.""","""The authors integrate an interpolation based regularization to develop a graph neural network for semi-supervised learning. While reviewers enjoyed the paper, and the authors have provided a thoughtful response, there were remaining questions about clarity of presentation and novelty remaining after the rebuttal period. The authors are encouraged to continue with this work, accounting for reviewer comments in future revisions."""
1962,"""You Only Train Once: Loss-Conditional Training of Deep Networks""","['deep learning', 'image generation']","""In many machine learning problems, loss functions are weighted sums of several terms. A typical approach to dealing with these is to train multiple separate models with different selections of weights and then either choose the best one according to some criterion or keep multiple models if it is desirable to maintain a diverse set of solutions. This is inefficient both at training and at inference time. We propose a method that allows replacing multiple models trained on one loss function each by a single model trained on a distribution of losses. At test time a model trained this way can be conditioned to generate outputs corresponding to any loss from the training distribution of losses. We demonstrate this approach on three tasks with parametrized losses: beta-VAE, learned image compression, and fast style transfer.""","""The paper proposes and validates a simple idea of training a neural network for a parametric family of losses, using a popular AdaIN mechanism. Following the rebuttal and the revision, all three reviewers recommend acceptance (though weakly). There is a valid concern about the overlap with an ICLR19-workshop paper with essentially the same idea, however the submission is broader in scope and validates the idea on several applications."""
1963,"""DeepV2D: Video to Depth with Differentiable Structure from Motion""","['Structure-from-Motion', 'Video to Depth', 'Dense Depth Estimation']","""We propose DeepV2D, an end-to-end deep learning architecture for predicting depth from video. DeepV2D combines the representation ability of neural networks with the geometric principles governing image formation. We compose a collection of classical geometric algorithms, which are converted into trainable modules and combined into an end-to-end differentiable architecture. DeepV2D interleaves two stages: motion estimation and depth estimation. During inference, motion and depth estimation are alternated and converge to accurate depth. ""","""This work proposes a CNN architecture for joint depth and camera motion estimation from videos. The paper presents a differentiable formulation of the problem to allow its end-to-end learning, and the reviewers unanimously find the proposed approach reasonable and agree that this is a solid paper. Some of the reviewers find the method itself to be too mechanical, but they all agree that this is a well-engineered solution."""
1964,"""TinyBERT: Distilling BERT for Natural Language Understanding""","['BERT Compression', 'Transformer Distillation', 'TinyBERT']","""Language model pre-training, such as BERT, has significantly improved the performances of many natural language processing tasks. However, the pre-trained language models are usually computationally expensive and memory intensive, so it is difficult to effectively execute them on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we firstly propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of the Transformer-based models. By leveraging this new KD method, the plenty of knowledge encoded in a large teacher BERT can be well transferred to a small student TinyBERT. Moreover, we introduce a new two-stage learning framework for TinyBERT, which performs Transformer distillation at both the pre-training and task-specific learning stages. This framework ensures that TinyBERT can capture the general domain as well as the task-specific knowledge in BERT. TinyBERT is empirically effective and achieves comparable results with BERT on GLUE benchmark, while being 7.5x smaller and 9.4x faster on inference. TinyBERT is also significantly better than state-of-the-art baselines on BERT distillation, with only 28% parameters and 31% inference time of them. ""","""This paper proposes a new distillation-based method for using large pretrained models like BERT to produce much *smaller* fine-tuned target-task models. This paper is low-borderline: It has merit and meets our basic standards, but owing to capacity limitations we had to give preference to papers we see as having a higher potential impact. Reviewers had some concerns about experimental design, but those seem to have been fully resolved after discussion. Reviewers were not convinced, even after some discussion, that the method and results were sufficiently novel and effective to have a substantial impact on the state of practice in this area."""
1965,"""YaoGAN: Learning Worst-case Competitive Algorithms from Self-generated Inputs""",[],"""We tackle the challenge of using machine learning to find algorithms with strong worst-case guarantees for online combinatorial optimization problems. Whereas the previous approach along this direction (Kong et al., 2018) relies on significant domain expertise to provide hard distributions over input instances at training, we ask whether this can be accomplished from first principles, i.e., without any human-provided data beyond specifying the objective of the optimization problem. To answer this question, we draw insights from classic results in game theory, analysis of algorithms, and online learning to introduce a novel framework. At the high level, similar to a generative adversarial network (GAN), our framework has two components whose respective goals are to learn the optimal algorithm as well as a set of input instances that captures the essential difficulty of the given optimization problem. The two components are trained against each other and evolved simultaneously. We test our ideas on the ski rental problem and the fractional AdWords problem. For these well-studied problems, our preliminary results demonstrate that the framework is capable of finding algorithms as well as difficult input instances that are consistent with known optimal results. We believe our new framework points to a promising direction which can facilitate the research of algorithm design by leveraging ML to improve the state of the art both in theory and in practice. ""","""The authors propose an intriguing way to designing competitive online algorithms. However, the state of the paper and the provided evidence of the success of the proposed methodology is too preliminary to merit acceptance."""
1966,"""COMBINED FLEXIBLE ACTIVATION FUNCTIONS FOR DEEP NEURAL NETWORKS""","['Flexible', 'Activation Functions', 'Deep Learning', 'Regularization']","""Activation in deep neural networks is fundamental to achieving non-linear mappings. Traditional studies mainly focus on finding fixed activations for a particular set of learning tasks or model architectures. The research on flexible activation is quite limited in both designing philosophy and application scenarios. In this study, we propose a general combined form of flexible activation functions as well as three principles of choosing flexible activation component. Based on this, we develop two novel flexible activation functions that can be implemented in LSTM cells and auto-encoder layers. Also two new regularisation terms based on assumptions as prior knowledge are proposed. We find that LSTM and auto-encoder models with proposed flexible activations provides significant improvements on time series forecasting and image compressing tasks, while layer-wise regularization can improve the performance of CNN (LeNet-5) models with RPeLu activation in image classification tasks.""","""Main content: Proposes combining flexible activation functions Discussion: reviewer 1: main issue is unfamiliar with stock dataset, and CIFAR dataset has a bad baseline. reviewer 2: main issue is around baselines and writing. reviewer 3: main issue is paper does not compare with NAS. Recommendation: All 3 reviewers vote reject. Paper can be improved with stronger baselines and experiments. I recommend Reject."""
1967,"""LIA: Latently Invertible Autoencoder with Adversarial Learning""","['variational autoencoder', 'generative adversarial network']","""Deep generative models such as Variational AutoEncoder (VAE) and Generative Adversarial Network (GAN) play an increasingly important role in machine learning and computer vision. However, there are two fundamental issues hindering their real-world applications: the difficulty of conducting variational inference in VAE and the functional absence of encoding real-world samples in GAN. In this paper, we propose a novel algorithm named Latently Invertible Autoencoder (LIA) to address the above two issues in one framework. An invertible network and its inverse mapping are symmetrically embedded in the latent space of VAE. Thus the partial encoder first transforms the input into feature vectors and then the distribution of these feature vectors is reshaped to fit a prior by the invertible network. The decoder proceeds in the reverse order of the encoder's composite mappings. A two-stage stochasticity-free training scheme is designed to train LIA via adversarial learning, in the sense that the decoder of LIA is first trained as a standard GAN with the invertible network and then the partial encoder is learned from an autoencoder by detaching the invertible network from LIA. Experiments conducted on the FFHQ face dataset and three LSUN datasets validate the effectiveness of LIA for inference and generation.""","""A nice idea: the latent prior is replaced by a GAN. A general agreement between all four reviewers to reject the submission, based on a not thorough enough description of the approach, and possibly not being novel."""
1968,"""A Learning-based Iterative Method for Solving Vehicle Routing Problems""","['vehicle routing', 'reinforcement learning', 'optimization', 'heuristics']","""This paper is concerned with solving combinatorial optimization problems, in particular, the capacitated vehicle routing problems (CVRP). Classical Operations Research (OR) algorithms such as LKH3 (Helsgaun, 2017) are extremely inefficient (e.g., 13 hours on CVRP of only size 100) and difficult to scale to larger-size problems. Machine learning based approaches have recently shown to be promising, partly because of their efficiency (once trained, they can perform solving within minutes or even seconds). However, there is still a considerable gap between the quality of a machine learned solution and what OR methods can offer (e.g., on CVRP-100, the best result of learned solutions is between 16.10-16.80, significantly worse than LKH3's 15.65). In this paper, we present learn to Improve (L2I), the first learning based approach for CVRP that is efficient in solving speed and at the same time outperforms OR methods. Starting with a random initial solution, L2I learns to iteratively refine the solution with an improvement operator, selected by a reinforcement learning based controller. The improvement operator is selected from a pool of powerful operators that are customized for routing problems. By combining the strengths of the two worlds, our approach achieves the new state-of-the-art results on CVRP, e.g., an average cost of 15.57 on CVRP-100.""",""" The paper proposed the use of a combination of RL-based iterative improvement operator to refine the solution progressively for the capacitated vehicle routing problem. It has been shown to outperform both classical non-learning based and SOTA learning based methods. The idea is novel and the results are impressive, the presentation is clear. Also the authors addressed the concern of lacking justification on larger tasks by including an appendix of additional experiments. """
1969,"""Neural Execution of Graph Algorithms""","['Graph Neural Networks', 'Graph Algorithms', 'Learning to Execute', 'Program Synthesis', 'Message Passing Neural Networks', 'Deep Learning']","""Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs. In almost all cases so far, however, they have been applied to directly recovering a final solution from raw inputs, without explicit guidance on how to structure their problem-solving. Here, instead, we focus on learning in the space of algorithms: we train several state-of-the-art GNN architectures to imitate individual steps of classical graph algorithms, parallel (breadth-first search, Bellman-Ford) as well as sequential (Prim's algorithm). As graph algorithms usually rely on making discrete decisions within neighbourhoods, we hypothesise that maximisation-based message passing neural networks are best-suited for such objectives, and validate this claim empirically. We also demonstrate how learning in the space of algorithms can yield new opportunities for positive transfer between tasks---showing how learning a shortest-path algorithm can be substantially improved when simultaneously learning a reachability algorithm.""","""It seems to be an interesting contribution to the area. I suggest acceptance."""
1970,"""Preventing Imitation Learning with Adversarial Policy Ensembles""","['Imitation Learning', 'Reinforcement Learning', 'Representation Learning']","""Imitation learning can reproduce policies by observing experts, which poses a problem regarding policy propriety. Policies, such as human, or policies on deployed robots, can all be cloned without consent from the owners. How can we protect our proprietary policies from cloning by an external observer? To answer this question we introduce a new reinforcement learning framework, where we train an ensemble of optimal policies, whose demonstrations are guaranteed to be useless for an external observer. We formulate this idea by a constrained optimization problem, where the objective is to improve proprietary policies, and at the same time deteriorate the virtual policy of an eventual external observer. We design a tractable algorithm to solve this new optimization problem by modifying the standard policy gradient algorithm. It appears such problem formulation admits plausible interpretations of confidentiality, adversarial behaviour, which enables a broader perspective of this work. We demonstrate explicitly the existence of such 'non-clonable' ensembles, providing a solution to the above optimization problem, which is calculated by our modified policy gradient algorithm. To our knowledge, this is the first work regarding the protection and privacy of policies in Reinforcement Learning.""","""Although the reviewers appreciated the novelty of this work, they unanimously recommended rejection. The current version of the paper exhibits weak presentation quality and lacks sufficient technical depth. The experimental evaluation was not found to be sufficiently convincing by any of the reviewers. The submitted comments should help the authors improve their paper."""
1971,"""Sliced Cramer Synaptic Consolidation for Preserving Deeply Learned Representations""","['selective plasticity', 'catastrophic forgetting', 'intransigence']","""Deep neural networks suffer from the inability to preserve the learned data representation (i.e., catastrophic forgetting) in domains where the input data distribution is non-stationary, and it changes during training. Various selective synaptic plasticity approaches have been recently proposed to preserve network parameters, which are crucial for previously learned tasks while learning new tasks. We explore such selective synaptic plasticity approaches through a unifying lens of memory replay and show the close relationship between methods like Elastic Weight Consolidation (EWC) and Memory-Aware-Synapses (MAS). We then propose a fundamentally different class of preservation methods that aim at preserving the distribution of internal neural representations for previous tasks while learning a new one. We propose the sliced Cram\'{e}r distance as a suitable choice for such preservation and evaluate our Sliced Cramer Preservation (SCP) algorithm through extensive empirical investigations on various network architectures in both supervised and unsupervised learning settings. We show that SCP consistently utilizes the learning capacity of the network better than online-EWC and MAS methods on various incremental learning tasks.""","""The paper addresses an important problem (preventing catastrophic forgetting in continual learning) through a novel approach based on the sliced Kramer distance. The paper provides a novel and interesting conceptual contribution and is well written. Experiments could have been more extensive but this is very nice work and deserves publication."""
1972,"""Learning to Infer User Interface Attributes from Images""",[],"""We present a new approach that helps developers automate the process of user interface implementation. Concretely, given an input image created by a designer (e.g, using a vector graphics editor), we learn to infer its implementation which when rendered (e.g., on the Android platform), looks visually the same as the input image. To achieve this, we take a black box rendering engine and a set of attributes it supports (e.g., colors, border radius, shadow or text properties), use it to generate a suitable synthetic training dataset, and then train specialized neural models to predict each of the attribute values. To improve pixel-level accuracy, we also use imitation learning to train a neural policy that refines the predicted attribute values by learning to compute the similarity of the original and rendered images in their attribute space, rather than based on the difference of pixel values. ""","""The majority of reviewers suggest rejection, pointing to concerns about design and novelty. Perhaps the most concerning part to me was the consistent lack of expertise in the applied area. This could be random bad luck draw of reviewers, but more likely the paper is not positioned well in the ICLR literature. This means that either it was submitted to the wrong venue, or that the exposition needs to be improved so that the paper is approachable by a larger part of the ICLR community. Since this is not currently true, I suggest that the authors work on a revision."""
1973,"""Rethinking the Hyperparameters for Fine-tuning""","['fine-tuning', 'hyperparameter search', 'transfer learning']","""Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyperparameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several common practices of setting hyperparameters for fine-tuning. Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. (1) While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter. We find that the value of momentum also affects fine-tuning performance and connect it with previous theoretical findings. (2) Optimal hyperparameters for fine-tuning, in particular, the effective learning rate, are not only dataset dependent but also sensitive to the similarity between the source domain and target domain. This is in contrast to hyperparameters for training from scratch. (3) Reference-based regularization that keeps models close to the initial model does not necessarily apply for ""dissimilar"" datasets. Our findings challenge common practices of fine-tuning and encourages deep learning practitioners to rethink the hyperparameters for fine-tuning.""","""This paper presents a guide for setting hyperparameters when fine-tuning from one domain to another. This is an important problem as many practical deep learning applications repurpose an existing model to a new setting through fine-tuning. All reviewers were positive saying that this work provides new experimental insights, especially related to setting momentum parameters. Though other works may have previously discussed the effect of momentum during fine-tuning, this work presented new experiments which contributes to the overall understanding. Reviewer 3 had some concern about the generalization of the findings to other backbone architectures, but this concern was resolved during the discussion phase. The authors have provided detailed clarifications during the rebuttal and we encourage them to incorporate any remaining discussion or any new clarifications into the final draft. """
1974,"""Transfer Alignment Network for Double Blind Unsupervised Domain Adaptation""","['unsupervised domain adaptation', 'double blind domain adaptation']","""How can we transfer knowledge from a source domain to a target domain when each side cannot observe the data in the other side? The recent state-of-the-art deep architectures show significant performance in classification tasks which highly depend on a large number of training data. In order to resolve the dearth of abundant target labeled data, transfer learning and unsupervised learning leverage data from different sources and unlabeled data as training data, respectively. However, in some practical settings, transferring source data to target domain is restricted due to a privacy policy. In this paper, we define the problem of unsupervised domain adaptation under double blind constraint, where either the source or the target domain cannot observe the data in the other domain, but data from both domains are used for training. We propose TAN (Transfer Alignment Network for Double Blind Domain Adaptation), an effective method for the problem by aligning source and target domain features. TAN maps the target feature into source feature space so that the classifier learned from the labeled data in the source domain is readily used in the target domain. Extensive experiments show that TAN 1) provides the state-of-the-art accuracy for double blind domain adaptation, and 2) outperforms baselines regardless of the proportion of target domain data in the training data. ""","""This paper tackles the problem of how to adapt a model from a source to a target domain when both data is not available simultaneously (even unlabeled) to a single learner. This is of relevance for certain privacy preserving applications where one setting would like to benefit from information learned in a related setting but due to various factors may not be willing to directly share data. The proposed solution is a transfer alignment network (TAN) which consists of two autoencoders (each trained independently on the source and the target) and an aligner which has the task of mapping the latent codes of one domain to the other. All three reviewers expressed concerns for this submission. Of greatest concern was the experimental setting. The datasets chosen were non-standard and there was no prior work to compare against directly so the results presented are difficult to contextualize. The authors have responded to this concern by specifying the existing domain adaptation benchmarks are more challenging and require more complex architectures to handle the more complex data manifolds. The fact that existing benchmark datasets may be more complex the the dataset explored in this work is a concern. The authors should take care to clarify whether their proposed solution may only be applicable to specific types of data. In addition, the authors claim to address a new problem setting and therefore cannot compare directly to existing work. One suggestion is if using new data, report performance of existing work under the standard setting to give readers some grounding for the privacy preserving setting. Another option would be to provide scaffold results in the standard UDA setting but with frozen feature spaces. Another option would be to ablate the choice of L2 loss for learning the transformer and instead train using an adversarial loss, L1 loss etc. 
There are many ways the authors could both explore a new problem statement and provide convincing experimental evidence for their solution. The AC encourages the authors to revise their manuscript, paying special attention to clarity and experimental details in order to further justify their proposed work. """
1975,"""Invertible generative models for inverse problems: mitigating representation error and dataset bias""","['Invertible generative models', 'inverse problems', 'generative prior', 'Glow', 'compressed sensing', 'denoising', 'inpainting.']","""Trained generative models have shown remarkable performance as priors for inverse problems in imaging. For example, Generative Adversarial Network priors permit recovery of test images from 5-10x fewer measurements than sparsity priors. Unfortunately, these models may be unable to represent any particular image because of architectural choices, mode collapse, and bias in the training dataset. In this paper, we demonstrate that invertible neural networks, which have zero representation error by design, can be effective natural signal priors at inverse problems such as denoising, compressive sensing, and inpainting. Our formulation is an empirical risk minimization that does not directly optimize the likelihood of images, as one would expect. Instead we optimize the likelihood of the latent representation of images as a proxy, as this is empirically easier. For compressive sensing, our formulation can yield higher accuracy than sparsity priors across almost all undersampling ratios. For the same accuracy on test images, they can use 10-20x fewer measurements. We demonstrate that invertible priors can yield better reconstructions than sparsity priors for images that have rare features of variation within the biased training set, including out-of-distribution natural images. ""","""This paper studies the empirical performance of invertible generative models for compressive sensing, denoising and in painting. One issue in using generative models in this area has been that they hit an error floor in reconstruction due to model collapse etc i.e. one can not achieve zero error in reconstruction. The reviewers raised some concerns about novelty of the approach and thoroughness of the empirical studies. The authors response suggests that they are not claiming novelty w.r.t. to the approach but rather their use in compressive techniques. My own understanding is that this error floor is a major problem and removing its effect is a good contribution even without any novelty in the techniques. However, I do agree that a more thorough empirical study would be more convincing. While I can not recommend acceptance given the scores I do think this paper has potential and recommend the authors to resubmit to a future venue after a through revision."""
1976,"""Growing Action Spaces""","['reinforcement learning', 'curriculum learning', 'multi-agent reinforcement learning']","""In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress. In this work, we use a curriculum of progressively growing action spaces to accelerate learning. We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space. Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task. We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces.""","""This paper presents a novel approach to learning in problems which have large action spaces with natural hierarchies. The proposed approach involves learning from a curriculum of increasingly larger action spaces to accelerate learning. The method is demonstrated on both small continuous action domains, as well as a Starcraft domain. While this is indeed an interesting paper, there were two major concerns expressed by the reviewers. The first concerns the choice of baselines for comparison, and the second involves improving the discussion and intuition for why the hierarchical approach to growing action spaces will not lead to the agent missing viable solutions. The reviewers felt that neither of these were adequately addressed in the rebuttal, and as such it is to be rejected in its current form."""
1977,"""Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks""","['deep learning', 'training behavior', 'Fourier analysis', 'generalization']","""We study the training process of Deep Neural Networks (DNNs) from the Fourier analysis perspective. We demonstrate a very universal Frequency Principle (F-Principle) --- DNNs often fit target functions from low to high frequencies --- on high-dimensional benchmark datasets, such as MNIST/CIFAR10, and deep networks, such as VGG16. This F-Principle of DNNs is opposite to the learning behavior of most conventional iterative numerical schemes (e.g., Jacobi method), which exhibits faster convergence for higher frequencies, for various scientific computing problems. With a naive theory, we illustrate that this F-Principle results from the regularity of the commonly used activation functions. The F-Principle implies an implicit bias that DNNs tend to fit training data by a low-frequency function. This understanding provides an explanation of good generalization of DNNs on most real datasets and bad generalization of DNNs on parity function or randomized dataset.""","""Borderline decision. The idea is nice, but the theory is not completely convincing. That makes the results in this paper not be significant enough."""
1978,"""Compositional Embeddings: Joint Perception and Comparison of Class Label Sets""","['Embedding', 'One-shot Learning', 'Compositional Representation']","""We explore the idea of compositional set embeddings that can be used to infer not just a single class, but the set of classes associated with the input data (e.g., image, video, audio signal). This can be useful, for example, in multi-object detection in images, or multi-speaker diarization (one-shot learning) in audio. In particular, we devise and implement two novel models consisting of (1) an embedding function f trained jointly with a composite function g that computes set union opera- tions between the classes encoded in two embedding vectors; and (2) embedding f trained jointly with a query function h that computes whether the classes en- coded in one embedding subsume the classes encoded in another embedding. In contrast to prior work, these models must both perceive the classes associated with the input examples, and also encode the relationships between different class label sets. In experiments conducted on simulated data, OmniGlot, and COCO datasets, the proposed composite embedding models outperform baselines based on traditional embedding approaches.""","""The authors propose a new type of compositional embedding (with two proposed variants) for performing tasks that involve set relationships between examples (say, images) containing sets of classes (say, objects). The setting is new and the reviewers are mostly in agreement (after discussion and revision) that the approach is interesting and the results encouraging. There is some concern, however, that the task setup may be too contrived, and that in any real task there could be a more obvious baseline that would do better. For example, one task setup requires that examples be represented via embeddings, and no reference can be made to the original inputs; this is justified in a setting where space is a constraint, but the combination of this setting with the specific set query tasks considered seems quite rare. The paper may be an example of a hammer in search of a nail. The ideas are interesting and the paper is written well, and so the authors can hopefully refine the proposed class of problems toward more practical settings."""
1979,"""Unsupervised Universal Self-Attention Network for Graph Classification""","['Graph embedding', 'graph classification', 'universal self-attention network', 'graph neural network']","""Existing graph embedding models often have weaknesses in exploiting graph structure similarities, potential dependencies among nodes and global network properties. To this end, we present U2GAN, a novel unsupervised model leveraging on the strength of the recently introduced universal self-attention network (Dehghani et al., 2019), to learn low-dimensional embeddings of graphs which can be used for graph classification. In particular, given an input graph, U2GAN first applies a self-attention computation, which is then followed by a recurrent transition to iteratively memorize its attention on vector representations of each node and its neighbors across each iteration. Thus, U2GAN can address the weaknesses in the existing models in order to produce plausible node embeddings whose sum is the final embedding of the whole graph. Experimental results show that our unsupervised U2GAN produces new state-of-the-art performances on a range of well-known benchmark datasets for the graph classification task. It even outperforms supervised methods in most of benchmark cases.""","""All three reviewers are consistently negative on this paper. Thus a reject is recommended."""
1980,"""Representation Learning for Remote Sensing: An Unsupervised Sensor Fusion Approach""","['unsupervised learning', 'representation learning', 'deep learning', 'remote sensing', 'sensor fusion']","""In the application of machine learning to remote sensing, labeled data is often scarce or expensive, which impedes the training of powerful models like deep convolutional neural networks. Although unlabeled data is abundant, recent self-supervised learning approaches are ill-suited to the remote sensing domain. In addition, most remote sensing applications currently use only a small subset of the multi-sensor, multi-channel information available, motivating the need for fused multi-sensor representations. We propose a new self-supervised training objective, Contrastive Sensor Fusion, which exploits coterminous data from multiple sources to learn useful representations of every possible combination of those sources. This method uses information common across multiple sensors and bands by training a single model to produce a representation that remains similar when any subset of its input channels is used. Using a dataset of 47 million unlabeled coterminous image triplets, we train an encoder to produce semantically meaningful representations from any possible combination of channels from the input sensors. These representations outperform fully supervised ImageNet weights on a remote sensing classification task and improve as more sensors are fused.""","""The authors present a method for learning representations of remote sensing images from multiple views. The main ideas is to use the InfoNCE loss to learn from multiple views of the data. The reviewers had a few concerns about this work which were not adequately addressed by the authors. I have summarised these below and would strongly recommend that the authors address these in subsequent submissions: 1) Experiments on a single dataset and a very specific task: Authors should present a more convincing argument about why the chosen dataset and task are challenging and important to demonstrate the main ideas presented in their work. Further, they should also report results on additional datasets suggested by the reviewers. 2) Comparisons with existing works: The reviewers suggested several existing works for comparison. The authors agreed that these were relevant and important but haven't done this comparison yet. Without such a comparison it is hard to evaluate the main contributions of this work. Based on the above objections raised by the reviewers, I recommend that the paper should not be accepted."""
1981,"""Temporal Probabilistic Asymmetric Multi-task Learning""","['Multi-task learning', 'Time-series analysis', 'Variational Inference']","""When performing multi-task predictions with time-series data, knowledge learned for one task at a specific time step may be useful in learning for another task at a later time step (e.g. prediction of sepsis may be useful for prediction of mortality for risk prediction at intensive care units). To capture such dynamically changing asymmetric relationships between tasks and long-range temporal dependencies in time-series data, we propose a novel temporal asymmetric multi-task learning model, which learns to combine features from other tasks at diverse timesteps for the prediction of each task. One crucial challenge here is deciding on the direction and the amount of knowledge transfer, since loss-based knowledge transfer Lee et al. (2016; 2017) does not apply in our case where we do not have loss at each timestep. We propose to tackle this challenge by proposing a novel uncertainty- based probabilistic knowledge transfer mechanism, such that we perform knowledge transfer from more certain tasks with lower variance to uncertain ones with higher variance. We validate our Temporal Probabilistic Asymmetric Multi-task Learning (TP-AMTL) model on two clinical risk prediction tasks against recent deep learning models for time-series analysis, which our model significantly outperforms by successfully preventing negative transfer. Further qualitative analysis of our model by clinicians suggests that the learned knowledge transfer graphs are helpful in analyzing the models predictions.""","""The authors propose a method for multi-task learning with time series data. The reviewers found the paper interesting, but the majority found the description of the method in the paper confusing and several technical details missing. Moreover, the reviewers were not convinced that the technique used for uncertainty quantification of the features at each stage of the time series is well founded."""
1982,"""Curvature-based Robustness Certificates against Adversarial Examples""","['Adversarial examples', 'Robustness certificates', 'Adversarial attacks', 'Machine Learning Security']","""A robustness certificate against adversarial examples is the minimum distance of a given input to the decision boundary of the classifier (or its lower bound). For {\it any} perturbation of the input with a magnitude smaller than the certificate value, the classification output will provably remain unchanged. Computing exact robustness certificates for deep classifiers is difficult in general since it requires solving a non-convex optimization. In this paper, we provide computationally-efficient robustness certificates for deep classifiers with differentiable activation functions in two steps. First, we show that if the eigenvalues of the Hessian of the network (curvatures of the network) are bounded, we can compute a robustness certificate in the pseudo-formula norm efficiently using convex optimization. Second, we derive a computationally-efficient differentiable upper bound on the curvature of a deep network. We also use the curvature bound as a regularization term during the training of the network to boost its certified robustness against adversarial examples. Putting these results together leads to our proposed {\bf C}urvature-based {\bf R}obustness {\bf C}ertificate (CRC) and {\bf C}urvature-based {\bf R}obust {\bf T}raining (CRT). Our numerical results show that CRC outperforms CROWN's certificate by an order of magnitude while CRT leads to higher certified accuracy compared to standard adversarial training and TRADES. ""","""This paper presents a upper bound on the curvature of a deep network. After the discussion, the author has addressed some concerns of reviwers, but the results are not very strong, there is some limitation on the applications. There is no strong support for this paper. Due to the high standard of ICLR, the acceptance of the paper need strong results in terms of theory or experiments."""
1983,"""Disentangled Cumulants Help Successor Representations Transfer to New Tasks""","['reinforcement learning', 'representation learning', 'intrinsic reward', 'intrinsic control', 'endogenous', 'generalized policy improvement', 'successor features', 'variational', 'monet', 'disentangled']","""Biological intelligence can learn to solve many diverse tasks in a data efficient manner by re-using basic knowledge and skills from one task to another. Furthermore, many of such skills are acquired through something called latent learning, where no explicit supervision for skill acquisition is provided. This is in contrast to the state-of-the-art reinforcement learning agents, which typically start learning each new task from scratch and struggle with knowledge transfer. In this paper we propose a principled way to learn and recombine a basis set of policies, which comes with certain guarantees on the coverage of the final task space. In particular, we construct a learning pipeline where an agent invests time to learn to perform intrinsically generated, goal-based tasks, and subsequently leverages this experience to quickly achieve a high level of performance on externally specified, often significantly more complex tasks through generalised policy improvement. We demonstrate both theoretically and empirically that such goal-based intrinsic tasks produce more transferable policies when the goals are specified in a space that exhibits a form of disentanglement. ""","""The author propose a method to first learn policies for intrinsically generated goal-based tasks, and then leverage the learned representations to improve the learning of a new task in a generalized policy iteration framework. The reviewers had significant issues about clarity of writing that were largely addressed in the rebuttal. However, there were also concerns about the magnitude of the contribution (especially if it was added anything significant to the existing literature on GPI, successor features, etc), and the simplicity (and small number of) test domains. These concerns persisted after the rebuttal and discussion. Thus, I recommend rejection at this time."""
1984,"""Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates""","['learning with noisy labels', 'empirical risk minimization', 'peer loss']","""Learning with noisy labels is a common problem in supervised learning. Existing approaches require practitioners to specify noise rates, i.e., a set of parameters controlling the severity of label noises in the problem. In this work, we introduce a technique to learn from noisy labels that does not require a priori specification of the noise rates. In particular, we introduce a new family of loss functions that we name as peer loss functions. Our approach then uses a standard empirical risk minimization (ERM) framework with peer loss functions. Peer loss functions associate each training sample with a certain form of ""peer"" samples, which evaluate a classifier' predictions jointly. We show that, under mild conditions, performing ERM with peer loss functions on the noisy dataset leads to the optimal or a near optimal classifier as if performing ERM over the clean training data, which we do not have access to. To our best knowledge, this is the first result on ""learning with noisy labels without knowing noise rates"" with theoretical guarantees. We pair our results with an extensive set of experiments, where we compare with state-of-the-art techniques of learning with noisy labels. Our results show that peer loss functions based method consistently outperforms the baseline benchmarks. Peer loss provides a way to simplify model development when facing potentially noisy training labels, and can be promoted as a robust candidate loss function in such situations. ""","""Thank you very much for the detailed feedback to the reviewers, which helped us better understand your paper. Thanks also for revising the manuscript significantly; many parts were indeed revised. However, due to the major revision, we find more points to be further discussed, which requires another round of reviews/rebuttals. For this reason, we decided not to accept this paper. We hope that the reviewers' comments are useful for improving the paper for potential future publication. """
1985,"""Variational Hetero-Encoder Randomized GANs for Joint Image-Text Modeling""","['Deep topic model', 'image generation', 'text generation', 'raster-scan-GAN', 'zero-shot learning']","""For bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN), a versatile deep generative model that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework. VHE randomized GAN (VHE-GAN) encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator. We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance. This further motivates the development of VHE-raster-scan-GAN that generates photo-realistic images in not only a multi-scale low-to-high-resolution manner, but also a hierarchical-semantic coarse-to-fine fashion. By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks. ""","""This paper proposes a bidirectional joint image-text model using a variational hetero-encoder (VHE) randomized generative adversarial network (GAN). The proposed VHE-GAN model encodes an image to decode its associated text. Three reviewers have split reviews. Reviewer #3 is overall positive about this work. Reviewer #1 rated weak acceptance, while request more comparison with latest works. Reviewer #2 rated weak reject raised concerns on the motivation of the approach, the lack of ablation and lack of comparison with the latest work. During the rebuttal, the authors provide additional comparison and ablation, which seem to address the major concerns. Given the overall positive feedback and the quality of rebuttal, the AC recommends acceptance."""
1986,"""Learning deep graph matching with channel-independent embedding and Hungarian attention""","['deep graph matching', 'edge embedding', 'combinatorial problem', 'Hungarian loss']","""Graph matching aims to establishing node-wise correspondence between two graphs, which is a classic combinatorial problem and in general NP-complete. Until very recently, deep graph matching methods start to resort to deep networks to achieve unprecedented matching accuracy. Along this direction, this paper makes two complementary contributions which can also be reused as plugin in existing works: i) a novel node and edge embedding strategy which stimulates the multi-head strategy in attention models and allows the information in each channel to be merged independently. In contrast, only node embedding is accounted in previous works; ii) a general masking mechanism over the loss function is devised to improve the smoothness of objective learning for graph matching. Using Hungarian algorithm, it dynamically constructs a structured and sparsely connected layer, taking into account the most contributing matching pairs as hard attention. Our approach performs competitively, and can also improve state-of-the-art methods as plugin, regarding with matching accuracy on three public benchmarks.""","""This paper proposed a new graph matching approach. The main contribution is a Hungarian attention mechanism, which dynamically generates links in computational graph. The resulting matching algorithm is tested on vision tasks. The main concern of reviews is that the general matching algorithm is only tested on vision tasks. The authors partially addressed this problem by providing new experimental results with only geometric edge features. Other comments of Blind Review #2 are about some minor questions, which have also been answered by the authors. Overall, this paper proposed a promising graph match approach and I tend to accept it. """
1987,"""Attention on Abstract Visual Reasoning""","['Transformer Networks', 'Self-Attention', 'Wild Relation Networks', 'Procedurally Generated Matrices']","""Attention mechanisms have been boosting the performance of deep learning models on a wide range of applications, ranging from speech understanding to program induction. However, despite experiments from psychology which suggest that attention plays an essential role in visual reasoning, the full potential of attention mechanisms has so far not been explored to solve abstract cognitive tasks on image data. In this work, we propose a hybrid network architecture, grounded on self-attention and relational reasoning. We call this new model Attention Relation Network (ARNe). ARNe combines features from the recently introduced Transformer and the Wild Relation Network (WReN). We test ARNe on the Procedurally Generated Matrices (PGMs) datasets for abstract visual reasoning. ARNe excels the WReN model on this task by 11.28 ppt. Relational concepts between objects are efficiently learned demanding only 35% of the training samples to surpass reported accuracy of the base line model. Our proposed hybrid model, represents an alternative on learning abstract relations using self-attention and demonstrates that the Transformer network is also well suited for abstract visual reasoning.""","""This work proposes a new architecture for abstract visual reasoning called ""Attention Relation Network"" (ARNe), based on Transformer-style soft attention and relation networks, which the authors show to improve on the ""Wild Relation Network"" (WReN). The authors test their network on the PGM dataset, and demonstrate a non-trivial improvement over previously reported baselines. The paper is well written and makes an interesting contribution, but the reviewers expressed some criticisms, including technical novelty, unfinished experiments (and lack of experimental details), and somewhat weak experimental results, which suggest that the proposed ARNe model does not work well when training with weaker supervision without meta-targets. Even though the authors addressed some concerns in their revised version (namely, they added new experiments in the extrapolation split of PGM and experiments on the new RAVEN dataset), I feel the paper is not yet ready for publication at ICLR. """
1988,"""Universal Approximation with Certified Networks""","['adversarial robustness', 'universal approximation', 'certified network', 'interval bound propagation']","""Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks. However, it is currently very difficult to train a neural network that is both accurate and certifiably robust. In this work we take a step towards addressing this challenge. We prove that for every continuous function pseudo-formula , there exists a network pseudo-formula such that: (i) pseudo-formula approximates pseudo-formula arbitrarily close, and (ii) simple interval bound propagation of a region pseudo-formula through pseudo-formula yields a result that is arbitrarily close to the optimal output of pseudo-formula on pseudo-formula . Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.""","""This work shows that there exist neural networks that can be certified by interval bound propagation. It provides interesting and surprising theoretical insights, although analysis requires the networks to be impractically large and hence does not directly yield practical advances. """
1989,"""Split LBI for Deep Learning: Structural Sparsity via Differential Inclusion Paths""",[],"""Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error. However, compressive networks are desired in many real world applications and direct training of small networks may be trapped in local optima. In this paper, instead of pruning or distilling over-parameterized models to compressive ones, we propose a new approach based on \emph{differential inclusions of inverse scale spaces}, that generates a family of models from simple to complex ones by coupling gradient descent and mirror descent to explore model structural sparsity. It has a simple discretization, called the Split Linearized Bregman Iteration (SplitLBI), whose global convergence analysis in deep learning is established that from any initializations, algorithmic iterations converge to a critical point of empirical risks. Experimental evidence shows that\ SplitLBI may achieve state-of-the-art performance in large scale training on ImageNet-2012 dataset etc., while with \emph{early stopping} it unveils effective subnet architecture with comparable test accuracies to dense models after retraining instead of pruning well-trained ones.""","""This paper investigates an existing method for fitting sparse neural networks, and provides a novel proof of global convergence. Overall, this seems like a sensible, if marginal, contribution. However, there were serious red flags regarding the care which which the scholarship was done which make me deem the current submission unsuitable for publication. In particular, two points raised by R4, which were not addressed even after the rebuttal: 1) ""One important issue with the paper is that it blurs the distinction between prior work and the new contribution. For example, the subsection on Split Linearized Bregman Iteration in the ""Methodology"" section does not contain anything new compared to [1], and this is not clear enough to the reader."" 2) ""The newly-written conclusion is still incorrect, stating again that Split LBI achieves SOTA performance on ImageNet."" I believe that R3's high score is due to not noticing these unsupported or misleading claims. """
1990,"""Laplacian Denoising Autoencoder""","['unsupervised', 'representation learning', 'Laplacian']","""While deep neural networks have been shown to perform remarkably well in many machine learning tasks, labeling a large amount of supervised data is usually very costly to scale. Therefore, learning robust representations with unlabeled data is critical in relieving human effort and vital for many downstream applications. Recent advances in unsupervised and self-supervised learning approaches for visual data benefit greatly from domain knowledge. Here we are interested in a more generic unsupervised learning framework that can be easily generalized to other domains. In this paper, we propose to learn data representations with a novel type of denoising autoencoder, where the input noisy data is generated by corrupting the clean data in gradient domain. This can be naturally generalized to span multiple scales with a Laplacian pyramid representation of the input data. In this way, the agent has to learn more robust representations that can exploit the underlying data structures across multiple scales. Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach, compared to its counterpart with single-scale corruption. Besides, we also demonstrate that the learned representations perform well when transferring to other vision tasks.""","""The main idea proposed by the work is interesting. The reviewers had several concerns about applicability and the extent of the empirical work. The authors responded to all the comments, added more experiments, and as reviewer 2 noted, the method is interesting because of its ability to handle local noise. Despite the author's helpful responses, the ratings were not increased, and it is still hard to assess the exact extent of how the proposed approach improves over state of the art. Because some concerns remained, and due to a large number of stronger papers, this paper was not accepted at this time."""
1991,"""HyperEmbed: Tradeoffs Between Resources and Performance in NLP Tasks with Hyperdimensional Computing enabled embedding of n-gram statistics ""","['NLP', 'Hyperdimensional computing', 'n-gram statistics', 'word representation', 'semantic hashing']","""Recent advances in Deep Learning have led to a significant performance increase on several NLP tasks, however, the models become more and more computationally demanding. Therefore, this paper tackles the domain of computationally efficient algorithms for NLP tasks. In particular, it investigates distributed representations of n-gram statistics of texts. The representations are formed using hyperdimensional computing enabled embedding. These representations then serve as features, which are used as input to standard classifiers. We investigate the applicability of the embedding on one large and three small standard datasets for classification tasks using nine classifiers. The embedding achieved on par F1 scores while decreasing the time and memory requirements by several times compared to the conventional n-gram statistics, e.g., for one of the classifiers on a small dataset, the memory reduction was 6.18 times; while train and test speed-ups were 4.62 and 3.84 times, respectively. For many classifiers on the large dataset, the memory reduction was about 100 times and train and test speed-ups were over 100 times. More importantly, the usage of distributed representations formed via hyperdimensional computing allows dissecting the strict dependency between the dimensionality of the representation and the parameters of n-gram statistics, thus, opening a room for tradeoffs.""","""The reviewers were unanimous that this submission is not ready for publication at ICLR in its present form. Concerns raised included lack of relevant baselines, and lack of sufficient justification of the novelty and impact of the approach."""
1992,"""Statistical Adaptive Stochastic Optimization""",[],"""We investigate statistical methods for automatically scheduling the learning rate (step size) in stochastic optimization. First, we consider a broad family of stochastic optimization methods with constant hyperparameters (including the learning rate and various forms of momentum) and derive a general necessary condition for the resulting dynamics to be stationary. Based on this condition, we develop a simple online statistical test to detect (non-)stationarity and use it to automatically drop the learning rate by a constant factor whenever stationarity is detected. Unlike in prior work, our stationarity condition and our statistical test apply to different algorithms without modification. Finally, we propose a smoothed stochastic line-search method that can be used to warm up the optimization process before the statistical test can be applied effectively. This removes the expensive trial and error involved in setting a good initial learning rate. The combined method is highly autonomous and attains state-of-the-art training and testing performance in our experiments on several deep learning tasks.""","""The paper proposes an approach to automatically tune the learning rate by using a statistical test that detects the stationarity of the learning dynamics. It also proposes a robust line-search algorithm to reduce the need to tune the initial learning rate. The statistical test uses a test function, which is taken to be a quadratic function in the paper for simplicity, although any choice of test function is valid. Although the method itself is interesting, the empirical benefits over SGD/ADAM seem to be minor."""
1993,"""FRICATIVE PHONEME DETECTION WITH ZERO DELAY""","['fricative detection', 'phoneme detection', 'speech recognition', 'deep learning', 'hearing aids', 'zero delay', 'extrapolation', 'TIMIT']","""People with high-frequency hearing loss rely on hearing aids that employ frequency lowering algorithms. These algorithms shift some of the sounds from the high frequency band to the lower frequency band, where the sounds become more perceptible for people with the condition. Fricative phonemes have an important part of their content concentrated in high frequency bands. It is important that the frequency lowering algorithm is activated exactly for the duration of a fricative phoneme and kept off at all other times. Therefore, timely (with zero delay) and accurate fricative phoneme detection is a key problem for high quality hearing aids. In this paper we present a deep learning based fricative phoneme detection algorithm that has zero detection delay and achieves state-of-the-art fricative phoneme detection accuracy on the TIMIT Speech Corpus. All reported results are reproducible and come with easy-to-use code that could serve as a baseline for future research.""","""The reviewers appreciate the importance of the problem, and one reviewer particularly appreciated the gains in performance. However, two reviewers raised concerns about limited novelty and missing comparisons to prior work. While the rebuttal helped address these concerns, the novelty is still limited. The authors are encouraged to revise the presentation to clarify the novelty."""
1994,"""Enhancing Language Emergence through Empathy""","['multi-agent deep reinforcement learning', 'emergent communication', 'auxiliary tasks']","""The emergence of language in multi-agent settings is a promising research direction to ground natural language in simulated agents. If an AI could understand the meaning of language through using it, it could also transfer that understanding to other situations flexibly. This is seen as an important step towards achieving general AI. The scope of emergent communication, however, is still limited. It is necessary to enhance the learning possibilities for skills associated with communication to increase the complexity that can emerge. We take an example from human language acquisition and the importance of the empathic connection in this process. We propose an approach to introduce the notion of empathy to multi-agent deep reinforcement learning. We extend existing approaches on referential games with an auxiliary task for the speaker to predict the listener's mind change, improving the learning time. Our experiments show the high potential of this architectural element by doubling the learning speed of the test setup.""","""This paper introduces the idea of ""empathy"" to improve learning in communication emergence. The reviewers all agree that the idea is interesting and well described. However, the paper clearly falls short of delivering the detailed and sufficient experiments and results needed to demonstrate whether and how the idea works. I thank the authors for submitting this research to ICLR and encourage following up on the reviewers' comments and suggestions for a future submission."""
1995,"""Unsupervised Model Selection for Variational Disentangled Representation Learning""","['unsupervised disentanglement metric', 'disentangling', 'representation learning']","""Disentangled representations have recently been shown to improve fairness, data efficiency and generalisation in simple supervised and reinforcement learning tasks. To extend the benefits of disentangled representations to more complex domains and practical applications, it is important to enable hyperparameter tuning and model selection of existing unsupervised approaches without requiring access to ground truth attribute labels, which are not available for most datasets. This paper addresses this problem by introducing a simple yet robust and reliable method for unsupervised disentangled model selection. We show that our approach performs comparably to the existing supervised alternatives across 5400 models from six state-of-the-art unsupervised disentangled representation learning model classes. Furthermore, we show that the ranking produced by our approach correlates well with the final task performance in two different domains.""","""The authors address the important and understudied problem of tuning of unsupervised models, in particular variational models for learning disentangled representations. They propose an unsupervised measure for model selection that correlates well with performance on multiple tasks. After significant fruitful discussion with the reviewers and resulting revisions, many reviewer concerns have been addressed. There are some remaining concerns that there may still be a gap in the theoretical basis for the application of the proposed measure to some models, that for different downstream tasks the best model selection criteria may vary, and that the method might be too cumbersome and not quite reliable enough for practitioners to use it broadly. All of that being said, the reviewers (and I) agree that the approach is sufficiently interesting, and the empirical results sufficiently convincing, to make the paper a good contribution and hopefully motivation for additional methods addressing this problem."""
1996,"""Multi-Task Learning via Scale Aware Feature Pyramid Networks and Effective Joint Head""","['Multi-Task Learning', 'Object Detection', 'Instance Segmentation']","""As a concise and classic framework for object detection and instance segmentation, Mask R-CNN achieves promising performance in both tasks. However, considering stronger feature representations for Mask R-CNN-style frameworks, there is room for improvement in two aspects. On the one hand, performing multi-task prediction needs more credible feature extraction and multi-scale feature integration to handle objects with varied scales. In this paper, we address this problem by using a novel neck module called SA-FPN (Scale Aware Feature Pyramid Networks). With the enhanced feature representations, our model can accurately detect and segment objects at multiple scales. On the other hand, in the Mask R-CNN framework, isolation between the parallel detection branch and the instance segmentation branch exists, causing a gap between the training and testing processes. To narrow this gap, we propose a unified head module named EJ-Head (Effective Joint Head) to combine the two branches into one head, not only realizing the interaction between the two tasks, but also enhancing the effectiveness of multi-task learning. Comprehensive experiments show that our proposed methods bring noticeable gains for object detection and instance segmentation. In particular, our model outperforms the original Mask R-CNN by 1~2 percent AP in both the object detection and instance segmentation tasks on the MS-COCO benchmark. Code will be available soon.""","""All three reviewers gave scores of Weak Reject. Only a brief rebuttal was offered, which did not change the scores. Thus the paper cannot be accepted."""
1997,"""SSE-PT: Sequential Recommendation Via Personalized Transformer""","['sequential recommendation', 'personalized transformer', 'stochastic shared embeddings']","""Temporal information is crucial for recommendation problems because user preferences are naturally dynamic in the real world. Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to the widely used RNNs and CNNs in natural language processing, have allowed for better use of the temporal ordering of items that each user has engaged with. In particular, the SASRec model, inspired by the popular Transformer model in natural language processing, has achieved state-of-the-art results. However, SASRec, just like the original Transformer model, is inherently an un-personalized model and does not include personalized user embeddings. To overcome this limitation, we propose a Personalized Transformer (SSE-PT) model, outperforming SASRec by almost 5% in terms of NDCG@10 on 5 real-world datasets. Furthermore, after examining some random users' engagement history, we find our model not only more interpretable but also able to focus on recent engagement patterns for each user. Moreover, our SSE-PT model with a slight modification, which we call SSE-PT++, can handle extremely long sequences and outperform SASRec in ranking results with comparable training speed, striking a balance between performance and speed requirements. Our novel application of the Stochastic Shared Embeddings (SSE) regularization is essential to the success of personalization. Code and data are open-sourced at pseudo-url.""","""The paper proposes to improve sequential recommendation by extending SASRec (from prior work) with a user embedding and SSE regularization. The authors show that the proposed method outperforms several baselines on five datasets. The paper received two weak accepts and one reject. Reviewers expressed concerns about the limited/scattered technical contribution. Reviewers were also concerned about the quality of the experimental results and the need to compare against more baselines. After examining some related work, the AC agrees with the reviewers that there is also much recent relevant work, such as BERT4Rec, that should be cited and discussed. It would make the paper stronger if the authors could demonstrate that adding the user embedding to another method such as BERT4Rec can improve the performance of that model. Regarding R3's concerns about the comparison against HGN, the authors indicate there are differences in the length of sequences considered and that some methods may work better for shorter sequences while their method works better for longer sequences. These details seem important to include in the paper. In the AC's opinion, the paper quality is borderline and the work is of limited interest to the ICLR community. Such work would be more appreciated in the recommender systems community. The authors are encouraged to improve the paper with an improved discussion of more recent work such as BERT4Rec, add comparisons against these more recent works, incorporate various suggestions from the reviewers, and resubmit to an appropriate venue."""
1998,"""Uncertainty-guided Continual Learning with Bayesian Neural Networks""","['continual learning', 'catastrophic forgetting']","""Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' \textit{importance}. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB), where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in the network. Uncertainty is a natural way to identify \textit{what to remember} and \textit{what to change} as we continually learn, and thus mitigate catastrophic forgetting. We also show a variant of our model, which uses uncertainty for weight pruning and retains task performance after pruning by saving binary masks per task. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e. it does not presume knowledge of which task a sample belongs to.""","""While prior work has shown the potential of using uncertainty to tackle catastrophic forgetting (e.g. by appropriate updates to the posterior), this paper goes further and proposes a strategy to adapt the learning rate based on the uncertainty. This is a very reasonable idea since, in practice, learning rate control is one of the simplest and most understood techniques for fighting catastrophic forgetting. The overall approach ends up being a well-motivated strategy for controlling the learning rate of the parameters according to a notion of their ""importance"". Of course, the question now is whether this work uses a good proxy for ""importance"", so further ablation studies would help, but the current results already show a clear benefit."""
1999,"""Variational Constrained Reinforcement Learning with Application to Planning at Roundabout""","['Safe reinforcement learning', 'Autonomous driving', 'obstacle avoidance']","""Planning at roundabouts is crucial for autonomous driving in urban and rural environments. Reinforcement learning is promising not only for dealing with complicated environments but also for taking safety constraints into account, as in a constrained Markov Decision Process. However, the safety constraints must be explicitly formulated mathematically, which is challenging for planning at roundabouts due to the unpredictable dynamic behavior of obstacles. Therefore, it is desirable to discriminate the obstacles' states as either safe or unsafe, which is known as situation awareness modeling. In this paper, we combine variational learning and constrained reinforcement learning to simultaneously learn a Conditional Representation Model (CRM) that encodes the states into safe and unsafe distributions respectively, as well as to learn the corresponding safe policy. Our approach is evaluated using the Simulation of Urban Mobility (SUMO) traffic simulator and can generalize to various traffic flows.""","""This paper proposes to add constraints to the RL problem within a variational method. The hope is to distinguish safe from unsafe states. The reviewers were not convinced that this paper makes the cut for ICLR. Moreover, there was no rebuttal from the authors, so the reviewers had no occasion to reconsider their opinions. Based on the current ratings, I recommend rejecting this paper."""
2000,"""Efficient Riemannian Optimization on the Stiefel Manifold via the Cayley Transform""","['Orthonormality', 'Efficient Riemannian Optimization', 'the Stiefel manifold']","""Strictly enforcing orthonormality constraints on parameter matrices has been shown to be advantageous in deep learning. This amounts to Riemannian optimization on the Stiefel manifold, which, however, is computationally expensive. To address this challenge, we present two main contributions: (1) A new efficient retraction map based on an iterative Cayley transform for optimization updates, and (2) An implicit vector transport mechanism based on the combination of a projection of the momentum and the Cayley transform on the Stiefel manifold. We specify two new optimization algorithms: Cayley SGD with momentum, and Cayley ADAM on the Stiefel manifold. Convergence of Cayley SGD is theoretically analyzed. Our experiments for CNN training demonstrate that both algorithms: (a) Use less running time per iteration relative to existing approaches that enforce orthonormality of CNN parameters; and (b) Achieve faster convergence rates than the baseline SGD and ADAM algorithms without compromising the performance of the CNN. Cayley SGD and Cayley ADAM are also shown to reduce the training time for optimizing the unitary transition matrices in RNNs.""","""This paper presents a method for optimizing parameter matrices of deep learning objectives while enforcing orthonormality constraints. While advantageous in certain respects, such constraints can be expensive to maintain when using existing methods. To address this issue, a new algorithm is proposed based on the Cayley transform and analyzed in terms of convergence. After the discussion period, two reviewers supported acceptance while one still voted for rejection. Consequently, in recommending acceptance here for a poster, it is worth examining the significance of unresolved concerns. First, the rejecting reviewer raised the valid point that the convergence proof relies on the assumption of Lipschitz continuous gradients, and yet the experiments use ReLU activation functions that do not satisfy this criterion. In my view though, it is sometimes reasonable to derive useful theory under the assumption of Lipschitz continuous derivatives that nonetheless provides insight into the case where these derivatives may not be Lipschitz on a set of measure zero (which would be the case with ReLU activations). So while ideally it might be nice to extend the theory to remove this assumption, the algorithm seems to work fine with ReLU activations in practice. And this seems reasonable given the improbability of any iterate exactly hitting the measure-zero points where the gradients are discontinuous. Beyond this issue, some criticisms were mentioned in terms of how and where the timing comparisons were presented. However, I believe that these issues can be easily remedied in a final revision."""
2001,"""Data-dependent Gaussian Prior Objective for Language Generation""","['Gaussian Prior Objective', 'Language Generation']","""For typical sequence prediction problems such as language generation, maximum likelihood estimation (MLE) has commonly been adopted as it encourages the predicted sequence most consistent with the ground-truth sequence to have the highest probability of occurring. However, MLE focuses on once-to-all matching between the predicted sequence and the gold standard, consequently treating all incorrect predictions as being equally incorrect. We refer to this drawback as {\it negative diversity ignorance} in this paper. Treating all incorrect predictions as equal unfairly downplays the nuance of these sequences' detailed token-wise structure. To counteract this, we augment the MLE loss by introducing an extra Kullback--Leibler divergence term derived by comparing a data-dependent Gaussian prior and the detailed training prediction. The proposed data-dependent Gaussian prior objective (D2GPo) is defined over a prior topological order of tokens and is poles apart from the data-independent Gaussian prior (L2 regularization) commonly adopted in smoothing the training of MLE. Experimental results show that the proposed method makes effective use of a more detailed prior in the data and has improved performance in typical language generation tasks, including supervised and unsupervised machine translation, text summarization, storytelling, and image captioning.""","""This paper addresses the problem of poor generation quality in models for text generation that results from the use of the maximum likelihood (ML) loss, in particular the fact that the ML loss does not differentiate between different ""incorrect"" generated outputs (ones that do not match the corresponding training sequence). The authors propose to train text generation models with an additional loss term that measures the distance from the ground truth via a Gaussian distribution based on embeddings of the ground-truth tokens. This is not the first attempt to address drawbacks of ML training for text generation, but it is simple and intuitive, and produces improvements over the state of the art on a range of tasks. The reviewers are all quite positive, and are in agreement that the author responses and revisions have improved the paper quality and addressed initial concerns. I think this work will be broadly appreciated by the ICLR audience. One negative point is that the writing quality still needs improvement."""
2002,"""Last-iterate convergence rates for min-max optimization""","['min-max optimization', 'zero-sum game', 'saddle point', 'last-iterate convergence', 'non-asymptotic convergence', 'global rates', 'Hamiltonian', 'sufficiently bilinear']","""While classic work in convex-concave min-max optimization relies on average-iterate convergence results, the emergence of nonconvex applications such as training Generative Adversarial Networks has led to renewed interest in last-iterate convergence guarantees. Proving last-iterate convergence is challenging because many natural algorithms, such as Simultaneous Gradient Descent/Ascent, provably diverge or cycle even in simple convex-concave min-max settings, and previous work on global last-iterate convergence rates has been limited to the bilinear and convex-strongly concave settings. In this work, we show that the Hamiltonian Gradient Descent (HGD) algorithm achieves linear convergence in a variety of more general settings, including convex-concave problems that satisfy a sufficiently bilinear condition. We also prove similar convergence rates for some parameter settings of the Consensus Optimization (CO) algorithm of Mescheder et al. 2017.""","""This paper provides a simple analysis of an existing algorithm for min-max optimization under some favorable assumptions. The paper is clean and nice, though it unfortunately lands just below the borderline. I urge the authors to continue their interesting work and, amongst other things, address the reviewer comments, for example those on stochastic gradient descent."""
2003,"""DIVA: Domain Invariant Variational Autoencoder""","['representation learning', 'generative models', 'domain generalization', 'invariance']","""We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain. We propose the Domain Invariant Variational Autoencoder (DIVA), a generative model that tackles this problem by learning three independent latent subspaces, one for the domain, one for the class, and one for any residual variations. We highlight that due to the generative nature of our model we can also incorporate unlabeled data from known or previously unseen domains. To the best of our knowledge this has not been done before in a domain generalization setting. This property is highly desirable in fields like medical imaging where labeled data is scarce. We experimentally evaluate our model on the rotated MNIST benchmark and a malaria cell images dataset where we show that (i) the learned subspaces are indeed complementary to each other, (ii) we improve upon recent works on this task, and (iii) incorporating unlabelled data can boost the performance even further.""","""This paper addresses the problem of domain generalization. The proposed solution, DIVA, introduces a domain invariant variational autoencoder. The latent space can be decomposed into three components: category specific, domain specific, and residual. The authors argue that each component is necessary to capture all relevant information while keeping the latent space interpretable. This work received mixed scores. Two reviewers recommended weak reject while one reviewer recommended weak accept. There was extensive discussion between the reviewers and authors as well as amongst the reviewers. All reviewers agreed this is an important problem statement and that this work offers a compelling initial approach and experiments for domain generalization. There was disagreement as to whether the contributions as they stand were sufficient for acceptance. Some reviewers were concerned about similarity to [ref1]; that work appeared close to the time of the ICLR submission and is therefore considered concurrent. However, despite this, there was significant confusion over the proposed solution and whether it is uniquely useful for domain generalization or for other areas like adaptation or transfer learning, with reviewers arguing that experiments in these other settings would have helped showcase the benefits of the proposed approach. In addition, there was inconclusive evidence as to whether the two latent components were necessary. Considering all discussions, reviews, and rebuttals, the AC does not recommend this work for acceptance. The contribution and proposed solution need substantial clarification, and the experiments need additional analysis to explain under what conditions each latent component is needed, either to improve performance or for interpretability."""
2004,"""Why Convolutional Networks Learn Oriented Bandpass Filters: A Hypothesis""","['convolutional networks', 'computer vision', 'oriented bandpass filters', 'linear systems theory']","""It has been repeatedly observed that convolutional architectures when applied to image understanding tasks learn oriented bandpass filters. A standard explanation of this result is that these filters reflect the structure of the images that they have been exposed to during training: Natural images typically are locally composed of oriented contours at various scales and oriented bandpass filters are matched to such structure. The present paper offers an alternative explanation based not on the structure of images, but rather on the structure of convolutional architectures. In particular, complex exponentials are the eigenfunctions of convolution. These eigenfunctions are defined globally; however, convolutional architectures operate locally. To enforce locality, one can apply a windowing function to the eigenfunctions, which leads to oriented bandpass filters as the natural operators to be learned with convolutional architectures. From a representational point of view, these filters allow for a local systematic way to characterize and operate on an image or other signal.""","""This paper proposes an alternative explanation of the emergence of oriented bandpass filters in convolutional networks: rather than reflecting observed structure in images, these filters would be a consequence of the convolutional architecture itself and its eigenfunctions. Reviewers agree that the mathematical angle taken by the paper is interesting; however, they also point out that crucial prior work making the same points exists, and that more thorough insights and analyses would be needed to make a more solid paper. Given the closeness to prior work, we cannot recommend acceptance in this form."""
2005,"""Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing""","['Certified Adversarial Robustness', 'Randomized Smoothing', 'Adversarial Examples']","""It is well-known that classifiers are vulnerable to adversarial perturbations. To defend against adversarial perturbations, various certified robustness results have been derived. However, existing certified robustness results are limited to top-1 predictions. In many real-world applications, top-$k$ predictions are more relevant. In this work, we aim to derive certified robustness for top-$k$ predictions. In particular, our certified robustness is based on randomized smoothing, which turns any classifier into a new classifier by adding noise to an input example. We adopt randomized smoothing because it is scalable to large-scale neural networks and applicable to any classifier. We derive a tight robustness bound in the $\ell_2$ norm for top-$k$ predictions when using randomized smoothing with Gaussian noise. We find that generalizing the certified robustness from top-1 to top-$k$ predictions faces significant technical challenges. We also empirically evaluate our method on CIFAR10 and ImageNet. For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8\% when the $\ell_2$-norms of the adversarial perturbations are less than 0.5 (=127/255). Our code is publicly available at: \url{pseudo-url}.""","""The paper extends the work on randomized smoothing for certifiably robust classifiers developed in prior work to a weaker specification requiring that the set of top-k predictions remain unchanged under adversarial perturbations of the input (rather than just the top-1). This enables the authors to achieve stronger results on the robustness of classifiers on CIFAR10 and ImageNet (where the authors report the top-5 accuracy). This is an interesting extension of certified defenses that is likely to be relevant for complex prediction tasks with several classes (ImageNet and beyond), where top-1 robustness may be difficult and unrealistic to achieve. The reviewers were in consensus on acceptance and minor concerns were alleviated during the rebuttal phase. I therefore recommend acceptance."""
2006,"""Recurrent neural circuits for contour detection""","['Contextual illusions', 'visual cortex', 'recurrent feedback', 'neural circuits']","""We introduce a deep recurrent neural network architecture that approximates visual cortical circuits (Mély et al., 2018). We show that this architecture, which we refer to as the $\gamma$-net, learns to solve contour detection tasks with better sample efficiency than state-of-the-art feedforward networks, while also exhibiting a classic perceptual illusion, known as the orientation-tilt illusion. Correcting this illusion significantly reduces $\gamma$-net contour detection accuracy by driving it to prefer low-level edges over high-level object boundary contours. Overall, our study suggests that the orientation-tilt illusion is a byproduct of neural circuits that help biological visual systems achieve robust and efficient contour detection, and that incorporating these circuits in artificial neural networks can improve computer vision.""","""All the reviewers recommend acceptance, and they found the paper interesting and novel."""
2007,"""Robust Reinforcement Learning via Adversarial Training with Langevin Dynamics""","['deep reinforcement learning', 'robust reinforcement learning', 'min-max problem']","""We re-think Two-Player Reinforcement Learning (RL) as an instance of a distribution sampling problem in infinite dimensions. Using the powerful Stochastic Gradient Langevin Dynamics, we propose a new two-player RL algorithm, which is a sampling variant of the two-player policy gradient method. Our new algorithm consistently outperforms existing baselines, in terms of generalization across differing training and testing conditions, on several MuJoCo environments.""","""The authors address the problem of robust reinforcement learning. They propose an adversarial perspective on robustness. Improving robustness can now be seen as two agents playing a competitive game, which means that in many cases the first agent needs to play a mixed strategy. The authors propose an algorithm for optimizing such mixed strategies. Although the reviewers are convinced of the relevance of the work (as a first approach using Bayesian learning to reach mixed Nash equilibria, which is useful not only for robustness but for any problem that can be formulated as a zero-sum game requiring a mixed strategy), they are not completely convinced by the work in its current state. Three of the reviewers commented that the experiments are not rigorous and convincing enough in their current form, and thus could not (yet!) recommend acceptance to ICLR."""
2008,"""Scalable Deep Neural Networks via Low-Rank Matrix Factorization""","['Deep Learning', 'Deep Neural Networks', 'Low-Rank Matrix Factorization', 'Model Compression']","""Compressing deep neural networks (DNNs) is important for real-world applications operating on resource-constrained devices. However, it is difficult to change the model size once training is completed, which requires re-training to configure models suitable for different devices. In this paper, we propose a novel method that enables DNNs to flexibly change their size after training. We factorize the weight matrices of the DNNs via singular value decomposition (SVD) and change their ranks according to the target size. In contrast with existing methods, we introduce simple criteria that characterize the importance of each basis and layer, which makes it possible to keep the error and complexity of compressed models as low as possible. In experiments on multiple image-classification tasks, our method exhibits favorable performance compared with other methods.""","""The paper presents a low-rank compression method for DNNs. This topic has been around for a while, so the contribution is limited. The Lebedev et al. paper at ICLR 2015 used CP factorization to compress neural networks for ImageNet classification; in 2019, an idea has to be really novel in order to be presented on CIFAR datasets. Latency is not analyzed. So, I agree with the reviewers."""
2009,"""Adversarial Attacks on Copyright Detection Systems""",[],"""It is well-known that many machine learning models are susceptible to adversarial attacks, in which an attacker evades a classifier by making small perturbations to inputs. This paper discusses how industrial copyright detection tools, which serve a central role on the web, are susceptible to adversarial attacks. We discuss a range of copyright detection systems, and why they are particularly vulnerable to attacks. These vulnerabilities are especially apparent for neural network based systems. As proof of concept, we describe a well-known music identification method and implement this system in the form of a neural net. We then attack this system using simple gradient methods. Adversarial music created this way successfully fools industrial systems, including the AudioTag copyright detector and YouTube's Content ID system. Our goal is to raise awareness of the threats posed by adversarial examples in this space and to highlight the importance of hardening copyright detection systems to attacks.""","""This paper shows a case study of an adversarial attack on a copyright detection system. The paper implements a music identification method with a simple convolutional neural network, and shows that it is possible to fool such a CNN with an adversarial attack. After the discussion period, two of the three reviewers inclined toward rejecting the paper. Although the majority of the reviewers agree that this is an interesting problem with an important application, they also find that many of their concerns remain unaddressed. These include the generality of the findings, as the current paper is more of a proof of concept that black/white-box attacks can work on copyright systems. The reviewers are also concerned that the technical solution/finding is not novel, as it is very similar to prior work in other domains (e.g., image classification). One reviewer was particularly concerned that a user study is missing, making it difficult to judge whether the quality of the modified audio is reasonable or not."""
2010,"""Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning""","['uncertainty', 'in-domain uncertainty', 'deep ensembles', 'ensemble learning', 'deep learning']","""Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight into this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of only a few independently trained networks in terms of test performance.""","""The paper points out pitfalls of existing metrics for in-domain uncertainty quantification, and also studies different strategies for ensembling techniques. The authors also satisfactorily addressed the reviewers' questions during the rebuttal phase. In the end, all the reviewers agreed that this is a valuable contribution and the paper deserves to be accepted. Nice work!"""
2011,"""Equilibrium Propagation with Continual Weight Updates""","['Biologically Plausible Neural Networks', 'Equilibrium Propagation']","""Equilibrium Propagation (EP) is a learning algorithm that bridges Machine Learning and Neuroscience, by computing gradients closely matching those of Backpropagation Through Time (BPTT), but with a learning rule local in space. Given an input x and associated target y, EP proceeds in two phases: in the first phase neurons evolve freely towards a first steady state; in the second phase output neurons are nudged towards y until they reach a second steady state. However, in existing implementations of EP, the learning rule is not local in time: the weight update is performed after the dynamics of the second phase have converged and requires information from the first phase that is no longer physically available. This is a major impediment to the biological plausibility of EP and its efficient hardware implementation. In this work, we propose a version of EP named Continual Equilibrium Propagation (C-EP) where neuron and synapse dynamics occur simultaneously throughout the second phase, so that the weight update becomes local in time. We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT (Theorem 1). We demonstrate training with C-EP on MNIST and generalize C-EP to neural networks where neurons are connected by asymmetric connections. We show through experiments that the more closely the network updates follow the gradients of BPTT, the better it performs in terms of training. These results bring EP a step closer to biology while maintaining its intimate link with backpropagation.""","""Main content: the paper introduces a new variant of the equilibrium propagation algorithm that continually updates the weights, making it unnecessary to save steady states. Summary of discussion: reviewer 1: likes the idea but points out many issues with the proofs. reviewer 2: really likes the novelty of the paper, but the review is not detailed, particularly in discussing pros/cons. reviewer 3: likes the ideas but has questions about the proofs, and also questions why MNIST is used as the evaluation task. Recommendation: interesting idea, but the writing/proofs could be clearer. Vote reject."""
2012,"""AdvectiveNet: An Eulerian-Lagrangian Fluidic Reservoir for Point Cloud Processing ""","['Point Cloud Processing', 'Physical Reservoir Learning', 'Eulerian-Lagrangian Method', 'PIC/FLIP']","""This paper presents a novel physics-inspired deep learning approach for point cloud processing motivated by the natural flow phenomena in fluid mechanics. Our learning architecture jointly defines data in an Eulerian world space, using a static background grid, and a Lagrangian material space, using moving particles. By introducing this Eulerian-Lagrangian representation, we are able to naturally evolve and accumulate particle features using flow velocities generated from a generalized, high-dimensional force field. We demonstrate the efficacy of this system by solving various point cloud classification and segmentation problems with state-of-the-art performance. The entire geometric reservoir and data flow mimic the pipeline of the classic PIC/FLIP scheme in modeling natural flow, bridging the disciplines of geometric machine learning and physical simulation.""","""This paper treats the task of point cloud learning as a dynamic advection problem in conjunction with a learned background velocity field. The resulting system, which bridges geometric machine learning and physical simulation, achieves promising performance on various classification and segmentation problems. Although the initial scores were mixed, all reviewers converged to acceptance after the rebuttal period. For example, a better network architecture, along with an improved interpolation stencil and initialization, leads to better performance (now rivaling the state of the art) compared to the original submission. This helps to mitigate an initial reviewer concern in terms of competitiveness with existing methods like PointCNN or SE-Net. Likewise, interesting new experiments such as PIC vs. FLIP were included."""
2013,"""Interpretable Network Structure for Modeling Contextual Dependency""","['Language Model', 'Recurrent Neural Network', 'Separation Rank']","""Neural language models have achieved great success in many NLP tasks, to a large extent due to their ability to capture contextual dependencies among terms in a text. While many efforts have been devoted to empirically explaining the connection between the network hyperparameters and the ability to represent contextual dependency, the theoretical analysis is relatively insufficient. Inspired by the recent research on the use of tensor space to explain the neural network architecture, we explore the interpretable mechanism for neural language models. Specifically, we define the concept of separation rank in the language modeling process, in order to theoretically measure the degree of contextual dependencies in a sentence. Then, we show that the lower bound of such a separation rank can reveal the quantitative relation between the network structure (e.g. depth/width) and the modeling ability for contextual dependency. In particular, increasing the depth of the neural network can be more effective in improving the ability to model contextual dependency. Therefore, it is important to design an adaptive network that computes the adaptive depth in a task. Inspired by Adaptive Computation Time (ACT), we design an adaptive recurrent network based on the separation rank to model contextual dependency. Experiments on various NLP tasks have verified the proposed theoretical analysis. We also test our adaptive recurrent neural network on the sentence classification task, and the experiments show that it can achieve better results than the traditional bidirectional LSTM.""","""This paper presents a theoretical interpretation of separation rank as a measure of a recurrent network's ability to capture contextual dependencies in text, and introduces a novel bidirectional variant that is tested on several NLP tasks to verify the analysis. Reviewer 3 found that the paper does not provide a clear description of the method and that a focus on a single message would have worked better. Reviewer 2 claimed several shortcomings in the paper relating to lack of clarity, limited details on the method, reliance on a 'false dichotomy', and failure to report performance. Reviewer 1 found the goals of the work interesting but the paper unclear, the proofs not rigorous enough, and the experiments lacking clarity. The authors responded to all the comments. The reviewers felt that their comments were still valid and did not adjust their ratings. Overall, the paper is not yet ready in its current form. We hope that the authors will find valuable feedback for their ongoing research."""
2014,"""Learning Effective Exploration Strategies For Contextual Bandits""","['meta-learning', 'contextual bandits', 'imitation learning']","""In contextual bandits, an algorithm must choose actions given observed contexts, learning from a reward signal that is observed only for the action chosen. This leads to an exploration/exploitation trade-off: the algorithm must balance taking actions it already believes are good with taking new actions to potentially discover better choices. We develop a meta-learning algorithm, MELEE, that learns an exploration policy based on simulated, synthetic contextual bandit tasks. MELEE uses imitation learning against these simulations to train an exploration policy that can be applied to true contextual bandit tasks at test time. We evaluate on both a natural contextual bandit problem derived from a learning-to-rank dataset and hundreds of simulated contextual bandit problems derived from classification tasks. MELEE outperforms seven strong baselines on most of these datasets by leveraging a rich feature representation for learning an exploration strategy.""","""This paper introduces MELEE, a meta-learning procedure for contextual bandits. In particular, MELEE learns how to explore by training on datasets with full information about the reward every action would obtain (e.g., using classification datasets). The idea is strongly related to imitation learning, and a regret bound is demonstrated for the procedure that comes from that literature. Experiments are performed. Perhaps due to the generality with which the algorithm was presented, reviewers found some parts of the work unintuitive and difficult to follow. The work may greatly benefit from having an explicit running example for F and pi and how they evolve during training. Some reviewers were not impressed by the experimental results relative to epsilon-greedy. Yes, epsilon-greedy is a strong baseline, but MELEE introduces significant technical debt and data infrastructure, so it seems fair to expect a sizable bump over epsilon-greedy; otherwise, why is it worth it? Perhaps with revisions and experiments within a domain that justify its complexity, this paper may be suitable for another venue. But it is not deemed acceptable at this time. Reject."""
2015,"""A Simple Approach to the Noisy Label Problem Through the Gambler's Loss""","['noisy labels', 'robust learning', 'early stopping', 'generalization']","""Learning in the presence of label noise is a challenging yet important task. It is crucial to design models that are robust to noisy labels. In this paper, we discover that a new class of loss functions called the gambler's loss provides strong robustness to label noise across various levels of corruption. Training with this modified loss function reduces memorization of data points with noisy labels and is a simple yet effective method to improve robustness and generalization. Moreover, using this loss function allows us to derive an analytical early stopping criterion that accurately estimates when memorization of noisy labels begins to occur. Our overall approach achieves strong results and outperforms existing baselines.""","""This paper focuses on mitigating the effect of label noise. The authors provide a new class of loss functions along with a new stopping criterion for this problem. The authors claim that these new losses improve the test accuracy in the presence of label corruption and help avoid memorization. The reviewers raised concerns about (1) the lack of proper comparison with many baselines, (2) a subpar literature review, and (3) vagueness in parts of the paper. The authors partially addressed these concerns and significantly updated the paper, including comparisons with some of the baselines. However, the reviewers were not fully satisfied with the new updates. I mostly agree with the reviewers. I think the paper has potential but requires a bit more work to be ready for publication, and I cannot recommend acceptance at this time. I have to say that the authors really put a lot of effort into their response and significantly improved their submission during the discussion period. I recommend the authors follow the reviewers' suggestions to further improve the paper (e.g. comparing with other baselines) for future submissions."""
2016,"""If MaxEnt RL is the Answer, What is the Question?""","['reinforcement learning', 'maximum entropy', 'POMDP']","""Experimentally, it has been observed that humans and animals often make decisions that do not maximize their expected utility, but rather choose outcomes randomly, with probability proportional to expected utility. Probability matching, as this strategy is called, is equivalent to maximum entropy reinforcement learning (MaxEnt RL). However, MaxEnt RL does not optimize expected utility. In this paper, we formally show that MaxEnt RL does optimally solve certain classes of control problems with variability in the reward function. In particular, we show (1) that MaxEnt RL can be used to solve a certain class of POMDPs, and (2) that MaxEnt RL is equivalent to a two-player game where an adversary chooses the reward function. These results suggest a deeper connection between MaxEnt RL, robust control, and POMDPs, and provide insight for the types of problems for which we might expect MaxEnt RL to produce effective solutions. Specifically, our results suggest that domains with uncertainty in the task goal may be especially well-suited for MaxEnt RL methods.""","""This paper studies maximum entropy reinforcement learning in more detail. Maximum entropy is a popular strategy in modern RL methods and also seems to be used in human and animal decision making. However, it does not optimize expected utility. The authors propose a setting in which maximum entropy RL is an optimal solution. The reviewers were quite split on the paper, and there has been an animated discussion among the reviewers and with the authors. The technical quality is good, although one reviewer commented on the restricted setting of the experiments (bandit problems). The authors have addressed this by adding an additional experiment. Furthermore, two reviewers commented that the clarity of the paper could be improved. A larger part of the discussion (also the private discussion) revolved around relevance and significance, especially of the meta-POMDP setting that takes up a large part of the manuscript. - A reviewer mentioned that after reading the paper, it does not become clearer why maximum entropy RL works well in practice. The discussion even turned to why MaxEnt RL might be *unreasonable* from the point of view of needing a meta-POMDP with Markov assumptions, which doesn't help shed light on its empirical success. The meta-POMDP setting does not seem to reflect the use cases where maximum entropy RL has done well in empirical studies. - Another reviewer mentioned that earlier papers have investigated maximum entropy RL, and that the paper tries to offer a new perspective with the meta-POMDP setting. The discussion of this setting was not deemed complete in its current state and needs more attention (splitting the paper into two along these lines is a possibility mooted by two of the reviewers). A particular case was the doctor-patient example, where in the meta-POMDP setting the doctor would repeatedly attempt to cure a fixed sampled illness, rather than e.g. solving for a new illness each time. Based on the discussion, I would conclude that the topic broached by the paper is very relevant and timely; however, the paper would benefit from a round of major revision and resubmission rather than being accepted to ICLR in its current form."""
2017,"""Federated User Representation Learning""","['Machine Learning', 'Federated Learning', 'Personalization', 'User Representation']","""Collaborative personalization, such as through learned user representations (embeddings), can improve the prediction accuracy of neural-network-based models significantly. We propose Federated User Representation Learning (FURL), a simple, scalable, privacy-preserving and resource-efficient way to utilize existing neural personalization techniques in the Federated Learning (FL) setting. FURL divides model parameters into federated and private parameters. Private parameters, such as private user embeddings, are trained locally, but unlike federated parameters, they are not transferred to or averaged on the server. We show theoretically that this parameter split does not affect training for most model personalization approaches. Storing user embeddings locally not only preserves user privacy, but also improves memory locality of personalization compared to on-server training. We evaluate FURL on two datasets, demonstrating a significant improvement in model quality with 8% and 51% performance increases, and approximately the same level of performance as centralized training with only 0% and 4% reductions. Furthermore, we show that user embeddings learned in FL and the centralized setting have a very similar structure, indicating that FURL can learn collaboratively through the shared parameters while preserving user privacy.""","""This manuscript proposes personalization techniques to improve the scalability and privacy preservation of federated learning. Empirical results are provided which suggest improved performance. The reviewers and AC agree that the problem studied is timely and interesting, as the tradeoffs between personalization and performance are a known concern in federated learning. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty and clarity of the conceptual and empirical results. Reviewers were also unconvinced by the provided empirical evaluation results."""
2018,"""When Does Self-supervision Improve Few-shot Learning?""","['Few-shot learning', 'Self-supervised learning', 'Meta-learning', 'Multi-task learning']","""We present a technique to improve the generalization of deep representations learned on small labeled datasets by introducing self-supervised tasks as auxiliary loss functions. Although recent research has shown benefits of self-supervised learning (SSL) on large unlabeled datasets, its utility on small datasets is unknown. We find that SSL reduces the relative error rate of few-shot meta-learners by 4%-27%, even when the datasets are small and only images within the datasets are utilized. The improvements are greater when the training set is smaller or the task is more challenging. Though the benefits of SSL may increase with larger training sets, we observe that SSL can have a negative impact on performance when there is a domain shift between the distribution of images used for meta-learning and that used for SSL. Based on this analysis, we present a technique that automatically selects images for SSL from a large, generic pool of unlabeled images for a given dataset using a domain classifier, which provides further improvements. We present results using several meta-learners and self-supervised tasks across datasets with varying degrees of domain shift and label sizes to characterize the effectiveness of SSL for few-shot learning.""","""Reviewers agree that this paper contains interesting results and simple but good ideas. However, a few severe concerns were raised by the reviewers. The most prominent one was the experimental setup: the authors use a pretrained ResNet101 (which has seen many classes of ImageNet) for testing, which makes it unclear how well the proposed method would work for an unlabeled pool of data that the classifier has never seen. While the authors claim that the dataset used for testing was disjoint from ImageNet, a reviewer pointed out that both the dogs and birds datasets state that they overlap with ImageNet. A few other concerns were raised (the need for a more meaningful metric in Figure 4d, which wasn't addressed in the rebuttal). We look forward to seeing an improved version of the paper in future submissions."""
2019,"""Pixel Co-Occurence Based Loss Metrics for Super Resolution Texture Recovery""","['Super Resolution Generative Adversarial Networks', 'Perceptual Loss Functions']","""Single Image Super Resolution (SISR) has significantly improved with Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), often achieving order of magnitude better pixelwise accuracies (distortions) and state-of-the-art perceptual accuracy. Due to the stochastic nature of GAN reconstruction and the ill-posed nature of the problem, perceptual accuracy tends to correlate inversely with pixelwise accuracy, which is especially detrimental to SISR, where preservation of original content is an objective. GAN stochastics can be guided by intermediate loss functions such as the VGG featurewise loss, but these features are typically derived from biased pre-trained networks. Similarly, measurements of perceptual quality such as the human Mean Opinion Score (MOS) and no-reference measures have issues with pre-trained bias. The spatial relationships between pixel values can be measured without bias using the Grey Level Co-occurrence Matrix (GLCM), which was found to match the cardinality and comparative value of the MOS while reducing subjectivity and automating the analytical process. In this work, the GLCM is also directly used as a loss function to guide the generation of perceptually accurate images based on the spatial collocation of pixel values. We compare GLCM-based loss against scenarios where (1) no intermediate guiding loss function and (2) the VGG feature function are used. Experimental validation is carried out on X-ray images of rock samples, characterised by a significant number of high-frequency texture features. We find GLCM-based loss to result in images with higher pixelwise accuracy and better perceptual scores.""","""This paper proposes to use the grey level co-occurrence matrix (GLCM) as both a performance evaluation metric and an auxiliary loss function for single image super resolution. Experiments are conducted on X-ray images of rock samples. Three reviewers provided comments: two rated reject while one rated weak reject. The major concerns include the lack of a clear and detailed description, low novelty, limited experiments on only one database, and unconvincing improvement over prior work. The authors agree that the limited experiments on one database do not demonstrate the generalization capability of the proposed method. The AC agrees with the reviewers' comments and recommends rejection."""
2020,"""Tensor Decompositions for Temporal Knowledge Base Completion""","['knowledge base completion', 'temporal embeddings']","""Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries of the form (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods. ""","""The authors propose a new algorithm based on tensor decompositions for the problem of knowledge base completion. They also introduce new regularisers to augment their method. They also propose an new dataset for temporal KB completion. All the reviews agreed that the paper addresses an important problem and presents interesting results. The authors diligently responded to reviewer queries and addressed most of the concerns raised by the reviewers. Since all the reviewers are in agreement, I recommend that this paper be accepted. """
2021,"""Retrospection: Leveraging the Past for Efficient Training of Deep Neural Networks""","['Deep Neural Networks', 'Supervised Learning', 'Classification', 'Training Strategy', 'Generative Adversarial Networks', 'Convolutional Neural Networks']","""Deep neural networks are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce retrospection loss to improve the performance of neural networks by utilizing prior experiences during training. Minimizing the retrospection loss pushes the parameter state at the current training step towards the optimal parameter state while pulling it away from the parameter state at a previous training step. We conduct extensive experiments to show that the proposed retrospection loss results in improved performance across multiple tasks, input types and network architectures.""","""This paper introduces a further regularizer, retrospection loss, for training neural networks, which leverages past parameter states. The authors added several ablation studies and extra experiments during the rebuttal, which are helpful to show that their method is useful. However, this is still one of those papers that essentially proposes an additional heuristic to train deep news, which is helpful but not clearly motivated from a theoretical point of view (despite the intuitions). Yes, it provides improvements across tasks but these are all relatively small, and the method is more involved. Therefore, I am recommending rejection."""
2022,"""Informed Temporal Modeling via Logical Specification of Factorial LSTMs""","['factorized LSTM', 'temporal point process', 'event streams', 'structural bias', 'Datalog']","""Consider a world in which events occur that involve various entities. Learning how to predict future events from patterns of past events becomes more difficult as we consider more types of events. Many of the patterns detected in the dataset by an ordinary LSTM will be spurious since the number of potential pairwise correlations, for example, grows quadratically with the number of events. We propose a type of factorial LSTM architecture where different blocks of LSTM cells are responsible for capturing different aspects of the world state. We use Datalog rules to specify how to derive the LSTM structure from a database of facts about the entities in the world. This is analogous to how a probabilistic relational model (Getoor & Taskar, 2007) specifies a recipe for deriving a graphical model structure from a database. In both cases, the goal is to obtain useful inductive biases by encoding informed independence assumptions into the model. We specifically consider the neural Hawkes process, which uses an LSTM to modulate the rate of instantaneous events in continuous time. In both synthetic and real-world domains, we show that we obtain better generalization by using appropriate factorial designs specified by simple Datalog programs. ""","""While reviewers find this paper interesting, they raised number of concerns including the novelty, writing, experiments, references and clear mention of the benefit. Unfortunately, excellent questions and insightful comments left by reviewers are gone without authors answers. """
2023,"""Aggregating explanation methods for neural networks stabilizes explanations""","['explainability', 'deep learning', 'interpretability', 'XAI']",""" Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation. Our contributions in this paper are twofold. First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation. The aggregation is more robust and aligns better with the neural network than any single explanation method.. Second, we propose a new approach to evaluating explanation methods that circumvents the need for manual evaluation and is not reliant on the alignment of neural networks and humans decision processes. ""","""This paper describes a new method for explaining the predictions of a CNN on a particular image. The method is based on aggregating the explanations of several methods. They also describe a new method of evaluating explanation methods which avoids manual evaluation of the explanations. However, the most critical reviewer questions the contribution of the proposed method, which is simple. Simple isn't always a bad thing, but I think here the reviewer has a point. The new method for evaluating explanation methods is interesting, but the sample images given are also very simple -- how does the method work when the image is cluttered? How about when the prediction is uncertain or wrong?"""
2024,"""Measuring causal influence with back-to-back regression: the linear case""",[],"""Identifying causes from observations can be particularly challenging when i) potential factors are difficult to manipulate individually and ii) observations are complex and multi-dimensional. To address this issue, we introduce Back-to-Back regression (B2B), a method designed to efficiently measure, from a set of co-varying factors, the causal influences that most plausibly account for multidimensional observations. After proving the consistency of B2B and its links to other linear approaches, we show that our method outperforms least-squares regression and cross-decomposition techniques (e.g. canonical correlation analysis and partial least squares) on causal identification. Finally, we apply B2B to neuroimaging recordings of 102 subjects reading word sequences. The results show that the early and late brain representations, caused by low- and high-level word features respectively, are more reliably detected with B2B than with other standard techniques. ""","""The authors introduce a method for disentangling effects of correlated predictors in the context of high dimensional outcomes. While the paper contains interesting ideas and has been substantially improved from its original form, the paper still does not meet the quality bar of ICLR due to its limitations in terms of limited applicability and experiments. The paper will benefit from a revision and resubmission to another venue."""
2025,"""TED: A Pretrained Unsupervised Summarization Model with Theme Modeling and Denoising""","['text summarization', 'unsupervised learning', 'natural language processing']","""Text summarization aims to extract essential information from a piece of text and transform it into a concise version. Existing unsupervised abstractive summarization models use recurrent neural networks framework and ignore abundant unlabeled corpora resources. In order to address these issues, we propose TED, a transformer-based unsupervised summarization system with dataset-agnostic pretraining. We first leverage the lead bias in news articles to pretrain the model on large-scale corpora. Then, we finetune TED on target domains through theme modeling and a denoising autoencoder to enhance the quality of summaries. Notably, TED outperforms all unsupervised abstractive baselines on NYT, CNN/DM and English Gigaword datasets with various document styles. Further analysis shows that the summaries generated by TED are abstractive and containing even higher proportions of novel tokens than those from supervised models.""","""This paper proposes an abstractive text summarization model that takes advantage of lead bias for pretraining on unlabeled corpora and a combination of reconstruction and theme modeling loss for finetuning. Experiments on NYT, CNN/DM, and Gigaword datasets demonstrate the benefit of the proposed approach. I think this is an interesting paper and the results are reasonably convincing. My only concern is regarding a parallel submission that contains a significant overlap in terms contributions, as originally pointed out by R2 (pseudo-url). All of us had an internal discussion regarding this submission and agree that if the lead bias is considered a contribution of another paper this paper is not strong enough. Due to space constraint and the above concern, along with the issue that the two submissions contain a significant overlap in terms of authors as well, I recommend to reject this paper."""
2026,"""ASGen: Answer-containing Sentence Generation to Pre-Train Question Generator for Scale-up Data in Question Answering""","['Question Answering', 'Machine Reading Comprehension', 'Data Augmentation', 'Question Generation', 'Answer Generation']","""Numerous machine reading comprehension (MRC) datasets often involve manual annotation, requiring enormous human effort, and hence the size of the dataset remains significantly smaller than the size of the data available for unsupervised learning. Recently, researchers proposed a model for generating synthetic question-and-answer data from large corpora such as Wikipedia. This model is utilized to generate synthetic data for training an MRC model before fine-tuning it using the original MRC dataset. This technique shows better performance than other general pre-training techniques such as language modeling, because the characteristics of the generated data are similar to those of the downstream MRC data. However, it is difficult to have high-quality synthetic data comparable to human-annotated MRC datasets. To address this issue, we propose Answer-containing Sentence Generation (ASGen), a novel pre-training method for generating synthetic data involving two advanced techniques, (1) dynamically determining K answers and (2) pre-training the question generator on the answer-containing sentence generation task. We evaluate the question generation capability of our method by comparing the BLEU score with existing methods and test our method by fine-tuning the MRC model on the downstream MRC data after training on synthetic data. Experimental results show that our approach outperforms existing generation methods and increases the performance of the state-of-the-art MRC models across a range of MRC datasets such as SQuAD-v1.1, SQuAD-v2.0, KorQuAD and QUASAR-T without any architectural modifications to the original MRC model.""","""Thanks for an interesting discussion. The paper introduces a sound question generation technique for QA. Reviewers are moderately positive, with low confidence. Some issues remain unresolved, though: While the UniLM comparison is currently not apples-to-apples, for example, nothing prevents the authors from using their method to pretrain UniLM. Currently, QA results are low-ish, and it is hard to accept a paper based solely on BLEU scores (questionable metric) for question generation (the task is but a means to an end). Moreover, the authors do not really discuss how their method relates to previous work (see Review 2 and the related work cited there; there's more, e.g., [0]). I also find it a little problematic that the paper completely ignores all work prior to 2017: The NLP community started organizing workshops on question generation in 2010. [1]"""
2027,"""Abstractive Dialog Summarization with Semantic Scaffolds""","['Abstractive Summarization', 'Dialog', 'Multi-task Learning']","""The demand for abstractive dialog summary is growing in real-world applications. For example, customer service center or hospitals would like to summarize customer service interaction and doctor-patient interaction. However, few researchers explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: the informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose Scaffold Pointer Network (SPNet) to utilize the existing annotation on speaker role, semantic slot and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics.""","""This paper proposes an approach for abstractive summarization of multi-domain dialogs, called SPNet, that incrementally builds on previous approaches such as pointer-generator networks. SPNet also separately includes speaker role, slot and domain labels, and is evaluated against a new metric, Critical Information Completeness (CIC), to tackle issues with ROUGE. The reviewers suggested a set of issues, including the meaningfulness of the task, incremental nature of the work and lack of novelty, and consistency issues in the write up. Unfortunately authors did not respond to the reviewer comments. I suggest rejecting the paper."""
2028,"""ADA+: A GENERIC FRAMEWORK WITH MORE ADAPTIVE EXPLICIT ADJUSTMENT FOR LEARNING RATE""","['Optimization', 'Adaptive Methods', 'Convergence', 'Convolutional Neural Network']","""Although adaptive algorithms have achieved significant success in training deep neural networks with faster training speed, they tend to have poor generalization performance compared to SGD with Momentum(SGDM). One of the state-of-the-art algorithms, PADAM, is proposed to close the generalization gap of adaptive methods while lacking an internal explanation. This work pro- poses a general framework, in which we use an explicit function () as an adjustment to the actual step size, and present a more adaptive specific form AdaPlus(Ada+). Based on this framework, we analyze various behaviors brought by different types of (), such as a constant function in SGDM, a linear function in Adam, a concave function in Padam and a concave function with offset term in AdaPlus. Empirically, we conduct experiments on classic benchmarks both in CNN and RNN architectures and achieve better performance(even than SGDM). ""","""In this paper, the authors proposed a general framework, which uses an explicit function as an adjustment to the actual learning rate, and presented a more adaptive specific form Ada+. Based on this framework, they analyzed various behaviors brought by different types of the function. Empirical experiments on benchmarks demonstrate better performance than some baseline algorithms. The main concern of this paper is: (1) lack of justification or interpretation for the proposed framework; (2) the performance of the proposed algorithm is on a par with Padam; (3) missing comparison with some other baselines on more benchmark datasets. Plus, the authors did not submit response. I agree with the reviewers evaluation."""
2029,"""Gaussian Process Meta-Representations Of Neural Networks""","['Bayesian Neural Networks', 'Representation Learning', 'Gaussian Processes', 'Variational Inference']","""Bayesian inference offers a theoretically grounded and general way to train neural networks and can potentially give calibrated uncertainty. It is, however, challenging to specify a meaningful and tractable prior over the network parameters. More crucially, many existing inference methods assume mean-field approximate posteriors, ignoring interactions between parameters in high-dimensional weight space. To this end, this paper introduces two innovations: (i) a Gaussian process-based hierarchical model for the network parameters based on recently introduced unit embeddings that can flexibly encode weight structures, and (ii) input-dependent contextual variables for the weight prior that can provide convenient ways to regularize the function space being modeled by the NN through the use of kernels. Furthermore, we develop an efficient structured variational inference scheme that alleviates the need to perform inference in the weight space whilst retaining and learning non-trivial correlations between network parameters. We show these models provide desirable test-time uncertainty estimates, demonstrate cases of modeling inductive biases for neural networks with kernels and demonstrate competitive predictive performance of the proposed model and algorithm over alternative approaches on a range of classification and active learning tasks.""","""The authors propose an approach to Bayesian deep learning, by representing neural network weights as latent variables mapped through a Kronecker factored Gaussian process. The ideas have merit and are well-motivated. Reviewers were primarily concerned by the experimental validation, and lack of discussion and comparisons with related work. After the rebuttal, reviewers still expressed concern regarding both points, with no reviewer championing the work. One reviewer writes: ""I have read the authors' rebuttal. I still have reservation regarding the gain of a GP over an NN in my original review and I do not think the authors have addressed this very convincingly -- while I agree that in general, sparse GP can match the performance of GP with a sufficiently large number of inducing inputs, the proposed method also incurs extra approximations so arguing for the advantage of the proposed method in term of the accurate approximate inference of sparse GP seems problematic."" Another reviewer points out that the comment in the author rebuttal about Kronecker factored methods (Saatci, 2011) for non-Gaussian likelihoods and with variational inference being an open question is not accurate: SV-DKL (pseudo-url) and other approaches (pseudo-url) were specifically designed to address this question, and are implemented in popular packages. Moreover, there is highly relevant additional work on latent variable representations for neural network weights, inducing priors on p(w) through p(z), which is not discussed or compared against (pseudo-url, pseudo-url). The revision only includes a minor consideration of DKL in the appendix. While the ideas in the paper are promising, and the generally thoughtful exchanges were appreciated, there is clearly related work that should be discussed in the main text, with appropriate comparisons. 
With reviewers expressing additional reservations after rebuttal, and the lack of a clear champion, the paper would benefit from significant revisions in these directions. Note: In the text, it says: ""However, obtaining p(w|D) and p(D) exactly is intractable when N is large or when the network is large and as such, approximation methods are often required."" One cannot exactly obtain p(D), or the predictive distribution, regardless of N or the size of the network; exact inference is intractable because the relevant integrals cannot be expressed in closed form, since the parameters are mapped through non-linearities, in addition to typically non-Gaussian likelihoods."""
2030,"""LEARNING TO LEARN WITH BETTER CONVERGENCE""",[],"""We consider the learning to learn problem, where the goal is to leverage deeplearning models to automatically learn (iterative) optimization algorithms for training machine learning models. A natural way to tackle this problem is to replace the human-designed optimizer by an LSTM network and train the parameters on some simple optimization problems (Andrychowicz et al., 2016). Despite their success compared to traditional optimizers such as SGD on a short horizon, theselearnt (meta-) optimizers suffer from two key deficiencies: they fail to converge(or can even diverge) on a longer horizon (e.g., 10000 steps). They also often fail to generalize to new tasks. To address the convergence problem, we rethink the architecture design of the meta-optimizer and develop an embarrassingly simple,yet powerful form of meta-optimizersa coordinate-wise RNN model. We provide insights into the problems with the previous designs of each component and re-design our SimpleOptimizer to resolve those issues. Furthermore, we propose anew mechanism to allow information sharing between coordinates which enables the meta-optimizer to exploit second-order information with negligible overhead.With these designs, our proposed SimpleOptimizer outperforms previous meta-optimizers and can successfully converge to optimal solutions in the long run.Furthermore, our empirical results show that these benefits can be obtained with much smaller models compared to the previous ones.""","""This paper proposes an improved (over Andrychowicz et al) meta-optimizer that tries to to learn better strategies for training deep machine learning models. The paper was reviewed by three experts, two of whom recommend Weak Reject and one who recommends Reject. The reviewers identify a number of significant concerns, including degree of novelty and contribution, connections to previous work, completeness of experiments, and comparisons to baselines. In light of these reviews and since the authors have unfortunately not provided a response to them, we cannot recommend accepting the paper."""
2031,"""Bayesian Residual Policy Optimization: Scalable Bayesian Reinforcement Learning with Clairvoyant Experts""","['Bayesian Residual Reinforcement Learning', 'Residual Reinforcement Learning', 'Bayes Policy Optimization']","""Informed and robust decision making in the face of uncertainty is critical for robots that perform physical tasks alongside people. We formulate this as a Bayesian Reinforcement Learning problem over latent Markov Decision Processes (MDPs). While Bayes-optimality is theoretically the gold standard, existing algorithms do not scale well to continuous state and action spaces. We propose a scalable solution that builds on the following insight: in the absence of uncertainty, each latent MDP is easier to solve. We split the challenge into two simpler components. First, we obtain an ensemble of clairvoyant experts and fuse their advice to compute a baseline policy. Second, we train a Bayesian residual policy to improve upon the ensemble's recommendation and learn to reduce uncertainty. Our algorithm, Bayesian Residual Policy Optimization (BRPO), imports the scalability of policy gradient methods as well as the initialization from prior models. BRPO significantly improves the ensemble of experts and drastically outperforms existing adaptive RL methods.""","""This paper constitutes interesting progress on an important topic; the reviewers identify certain improvements and directions for future work (see in particular the updates from AnonReviewer1), and I urge the authors to continue to develop refinements and extensions."""
2032,"""Self-Educated Language Agent with Hindsight Experience Replay for Instruction Following""","['Language', 'reinforcement learning', 'instruction following', 'Hindsight Experience Replay']","""Language creates a compact representation of the world and allows the description of unlimited situations and objectives through compositionality. These properties make it a natural fit to guide the training of interactive agents as it could ease recurrent challenges in Reinforcement Learning such as sample complexity, generalization, or multi-tasking. Yet, it remains an open-problem to relate language and RL in even simple instruction following scenarios. Current methods rely on expert demonstrations, auxiliary losses, or inductive biases in neural architectures. In this paper, we propose an orthogonal approach called Textual Hindsight Experience Replay (THER) that extends the Hindsight Experience Replay approach to the language setting. Whenever the agent does not fulfill its instruction, THER learn to output a new directive that matches the agent trajectory, and it relabels the episode with a positive reward. To do so, THER learns to map a state into an instruction by using past successful trajectories, which removes the need to have external expert interventions to relabel episodes as in vanilla HER. We observe that this simple idea also initiates a learning synergy between language acquisition and policy learning on instruction following tasks in the BabyAI environment. ""","""Two reviewers are borderline and one recommends rejection. The main criticism is the simplicity of language, scalability to a more complex problem, and questions about experiments. Due to the lack of stronger support, the paper cannot be accepted at this point. The authors are encouraged to address the reviewer's comments and resubmit to a future conference."""
2033,"""Quantum Algorithms for Deep Convolutional Neural Networks""","['quantum computing', 'quantum machine learning', 'convolutional neural network', 'theory', 'algorithm']","""Quantum computing is a powerful computational paradigm with applications in several fields, including machine learning. In the last decade, deep learning, and in particular Convolutional Neural Networks (CNN), have become essential for applications in signal processing and image recognition. Quantum deep learning, however, remains a challenging problem, as it is difficult to implement non linearities with quantum unitaries. In this paper we propose a quantum algorithm for evaluating and training deep convolutional neural networks with potential speedups over classical CNNs for both the forward and backward passes. The quantum CNN (QCNN) reproduces completely the outputs of the classical CNN and allows for non linearities and pooling operations. The QCNN is in particular interesting for deep networks and could allow new frontiers in the image recognition domain, by allowing for many more convolution kernels, larger kernels, high dimensional inputs and high depth input channels. We also present numerical simulations for the classification of the MNIST dataset to provide practical evidence for the efficiency of the QCNN.""","""Four reviewers have assessed this paper and they have scored it as 6/6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission. Especially, the authors should take care to make this paper accessible (understandable) to the ML community as ICLR is a ML venue (rather than quantum physics one). Failure to do so will likely discourage the generosity of reviewers toward this type of submissions in the future."""
2034,"""{COMPANYNAME}11K: An Unsupervised Representation Learning Dataset for Arrhythmia Subtype Discovery""","['representation learning', 'healthcare', 'medical', 'clinical', 'dataset', 'ecg', 'cardiology', 'heart', 'discovery', 'anomaly detection', 'out of distribution']","""We release the largest public ECG dataset of continuous raw signals for representation learning containing over 11k patients and 2 billion labelled beats. Our goal is to enable semi-supervised ECG models to be made as well as to discover unknown subtypes of arrhythmia and anomalous ECG signal events. To this end, we propose an unsupervised representation learning task, evaluated in a semi-supervised fashion. We provide a set of baselines for different feature extractors that can be built upon. Additionally, we perform qualitative evaluations on results from PCA embeddings, where we identify some clustering of known subtypes indicating the potential for representation learning in arrhythmia sub-type discovery.""","""This paper introduces a new ECG dataset. While I appreciate the efforts to clarify several points raised by the reviewers, I still believe this contribution to be of limited interest to the broad ICLR community. As such, I suggest this paper to be submitted to a more specialised venue."""
2035,"""Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension""","['neural symbolic', 'reading comprehension', 'question answering']","""Integrating distributed representations with symbolic operations is essential for reading comprehension requiring complex reasoning, such as counting, sorting and arithmetics, but most existing approaches are hard to scale to more domains or more complex reasoning. In this work, we propose the Neural Symbolic Reader (NeRd), which includes a reader, e.g., BERT, to encode the passage and question, and a programmer, e.g., LSTM, to generate a program that is executed to produce the answer. Compared to previous works, NeRd is more scalable in two aspects: (1) domain-agnostic, i.e., the same neural architecture works for different domains; (2) compositional, i.e., when needed, complex programs can be generated by recursively applying the predefined operators, which become executable and interpretable representations for more complex reasoning. Furthermore, to overcome the challenge of training NeRd with weak supervision, we apply data augmentation techniques and hard Expectation-Maximization (EM) with thresholding. On DROP, a challenging reading comprehension dataset that requires discrete reasoning, NeRd achieves 1.37%/1.18% absolute improvement over the state-of-the-art on EM/F1 metrics. With the same architecture, NeRd significantly outperforms the baselines on MathQA, a math problem benchmark that requires multiple steps of reasoning, by 25.5% absolute increment on accuracy when trained on all the annotated programs. More importantly, NeRd still beats the baselines even when only 20% of the program annotations are given.""","""Main content: Blind review #1 summarizes it well: This paper presents a semantic parser that operates over passages of text instead of a structured data source. This is the first time anyone has demonstrated such a semantic parser (Siva Reddy and several others have essentially used unstructured text as an information source for a semantic parser, similar to OpenIE methods, but this is qualitatively different). The key insight is to let the semantic parser point to locations in the text that can be used in further symbolic operations. This is excellent work, and it should definitely be accepted. I have a ton of questions about this method, but they are good questions. -- Discussion: The reviews all agree on a generally positive assessment, and focus on details that have been addressed, rather than major problems. -- Recommendation and justification: This paper should be accepted. Even though novelty in terms of fundamental machine learning components is minimal, but the architecture employing neural models to do symbolic work is a good contribution in a crucial direction (especially in the theme of ICLR)."""
2036,"""Accelerating SGD with momentum for over-parameterized learning""","['SGD', 'acceleration', 'momentum', 'stochastic', 'over-parameterized', 'Nesterov']",""" Nesterov SGD is widely used for training modern neural networks and other machine learning models. Yet, its advantages over SGD have not been theoretically clarified. Indeed, as we show in this paper, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleration over ordinary SGD. Furthermore, Nesterov SGD may diverge for step sizes that ensure convergence of ordinary SGD. This is in contrast to the classical results in the deterministic setting, where the same step size ensures accelerated convergence of the Nesterov's method over optimal gradient descent. To address the non-acceleration issue, we introduce a compensation term to Nesterov SGD. The resulting algorithm, which we call MaSS, converges for same step sizes as SGD. We prove that MaSS obtains an accelerated convergence rates over SGD for any mini-batch size in the linear setting. For full batch, the convergence rate of MaSS matches the well-known accelerated rate of the Nesterov's method. We also analyze the practically important question of the dependence of the convergence rate and optimal hyper-parameters on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns and saturation. Experimental evaluation of MaSS for several standard architectures of deep networks, including ResNet and convolutional networks, shows improved performance over SGD, Nesterov SGD and Adam. ""","""The authors provide an empirical and theoretical exploration of Nesterov momentum, particularly in the over-parametrized settings. Nesterov momentum has attracted great interest at various times in deep learning, but its properties and practical utility are not well understood. This paper makes an important step towards shedding some light on this approach for training models with a large number of parameters. """
2037,"""S-Flow GAN""","['GAN', 'Image Generation', 'AI', 'Generative Models', 'CV']","""Our work offers a new method for domain translation from semantic label maps and Computer Graphic (CG) simulation edge map images to photo-realistic im- ages. We train a Generative Adversarial Network (GAN) in a conditional way to generate a photo-realistic version of a given CG scene. Existing architectures of GANs still lack the photo-realism capabilities needed to train DNNs for computer vision tasks, we address this issue by embedding edge maps, and training it in an adversarial mode. We also offer an extension to our model that uses our GAN architecture to create visually appealing and temporally coherent videos.""","""The submission proposes a new GAN-based method for translating from semantic maps of (synthetic) images/videos (from computer graphics) to photo-realistic images/videos with the aid of edge maps. The main innovation is the inclusion of edge maps to the generator, where the edge maps are initially computed using the spatial Laplacian operator, and later output from their DNED network. According to the authors, the edge map allows them to generate images with fine details and to generate output images at higher resolutions. The authors use their method to generate both single images as well as videos. The submission received relatively low scores (2 rejects and 1 weak reject). This was unchanged after the rebuttal (the authors did not submit a revised version of their paper). The reviewers voiced concerns about the following: 1. Limited novelty All of the reviewers indicated that they felt the novelty of the proposed approach of not high as the work seemed to make only small modifications on prior work. In the author response, the authors provided some details on where they felt their innovation to be. The paper can be improved by building on those and having experiments/examples to probe those claims in more detail. 2. Application to other datasets The proposed method is demonstrated only on two datasets of driving scenarios (Cityscapes and Synthia). It is unclear how the method will generalize to other types of inputs. Experiments on other datasets will demonstrate whether the proposed approach can work well for other types of images. 3. The overall quality of the writing. The overall quality of the writing is poor and hard to follow in places. The paper should also include more discussion of domain adaptation in the related work section. It's possible that with improved writing that situates the work and explains the novel aspects of the work better, that the concern about limited novelty will be partially alleviated. The paper also needs an editing pass as there are many grammar/spelling/capitalization issues. Page 2: ""We make Three"" --> ""We make three"" Page 3: ""as can bee seen in fig 3.2"" --> ""as can be seen in ..."" (it's unclear which figure ""fig 3.2"" refers to, as figures are labeled Figure 1, Figure 2, etc) Page 4: Equation (3), symbol e is not explained (it is presumably the edge map) Page 7: ""bellow"" --> ""below"" Overall, there are interesting elements in this paper and the reviewers noted that the generated results look good. However, the paper will need to be improved considerably. The authors are encouraged to improve their work and submit to an appropriate venue. """
2038,"""ALBERT: A Lite BERT for Self-supervised Learning of Language Representations""","['Natural Language Processing', 'BERT', 'Representation Learning']","""Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT~\citep{devlin2018bert}. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and \squad benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at pseudo-url.""","""This paper proposes three modifications of BERT type models two of which is concerned with parameter sharing and one with a new auxiliary loss. New SOTA on downstream tasks are demonstrated. All reviewers liked the paper and so did a lot of comments. Acceptance is recommended."""
2039,"""Reinforced active learning for image segmentation""","['semantic segmentation', 'active learning', 'reinforcement learning']","""Learning-based approaches for semantic segmentation have two inherent challenges. First, acquiring pixel-wise labels is expensive and time-consuming. Second, realistic segmentation datasets are highly unbalanced: some categories are much more abundant than others, biasing the performance to the most represented ones. In this paper, we are interested in focusing human labelling effort on a small subset of a larger pool of data, minimizing this effort while maximizing performance of a segmentation model on a hold-out set. We present a new active learning strategy for semantic segmentation based on deep reinforcement learning (RL). An agent learns a policy to select a subset of small informative image regions -- opposed to entire images -- to be labeled, from a pool of unlabeled data. The region selection decision is made based on predictions and uncertainties of the segmentation model being trained. Our method proposes a new modification of the deep Q-network (DQN) formulation for active learning, adapting it to the large-scale nature of semantic segmentation problems. We test the proof of concept in CamVid and provide results in the large-scale dataset Cityscapes. On Cityscapes, our deep RL region-based DQN approach requires roughly 30% less additional labeled data than our most competitive baseline to reach the same performance. Moreover, we find that our method asks for more labels of under-represented categories compared to the baselines, improving their performance and helping to mitigate class imbalance.""","""Authors propose a novel scheme to perform active learning on image segmentation. This structured task is highly time consuming for humans to perform and challenging to model theoretically as to potentially apply existing active learning methods. Reviewers have remaining concerns over computation and that the empirical evaluation is not overwhelming (e.g., more comparisons). Nevertheless, the paper appears to bring new ideas to the table for this important problem. """
2040,"""A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning""","['Machine learning security', 'Transfer learning', 'deep learning security', 'Softmax Vulnerability', 'Transfer learning Security']","""Due to insufficient training data and the high computational cost to train a deep neural network from scratch, transfer learning has been extensively used in many deep-neural-network-based applications. A commonly used transfer learning approach involves taking a part of a pre-trained model, adding a few layers at the end, and re-training the new layers with a small dataset. This approach, while efficient and widely used, imposes a security vulnerability because the pre-trained model used in transfer learning is usually publicly available, including to potential attackers. In this paper, we show that without any additional knowledge other than the pre-trained model, an attacker can launch an effective and efficient brute force attack that can craft instances of input to trigger each target class with high confidence. We assume that the attacker has no access to any target-specific information, including samples from target classes, re-trained model, and probabilities assigned by Softmax to each class, and thus making the attack target-agnostic. These assumptions render all previous attack models inapplicable, to the best of our knowledge. To evaluate the proposed attack, we perform a set of experiments on face recognition and speech recognition tasks and show the effectiveness of the attack. Our work reveals a fundamental security weakness of the Softmax layer when used in transfer learning settings.""","""The reviewers were generally in agreement that the paper presents a valuable contribution and should be accepted for publication. However, I would strongly encourage the authors to carefully read over the reviews and address the suggestions and concerns insofar as possible for the final."""
2041,"""A Stochastic Trust Region Method for Non-convex Minimization""",[],"""We target the problem of finding a local minimum in non-convex finite-sum minimization. Towards this goal, we first prove that the trust region method with inexact gradient and Hessian estimation can achieve a convergence rate of order pseudo-formula as long as those differential estimations are sufficiently accurate. Combining such result with a novel Hessian estimator, we propose a sample-efficient stochastic trust region (STR) algorithm which finds an \sqrt{\epsilon}) local minimum within pseudo-formula stochastic Hessian oracle queries. This improves the state-of-the-art result by a factor of pseudo-formula . Finally, we also develop Hessian-free STR algorithms which achieve the lowest runtime complexity. Experiments verify theoretical conclusions and the efficiency of the proposed algorithms.""","""This paper proposes a stochastic trust region method for local minimum finding based on variance reduction, which achieves better oracle complexities than some of the previous work. This is a borderline paper and has been carefully discussed. The main concern of the reviewers is that this paper falls short of proper experiment evaluation to support their theoretical analysis. In detail, the authors proved a globally sublinear convergence rate to a local minimum, yet the experiments demonstrate a linear or even quadratic convergence starting from the initialization. There is a big gap between the theoretical analysis and experiments, which is probably due to the experimental design. In addition, the authors did not submit a revision during the author response (while it is optional), so it is unclear whether a major revision is required to address all the reviewers comments. In fact, one reviewer thinks that a major revision is needed. I agree with the reviewers evaluation and encourage the authors to improve this paper before future submission."""
2042,"""Learning Latent State Spaces for Planning through Reward Prediction""","['Deep Reinforcement Learning', 'Representation Learning', 'Model Based Reinforcement Learning']","""Model-based reinforcement learning methods typically learn models for high-dimensional state spaces by aiming to reconstruct and predict the original observations. However, drawing inspiration from model-free reinforcement learning, we propose learning a latent dynamics model directly from rewards. In this work, we introduce a model-based planning framework which learns a latent reward prediction model and then plan in the latent state-space. The latent representation is learned exclusively from multi-step reward prediction which we show to be the only necessary information for successful planning. With this framework, we are able to benefit from the concise model-free representation, while still enjoying the data-efficiency of model-based algorithms. We demonstrate our framework in multi-pendulum and multi-cheetah environments where several pendulums or cheetahs are shown to the agent but only one of them produces rewards. In these environments, it is important for the agent to construct a concise latent representation to filter out irrelevant observations. We find that our method can successfully learn an accurate latent reward prediction model in the presence of the irrelevant information while existing model-based methods fail. Planning in the learned latent state-space shows strong performance and high sample efficiency over model-free and model-based baselines.""","""The authors propose a model-based RL algorithm, consisting of learning a deterministic multi-step reward prediction model and a vanilla CEM-based MPC actor. In contrast to prior work, the model does not attempt to learn from observations nor is a value function learned. The approach is tested on task from the mujoco control suit. The paper is below acceptance threshold. It is a variation on previous work form Hafner et al. Furthermore, I think the approach is fundamentally limited: All the learning derives from the immediate, dense reward signal, whereas the main challenges in RL are found in sparse reward settings that require planning over long horizons, where value functions or similar methods to assign credit over long time windows are absolutely essential."""
2043,"""NeurQuRI: Neural Question Requirement Inspector for Answerability Prediction in Machine Reading Comprehension""","['Question Answering', 'Machine Reading Comprehension', 'Answerability Prediction', 'Neural Checklist']","""Real-world question answering systems often retrieve potentially relevant documents to a given question through a keyword search, followed by a machine reading comprehension (MRC) step to find the exact answer from them. In this process, it is essential to properly determine whether an answer to the question exists in a given document. This task often becomes complicated when the question involves multiple different conditions or requirements which are to be met in the answer. For example, in a question ""What was the projection of sea level increases in the fourth assessment report?"", the answer should properly satisfy several conditions, such as ""increases"" (but not decreases) and ""fourth"" (but not third). To address this, we propose a neural question requirement inspection model called NeurQuRI that extracts a list of conditions from the question, each of which should be satisfied by the candidate answer generated by an MRC model. To check whether each condition is met, we propose a novel, attention-based loss function. We evaluate our approach on SQuAD 2.0 dataset by integrating the proposed module with various MRC models, demonstrating the consistent performance improvements across a wide range of state-of-the-art methods.""","""This paper extracts a list of conditions from the question, each of which should be satisfied by the candidate answer generated by an MRC model. All reviewers agree that this approach is interesting (verification and validation) and experiments are solid. One of the reviewers raised concerns are promptly answered by authors, raising the average score to accept. """
2044,"""Sensible adversarial learning""","['adversarial learning', 'deep neural networks', 'trade-off', 'margins', 'sensible reversion', 'sensible robustness']","""The trade-off between robustness and standard accuracy has been consistently reported in the machine learning literature. Although the problem has been widely studied to understand and explain this trade-off, no studies have shown the possibility of a no trade-off solution. In this paper, motivated by the fact that the high dimensional distribution is poorly represented by limited data samples, we introduce sensible adversarial learning and demonstrate the synergistic effect between pursuits of natural accuracy and robustness. Specifically, we define a sensible adversary which is useful for learning a defense model and keeping a high natural accuracy simultaneously. We theoretically establish that the Bayes rule is the most robust multi-class classifier with the 0-1 loss under sensible adversarial learning. We propose a novel and efficient algorithm that trains a robust model with sensible adversarial examples, without a significant drop in natural accuracy. Our model on CIFAR10 yields state-of-the-art results against various attacks with perturbations restricted to l with = 8/255, e.g., the robust accuracy 65.17% against PGD attacks as well as the natural accuracy 91.51%. ""","""Thanks for your detailed feedback to the reviewers, which clarified us a lot in many respects. However, there is still room for improvement; for example, convergence to a good solution needs to be further investigated. Given the high competition at ICLR2020, this paper is unfortunately below the bar. We hope that the reviewers' comments are useful for improving the paper for potential future publication."""
2045,"""RNA Secondary Structure Prediction By Learning Unrolled Algorithms""","['RNA secondary structure prediction', 'learning algorithm', 'deep architecture design', 'computational biology']","""In this paper, we propose an end-to-end deep learning model, called E2Efold, for RNA secondary structure prediction which can effectively take into account the inherent constraints in the problem. The key idea of E2Efold is to directly predict the RNA base-pairing matrix, and use an unrolled algorithm for constrained programming as the template for deep architectures to enforce constraints. With comprehensive experiments on benchmark datasets, we demonstrate the superior performance of E2Efold: it predicts significantly better structures compared to previous SOTA (especially for pseudoknotted structures), while being as efficient as the fastest algorithms in terms of inference time.""","""This paper proposes a RNA structure prediction algorithm based on an unrolled inference algorithm. The proposed approach overcomes limitations of previous methods, such as dynamic programming (which does not work for molecular configurations that do not factorize), or energy-based models (which require a minimization step, e.g. by using MCMC to traverse the energy landscape and find minima). Reviewers agreed that the method presented here is novel on this application domain, has excellent empirical evaluation setup with strong numerical results, and has the potential to be of interest to the wider deep learning community. The AC shares these views and recommends an enthusiastic acceptance. """
2046,"""AMCTS: SEARCH WITH THEORETICAL GUARANTEE USING POLICY AND VALUE FUNCTIONS""","['tree search', 'reinforcement learning', 'value neural network', 'policy neural network']","""Combined with policy and value neural networks, Monte Carlos Tree Search (MCTS) is a critical component of the recent success of AI agents in learning to play board games like Chess and Go (Silver et al., 2017). However, the theoretical foundations of MCTS with policy and value networks remains open. Inspired by MCTS, we propose AMCTS, a novel search algorithm that uses both the policy and value predictors to guide search and enjoys theoretical guarantees. Specifically, assuming that value and policy networks give reasonably accurate signals of the values of each state and action, the sample complexity (number of calls to the value network) to estimate the value of the current state, as well as the optimal one-step action to take from the current state, can be bounded. We apply our theoretical framework to different models for the noise distribution of the policy and value network as well as the distribution of rewards, and show that for these general models, the sample complexity is polynomial in D, where D is the depth of the search tree. Empirically, our method outperforms MCTS in these models.""","""This paper proposed an extension of the Monte Carlos Tree Search to find the optimal policy. The method combines A* and MCTS algorithms to prioritize the state to be explored. Compare with traditional MCTS based on UCT, A* MCTS seem to perform better. One concern of the reviewers is the paper's presentation, which is hard to follow. The second concern is the strong restriction of assumption, which make the setting too simple and unrealistic. The rebuttal did not fully address these problems. This paper needs further polish to meet the standard of ICLR. """
2047,"""Hidden incentives for self-induced distributional shift""","['distributional shift', 'safety', 'incentives', 'specification', 'content recommendation', 'reinforcement learning', 'online learning', 'ethics']","""Decisions made by machine learning systems have increasing influence on the world. Yet it is common for machine learning algorithms to assume that no such influence exists. An example is the use of the i.i.d. assumption in online learning for applications such as content recommendation, where the (choice of) content displayed can change users' perceptions and preferences, or even drive them away, causing a shift in the distribution of users. Generally speaking, it is possible for an algorithm to change the distribution of its own inputs. We introduce the term self-induced distributional shift (SIDS) to describe this phenomenon. A large body of work in reinforcement learning and causal machine learning aims to deal with distributional shift caused by deploying learning systems previously trained offline. Our goal is similar, but distinct: we point out that changes to the learning algorithm, such as the introduction of meta-learning, can reveal hidden incentives for distributional shift (HIDS), and aim to diagnose and prevent problems associated with hidden incentives. We design a simple environment as a ""unit test"" for HIDS, as well as a content recommendation environment which allows us to disentangle different types of SIDS. We demonstrate the potential for HIDS to cause unexpected or undesirable behavior in these environments, and propose and test a mitigation strategy.""","""The paper shows how meta-learning contains hidden incentives for distributional shift and how a technique called context swapping can help deal with this. Overall, distributional shift is an important problem, but the contributions made by this paper to deal with this, such as the introduction of unit-tests and context-swapping, is not sufficiently clear. Therefore, my recommendation is a reject."""
2048,"""Imagine That! Leveraging Emergent Affordances for Tool Synthesis in Reaching Tasks""","['Affordance Learning', 'Imagination', 'Generative Models', 'Activation Maximisation']","""In this paper we investigate an artificial agent's ability to perform task-focused tool synthesis via imagination. Our motivation is to explore the richness of information captured by the latent space of an object-centric generative model - and how to exploit it. In particular, our approach employs activation maximisation of a task-based performance predictor to optimise the latent variable of a structured latent-space model in order to generate tool geometries appropriate for the task at hand. We evaluate our model using a novel dataset of synthetic reaching tasks inspired by the cognitive sciences and behavioural ecology. In doing so we examine the model's ability to imagine tools for increasingly complex scenario types, beyond those seen during training. Our experiments demonstrate that the synthesis process modifies emergent, task-relevant object affordances in a targeted and deliberate way: the agents often specifically modify aspects of the tools which relate to meaningful (yet implicitly learned) concepts such as a tool's length, width and configuration. Our results therefore suggest, that task relevant object affordances are implicitly encoded as directions in a structured latent space shaped by experience. ""","""This paper investigates the task of learning to synthesize tools for specific tasks (in this case, a simulated reaching task). The paper was reviewed by 3 experts and received Reject, Weak Reject, and Weak Reject opinions. The reviews are very encouraging of the topic and general approach taken by the paper -- e.g. R3 commenting on the ""coolness"" of the problem and R1 calling it an ""important problem from a cognitive perspective"" -- but also identify a number of concerns about baselines, novelty of proposed techniques, underwhelming performance on the task, whether experiments support the conclusions, and some missing or unclear technical details. Overall, the feeling of the reviewers is that they're ""not sure what I am supposed to get out of the paper"" (R3). The authors posted responses that addressed some of these issues, in particular clarifying their terminology and contribution, and clearing up some of the technical details. However, in post-rebuttal discussions, the reviewers still have concerns with the claims of the papers. In light of these reviews, we are not able to recommend acceptance at this time, but I agree with reviewers that this is a ""cool"" task and that authors should revise and submit to another venue."""
2049,"""Decaying momentum helps neural network training""","['sgd', 'momentum', 'adam', 'optimization', 'deep learning']","""Momentum is a simple and popular technique in deep learning for gradient-based optimizers. We propose a decaying momentum (Demon) rule, motivated by decaying the total contribution of a gradient to all future updates. Applying Demon to Adam leads to significantly improved training, notably competitive with momentum SGD with learning rate decay, even in settings in which adaptive methods are typically non-competitive. Similarly, applying Demon to momentum SGD rivals momentum SGD with learning rate decay, and in many cases leads to improved performance. Demon is trivial to implement and incurs limited extra computational overhead, compared to the vanilla counterparts.""","""This paper proposes a new decaying momentum rule to improve existing optimization algorithms for training deep neural networks, including momentum SGD and Adam. The main objections from the reviewers include: (1) its novelty is limited compared with prior work; (2) the experimental comparison needs to be improved (e.g., the baselines might not be carefully tuned, and learning rate decay is not applied, while it usually boosts the performance of all the algorithms a lot). After reviewer discussion, I agree with the reviewers' evaluation and recommend reject."""
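A minimal sketch of a decaying-momentum schedule of this kind applied to momentum SGD follows. The decay rule used here, beta_t = beta0 * (1 - t/T) / ((1 - beta0) + beta0 * (1 - t/T)), is one published form of Demon; readers should check the paper for the exact variant and for the Adam version.

```python
# Demon-style momentum decay on a toy quadratic objective.
import numpy as np

def demon_beta(t, T, beta0=0.9):
    frac = 1.0 - t / T
    return beta0 * frac / ((1.0 - beta0) + beta0 * frac)

def sgd_momentum_demon(grad_fn, w, T, lr=0.1, beta0=0.9):
    v = np.zeros_like(w)
    for t in range(T):
        beta = demon_beta(t, T, beta0)  # momentum decays towards 0 over training
        v = beta * v + grad_fn(w)
        w = w - lr * v
    return w

# toy example: minimise 0.5 * ||w||^2, whose gradient is w itself
w_final = sgd_momentum_demon(lambda w: w, np.ones(5), T=100)
print(w_final)
```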
2050,"""CM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning""",['multi-agent reinforcement learning'],"""A variety of cooperative multi-agent control problems require agents to achieve individual goals while contributing to collective success. This multi-goal multi-agent setting poses difficulties for recent algorithms, which primarily target settings with a single global reward, due to two new challenges: efficient exploration for learning both individual goal attainment and cooperation for others' success, and credit-assignment for interactions between actions and goals of different agents. To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment. We use a function augmentation scheme to bridge value and policy functions across the curriculum. The complete architecture, called CM3, learns significantly faster than direct adaptations of existing algorithms on three challenging multi-goal multi-agent problems: cooperative navigation in difficult formations, negotiating multi-vehicle lane changes in the SUMO traffic simulator, and strategic cooperation in a Checkers environment.""","""This paper was generally well received by reviewers and was rated as a weak accept by all. The AC recommends acceptance. """
2051,"""Expected Information Maximization: Using the I-Projection for Mixture Density Estimation""","['density estimation', 'information projection', 'mixture models', 'generative learning', 'multimodal modeling']","""Modelling highly multi-modal data is a challenging problem in machine learning. Most algorithms are based on maximizing the likelihood, which corresponds to the M(oment)-projection of the data distribution to the model distribution. The M-projection forces the model to average over modes it cannot represent. In contrast, the I(nformation)-projection ignores such modes in the data and concentrates on the modes the model can represent. Such behavior is appealing whenever we deal with highly multi-modal data where modelling single modes correctly is more important than covering all the modes. Despite this advantage, the I-projection is rarely used in practice due to the lack of algorithms that can efficiently optimize it based on data. In this work, we present a new algorithm called Expected Information Maximization (EIM) for computing the I-projection solely based on samples for general latent variable models, where we focus on Gaussian mixture models and Gaussian mixtures of experts. Our approach applies a variational upper bound to the I-projection objective which decomposes the original objective into single objectives for each mixture component as well as for the coefficients, allowing an efficient optimization. Similar to GANs, our approach employs discriminators but uses a more stable optimization procedure, using a tight upper bound. We show that our algorithm is much more effective in computing the I-projection than recent GAN approaches and we illustrate the effectiveness of our approach for modelling multi-modal behavior on two pedestrian and traffic prediction datasets.""","""The paper proposes a new algorithm called Expected Information Maximization (EIM) for learning latent variable models while computing the I-projection solely based on samples. The reviewers had several questions, which the authors sufficiently answered. The reviewers agree that the paper should be accepted. The authors should carefully read the reviewer questions and comments and use them to improve their final manuscript."""
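The M-projection/I-projection contrast at the heart of this abstract can be made concrete with a didactic sketch: fit a single Gaussian q to a bimodal target p under each KL direction. This is not the EIM algorithm itself (which works from samples via a variational upper bound and discriminators); it only illustrates the mode-covering versus mode-seeking behavior, using grid quadrature for the KLs.

```python
# Contrast M-projection (min KL(p||q), mode-covering) with I-projection
# (min KL(q||p), mode-seeking) for a single Gaussian fit to a bimodal target.
import torch

x = torch.linspace(-8, 8, 2000)
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    return torch.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * (2 * torch.pi) ** 0.5)

p = 0.5 * gauss(x, -3.0, 0.7) + 0.5 * gauss(x, 3.0, 0.7)  # bimodal target density

def fit(direction):
    mu = torch.tensor(0.5, requires_grad=True)          # slightly off-center init
    log_sigma = torch.tensor(0.0, requires_grad=True)
    opt = torch.optim.Adam([mu, log_sigma], lr=5e-2)
    for _ in range(2000):
        opt.zero_grad()
        q = gauss(x, mu, log_sigma.exp()) + 1e-12
        if direction == "M":   # KL(p || q): must place mass on both modes
            kl = (p * (torch.log(p + 1e-12) - torch.log(q))).sum() * dx
        else:                  # KL(q || p): free to ignore one mode
            kl = (q * (torch.log(q) - torch.log(p + 1e-12))).sum() * dx
        kl.backward()
        opt.step()
    return mu.item(), log_sigma.exp().item()

print("M-projection (mu, sigma):", fit("M"))  # wide Gaussian straddling both modes
print("I-projection (mu, sigma):", fit("I"))  # narrow Gaussian on a single mode
```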
2052,"""Attention Privileged Reinforcement Learning for Domain Transfer""","['sim-to-real', 'domain randomisation', 'attention', 'transfer learning', 'reinforcement learning']","""Applying reinforcement learning (RL) to physical systems presents notable challenges, given requirements regarding sample efficiency, safety, and physical constraints compared to simulated environments. To enable transfer of policies trained in simulation, randomising simulation parameters leads to more robust policies, but also results in significantly extended training time. In this paper, we exploit access to privileged information (such as environment states) often available in simulation, in order to improve and accelerate learning over randomised environments. We introduce Attention Privileged Reinforcement Learning (APRiL), which equips the agent with an attention mechanism and makes use of state information in simulation, learning to align attention between state- and image-based policies while additionally sharing generated data. During deployment we can apply the image-based policy to remove the requirement of access to additional information. We experimentally demonstrate accelerated and more robust learning on a number of diverse domains, leading to improved final performance for environments both within and outside the training distribution.""","""This paper tackles the problem of transferring an RL policy learned in simulation to the real world (sim2real). More specifically, the authors address the situation where the agent can access privileged information available during simulation, for example access to exact states instead of compressed representations. They perform experiments in various simulated domains where different aspects of the environment are modified to evaluate generalization. Major concerns remain following the rebuttal. First, it is not clear how realistic it is to assume access to such privileged information in practice. Second, the experiments are not convincing since the algorithms do not appear to have reached convergence in the presented results. Finally, a sim2real work would highly benefit from real-world experiments. In light of the above issues, I recommend to reject this paper."""
2053,"""Measuring Compositional Generalization: A Comprehensive Method on Realistic Data""","['compositionality', 'generalization', 'natural language understanding', 'benchmark', 'compositional generalization', 'compositional modeling', 'semantic parsing', 'generalization measurement']","""State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings. ""","""Main content: Blind review #1 summarizes it well: This paper first introduces a method for quantifying to what extent a dataset split exhibits compound (or, alternatively, atom) divergence, where in particular atoms refer to basic structures used by examples in the datasets, and compounds result from compositional rule application to these atoms. The paper then proposes to evaluate learners on datasets with maximal compound divergence (but minimal atom divergence) between the train and test portions, as a way of testing whether a model exhibits compositional generalization, and suggests a greedy algorithm for forming datasets with this property. In particular, the authors introduce a large automatically generated semantic parsing dataset, which allows for the construction of datasets with these train/test split divergence properties. Finally, the authors evaluate three sequence-to-sequence style semantic parsers on the constructed datasets, and they find that they all generalize very poorly on datasets with maximal compound divergence, and that furthermore the compound divergence appears to be anticorrelated with accuracy. -- Discussion: Blind review #1 is the most knowledgeable in this area and wrote ""This is an interesting and ambitious paper tackling an important problem. It is worth noting that the claim that it is the compound divergence that controls the difficulty of generalization (rather than something else, like length) is a substantive one, and the authors do provide evidence of this."" -- Recommendation and justification: This paper deserves to be accepted because it tackles an important problem that is overlooked in current work that is evaluated on datasets of questionable meaningfulness. It adds insight by focusing on the qualities of datasets that enable testing how well learning algorithms do on compositional generalization, which is crucial to intelligence."""
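The divergence at the center of this benchmark-construction method can be illustrated with a small sketch: compare the normalised frequency distributions of atoms (or compounds) in the train and test sets via 1 minus a Chernoff coefficient, 1 - sum_x p(x)^alpha q(x)^(1-alpha). The alpha values below (0.5 for atoms, 0.1 for compounds) follow our reading of the paper and should be double-checked against it; the toy strings are purely illustrative.

```python
# Chernoff-style divergence between train/test frequency distributions.
from collections import Counter

def divergence(train_items, test_items, alpha):
    p, q = Counter(train_items), Counter(test_items)
    p_tot, q_tot = sum(p.values()), sum(q.values())
    keys = set(p) | set(q)
    chernoff = sum((p[k] / p_tot) ** alpha * (q[k] / q_tot) ** (1 - alpha) for k in keys)
    return 1.0 - chernoff

train_compounds = ["who directed X", "who produced X", "who edited X"]
test_compounds = ["who directed and produced X", "who edited and directed X"]
print(divergence(train_compounds, test_compounds, alpha=0.1))  # compound divergence (high here)
print(divergence(train_compounds, test_compounds, alpha=0.5))  # atom-style divergence
```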
2054,"""Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models""","['natural language processing', 'interpretability']","""The impressive performance of neural networks on natural language processing tasks is attributable to their ability to model complicated word and phrase compositions. To explain how the model handles semantic compositions, we study hierarchical explanation of neural network predictions. We identify non-additivity and context independent importance attributions within hierarchies as two desirable properties for highlighting word and phrase compositions. We show that some prior efforts on hierarchical explanations, e.g. contextual decomposition, do not satisfy the desired properties mathematically, leading to inconsistent explanation quality in different models. In this paper, we start by proposing a formal and general way to quantify the importance of each word and phrase. Following the formulation, we propose the Sampling and Contextual Decomposition (SCD) algorithm and the Sampling and Occlusion (SOC) algorithm. Human and metric evaluations on both LSTM models and BERT Transformer models on multiple datasets show that our algorithms outperform prior hierarchical explanation algorithms. Our algorithms help to visualize semantic composition captured by models, extract classification rules and improve human trust of models.""","""The authors present a hierarchical explanation model for understanding the underlying representations produced by LSTMs and Transformers. Using human evaluation, they find that their explanations are better, which could lead to better trust of these opaque models. The reviewers raised some issues with the derivations, but the author response addressed most of these."""
2055,"""WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia""","['multilinguality', 'bitext mining', 'neural MT', 'Wikipedia', 'low-resource languages', 'joint sentence representation']","""We present an approach based on multilingual sentence embeddings to automatically extract parallel sentences from the content of Wikipedia articles in 85 languages, including several dialects or low-resource languages. We do not limit the extraction process to alignments with English, but systematically consider all possible language pairs. In total, we are able to extract 135M parallel sentences for 1620 different language pairs, out of which only 34M are aligned with English. This corpus of parallel sentences is freely available (URL anonymized). To get an indication of the quality of the extracted bitexts, we train neural MT baseline systems on the mined data only for 1886 language pairs, and evaluate them on the TED corpus, achieving strong BLEU scores for many language pairs. The WikiMatrix bitexts seem to be particularly interesting to train MT systems between distant languages without the need to pivot through English.""","""The authors present an approach to large scale bitext extraction from Wikipedia. This builds heavily on previous work, with the novelty being somewhat minor: efficient approximate K-nearest-neighbor search and language-agnostic parameters such as cutoffs. These techniques have not been validated on other data sets and it is unclear how well they generalise. The major contribution of the paper is the corpus created, consisting of 85 languages, 1620 language pairs and 135M parallel sentences, of which most do not include English. This corpus is very valuable and already in use in the field, but IMO ICLR is not the right venue for this kind of publication. There were four reviews, all broadly in agreement, and some discussion with the authors."""
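The mining criterion behind this kind of embedding-based bitext extraction is typically a margin score: a candidate pair's cosine similarity divided by the average similarity of each side to its k nearest cross-lingual neighbours. The sketch below uses brute-force numpy in place of the approximate k-NN search actually needed at Wikipedia scale; the exact scoring variant used in the paper may differ in details.

```python
# Ratio-margin scoring for candidate sentence pairs from two languages.
import numpy as np

def margin_scores(X, Y, k=4):
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sim = X @ Y.T
    # average similarity of each sentence to its k nearest cross-lingual neighbours
    knn_x = np.sort(sim, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    knn_y = np.sort(sim, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    return sim / (0.5 * (knn_x + knn_y))

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((100, 32)), rng.standard_normal((120, 32))  # stand-in embeddings
scores = margin_scores(X, Y)
best = np.argmax(scores, axis=1)  # candidate alignment for each source sentence
```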
2056,"""Generalization bounds for deep convolutional neural networks""","['generalization', 'convolutional networks', 'statistical learning theory']","""We prove bounds on the generalization error of convolutional networks. The bounds are in terms of the training loss, the number of parameters, the Lipschitz constant of the loss and the distance from the weights to the initial weights. They are independent of the number of pixels in the input, and the height and width of hidden feature maps. We present experiments using CIFAR-10 with varying hyperparameters of a deep convolutional network, comparing our bounds with practical generalization gaps.""","""The authors present several theorems bounding the generalization error of a class of conv nets (CNNs) with high probability by O(sqrt(W(beta + log(lambda)) + log(1/delta))/sqrt(n)), where W is the number of weights, beta is the distance from initialization in operator norm, lambda is the margin, n is the number of data, and the bound holds with prob. at least 1-delta. (They also present a bound that is tighter when the empirical risk is small.) The bounds are ""size free"" in the sense that they do not depend on the size of the *input*, which is assumed to be, say, a d x d image. While there is dependence on the number of parameters, W, there is no implicit dependence on d here. The paper received the following feedback: 1. Reviewer 3 mostly had clarifying questions, especially with respect to (essentially independent) work by Wei and Ma. Reviewer 3 also pressed the authors to discuss how the bounds compared in absolute terms to the bounds of Bartlett et al. The authors stated that they did not have explicit constants to make such a comparison. Reviewer 3 was satisfied enough to raise their score to a 6. 2. Reviewer 1 admitted they were not experts and raised some issues around novelty/simplicity. I do not think the simplicity of the paper is a drawback. The reviewer unfortunately did not participate in the rebuttal, despite repeated attempts. 3. Reviewer 2 argued for weak reject, despite an interaction with the authors. The reviewer raised the issue of bounds based on control of the Lipschitz constant. The conversation was slightly marred by a typo in the reviewer's original comment. I don't believe the authors ultimately responded to the reviewer's point. There was another discussion about simultaneous work and compression-based bounds. I would agree with the authors that they need not have cited simultaneous work, especially since the details are quite different. Ultimately, this reviewer still argued for rejection (weakly). After the rebuttal period ended, the reviewers raised some further concerns with me. I tried to assess these on my own, and ended up with my own questions. I raise these in no particular order. Each of them may have a simple resolution. In that case, the authors should take them as possible sources of confusion. Addressing them may significantly improve the readability of the paper. i. Lemma A.3. The order of quantification is poorly expressed and so I was not confident in the statement. In particular, the theorem starts \forall \eta > 0 \exists C, ... but then C is REINTRODUCED later, subsequent to existential quantification over M, B, and d, and so it seems there is dependence. If there is no dependence, this presentation is sloppy and should be fixed. ii. Lemma A.4: the same dependence of C on M, B and d holds here, and this is quite problematic for the later applications. If this constant is independent of these quantities, then the order of quantifiers has been stated incorrectly. Again, this is sloppy if it is wrong. If it's correct, then we need to know how C grows. Based on other claims by the authors, it is my understanding that, in both cases, the constant C does not depend on M, B, or d. Regardless, the authors should clarify the dependence. If C does in fact depend on these quantities, and the conclusions change, the paper should be retracted. iii. Proof of Lemma 2.3. I'd remind the reader that the parametrization maps the unit ball to G. iv. The bound depends on control of operator norms and empirical margins. It is not clear how these interact and whether, for margin parameters necessary to achieve small empirical margin risk, the bounds pick up dependence on other aspects of the learning problem (e.g., depth). I think the only way to assess this would be to investigate these quantities empirically, say, by varying the size and depth of the network on a fixed data set, trained to achieve the same empirical risk (or margin). I'll add that I was also disappointed that the authors did not attempt to address any of the issues by a revision of the actual paper. In particular, the authors promise several changes that would have been straightforward to make in the two weeks of rebuttal. Instead, the reviewers and myself are left to imagine how things would change. I see at least two promises: A. To walk back some of the empirical claims about distance from initialization that are based on somewhat flimsy empirical evaluations. I would add to this the need to investigate how the margin and operator norms depend on depth empirically. B. Attribute Dziugaite and Roy for establishing the first bounds in terms of distance from initialization, though their bounds were numerical. I think a mention of simultaneous work would also be generous, even if not strictly necessary."""
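For readers who want a feel for the shape of the quoted bound, here is a minimal numeric sketch. The leading constant is unknown (the discussion above notes that explicit constants were not provided), so it is set to 1 purely for illustration; all parameter values are assumptions.

```python
# Numeric reading of the bound shape:
#   gen_gap <= C * sqrt(W * (beta + log(lambda)) + log(1/delta)) / sqrt(n)
import math

def bound(W, beta, lam, delta, n, C=1.0):
    return C * math.sqrt(W * (beta + math.log(lam)) + math.log(1.0 / delta)) / math.sqrt(n)

# e.g. a million parameters, unit distance from init, margin 2, 50k examples
print(bound(W=1e6, beta=1.0, lam=2.0, delta=0.05, n=5e4))
```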
2057,"""Information Geometry of Orthogonal Initializations and Training""","['Fisher', 'mean-field', 'deep learning']","""Recently mean field theory has been successfully used to analyze properties of wide, random neural networks. It gave rise to a prescriptive theory for initializing feed-forward neural networks with orthogonal weights, which ensures that both the forward propagated activations and the backpropagated gradients are near \(\ell_2\) isometries and as a consequence training is orders of magnitude faster. Despite strong empirical performance, the mechanisms by which critical initializations confer an advantage in the optimization of deep neural networks are poorly understood. Here we show a novel connection between the maximum curvature of the optimization landscape (gradient smoothness) as measured by the Fisher information matrix (FIM) and the spectral radius of the input-output Jacobian, which partially explains why more isometric networks can train much faster. Furthermore, given that orthogonal weights are necessary to ensure that gradient norms are approximately preserved at initialization, we experimentally investigate the benefits of maintaining orthogonality throughout training, and we conclude that manifold optimization of weights performs well regardless of the smoothness of the gradients. Moreover, we observe a surprising yet robust behavior of highly isometric initializations --- even though such networks have a lower FIM condition number \emph{at initialization}, and therefore by analogy to convex functions should be easier to optimize, experimentally they prove to be much harder to train with stochastic gradient descent. We conjecture the FIM condition number plays a non-trivial role in the optimization.""","""I've gone over this paper carefully and think it's above the bar for ICLR. The paper proves a relationship between the eigenvalues of the Fisher information matrix and the singular values of the network Jacobian. The main step is bounding the eigenvalues of the full Fisher matrix in terms of the eigenvalues and singular values of individual blocks using Gersgorin disks. The analysis seems correct and (to the best of my knowledge) novel, and relationships between the Jacobian and FIM are interesting insofar as they give different ways of looking at linearized approximations. The Gersgorin disk analysis seems like it may give loose bounds, but the analysis still matches up well with the experiments. The paper is not quite as strong when it comes to relating the analysis to optimization. The maximum eigenvalue of the FIM by itself doesn't tell us much about the difficulty of optimization. E.g., if the top FIM eigenvalue is increased, but the distance the weights need to travel is proportionately decreased (as seems plausible when the Jacobian scale is changed), then one could make just as fast progress with a smaller learning rate. So in this light, it's not too surprising that the analysis fails to capture the optimization dynamics once the learning rates are tuned. But despite this limitation, the contribution still seems worthwhile. The writing can still be improved. The claim about stability of the linearization explaining the training dynamics appears fairly speculative, and not closely related to the analysis and experiments. I recommend removing it, or at least removing it from the abstract."""
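The near-isometry property that motivates this work is easy to see numerically in deep linear networks: with orthogonal initialisation the input-output Jacobian's singular values all stay at 1, while with Gaussian initialisation the spectrum spreads with depth. The sketch below is a didactic check of that mechanism only (no nonlinearities, no FIM computation); dimensions and depth are arbitrary choices.

```python
# Jacobian spectrum of a deep linear network: orthogonal vs Gaussian init.
import numpy as np

rng = np.random.default_rng(0)
d, depth = 64, 50

def jacobian_spectrum(init):
    J = np.eye(d)
    for _ in range(depth):
        if init == "orthogonal":
            W, _ = np.linalg.qr(rng.standard_normal((d, d)))  # orthogonal layer
        else:
            W = rng.standard_normal((d, d)) / np.sqrt(d)      # variance-scaled Gaussian
        J = W @ J
    return np.linalg.svd(J, compute_uv=False)

for init in ("orthogonal", "gaussian"):
    s = jacobian_spectrum(init)
    print(init, "max sv:", float(s.max()), "min sv:", float(s.min()))
# orthogonal: all singular values exactly 1 (an l2 isometry);
# gaussian: the spectrum spreads dramatically with depth.
```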
2058,"""Promoting Coordination through Policy Regularization in Multi-Agent Deep Reinforcement Learning""","['Reinforcement Learning', 'Multi-Agent', 'Continuous Control', 'Regularization', 'Coordination', 'Inductive biases']","""A central challenge in multi-agent reinforcement learning is the induction of coordination between agents of a team. In this work, we investigate how to promote inter-agent coordination using policy regularization and discuss two possible avenues respectively based on inter-agent modelling and synchronized sub-policy selection. We test each approach in four challenging continuous control tasks with sparse rewards and compare them against three baselines including MADDPG, a state-of-the-art multi-agent reinforcement learning algorithm. To ensure a fair comparison, we rely on a thorough hyper-parameter selection and training methodology that allows a fixed hyper-parameter search budget for each algorithm and environment. We consequently assess the hyper-parameter sensitivity, sample-efficiency, and asymptotic performance of each learning method. Our experiments show that the proposed methods lead to significant improvements on cooperative problems. We further analyse the effects of the proposed regularizations on the behaviors learned by the agents.""","""After reading the reviews and discussing this paper with the reviewers, I believe that this paper is not quite ready for publication at this time. While there was some enthusiasm from the reviewers about the paper, there were also major concerns raised about the comparisons and experimental evaluation, as well as some concerns about novelty. The major concerns about experimental evaluation center around the experiments being restricted to continuous action settings where there is a limited set of baselines (see R3). While I see the authors' point that the method is not restricted to this setting, showing more experiments with more baselines would be important: the demonstrated experiments do strike me as somewhat simplistic, and the standardized comparisons are limited. This might not by itself be that large of an issue, if it weren't for the other problem: the contribution strikes me as somewhat ad-hoc. While I can see the intuition behind why these two auxiliary objectives might work well, since there is only intuition, the burden in terms of showing that this is a good idea falls entirely on the experiments. And this is where in my opinion the work comes up short: if we are going to judge the efficacy of the method entirely on the experimental evaluation without any theoretical motivation, then the experimental evaluation does not seem to me to be sufficient. This issue could be addressed either with more extensive and complete experiments and comparisons, or a more convincing conceptual or theoretical argument explaining why we should expect these two particular auxiliary objectives to make a big difference."""
2059,"""Pre-training as Batch Meta Reinforcement Learning with tiMe""","['Reinforcement Learning', 'Deep Reinforcement Learning', 'Meta Reinforcement Learning', 'Batch Reinforcement Learning', 'Transfer Learning']","""Pre-training is transformative in supervised learning: a large network trained with large and existing datasets can be used as an initialization when learning a new task. Such initialization speeds up convergence and leads to higher performance. In this paper, we seek to understand how pre-training from only existing, observational data can be formalized in Reinforcement Learning (RL), and whether it is possible. We formulate the setting as Batch Meta Reinforcement Learning. We identify MDP mis-identification to be a central challenge and motivate it with theoretical analysis. Combining ideas from Batch RL and Meta RL, we propose tiMe, which learns distillation of multiple value functions and MDP embeddings from only existing data. In challenging control tasks and without fine-tuning on unseen MDPs, tiMe is competitive with a state-of-the-art model-free RL method trained with hundreds of thousands of environment interactions.""","""The reviewers reached a unanimous consensus that the paper could not be accepted for publication in its current form. There were a number of concerns raised regarding (1) the clarity of the writing; (2) the comparisons, especially to prior work; (3) the details of the experimental setup."""
2060,"""TPO: TREE SEARCH POLICY OPTIMIZATION FOR CONTINUOUS ACTION SPACES""","['monte-carlo tree search', 'reinforcement learning', 'tree search', 'policy optimization']","""Monte Carlo Tree Search (MCTS) has achieved impressive results on a range of discrete environments, such as Go, Mario and Arcade games, but it has not yet fulfilled its true potential in continuous domains. In this work, we introduce TPO, a tree search based policy optimization method for continuous environments. TPO takes a hybrid approach to policy optimization. Building the MCTS tree in a continuous action space and updating the policy gradient using off-policy MCTS trajectories are non-trivial. To overcome these challenges, we propose limiting the tree search branching factor by drawing only a few action samples from the policy distribution and define a new loss function based on the trajectories' means and standard deviations. Our approach led to some non-intuitive findings. MCTS training generally requires a large number of samples and simulations. However, we observed that bootstrapping tree search with a pre-trained policy allows us to achieve high quality results with a low MCTS branching factor and a small number of simulations. Without the proposed policy bootstrapping, continuous MCTS would require a much larger branching factor and simulation count, rendering it prohibitively expensive. In our experiments, we use PPO as our baseline policy optimization algorithm. TPO significantly improves the policy on nearly all of our benchmarks. For example, in complex environments such as Humanoid, we achieve a 2.5x improvement over the baseline algorithm.""","""The paper proposes a tree search based policy optimization method for continuous action spaces. The paper does not have a theoretical guarantee, but has empirical results. Reviewers brought up issues such as the lack of comparison with other policy optimization methods (SAC, RERPI, etc.), sample inefficiency, and unclear differences from some other similar papers. Even though the authors have provided a rebuttal to address these issues, all the reviewers remain negative. So I can only recommend rejection at this stage."""
2061,"""On Iterative Neural Network Pruning, Reinitialization, and the Similarity of Masks""","['Pruning', 'Lottery Tickets', 'Science of Deep Learning', 'Experimental Deep Learning', 'Empirical Study']","""We examine how recently documented, fundamental phenomena in deep learning models subject to pruning are affected by changes in the pruning procedure. Specifically, we analyze differences in the connectivity structure and learning dynamics of pruned models found through a set of common iterative pruning techniques, to address questions of uniqueness of trainable, high-sparsity sub-networks, and their dependence on the chosen pruning method. In convolutional layers, we document the emergence of structure induced by magnitude-based unstructured pruning in conjunction with weight rewinding that resembles the effects of structured pruning. We also show empirical evidence that weight stability can be automatically achieved through apposite pruning techniques.""","""This is an observational work with experiments for comparing iterative pruning methods. I agree with the main concerns of all reviewers: (a) Experimental setups are too small-scale or rely on easy datasets, so it is hard to believe the findings would generalize to other settings, e.g., large-scale residual networks. This aspect is very important as this is an observational paper. (b) The main take-home contribution/message is weak considering the high standard of ICLR. Hence, I recommend rejection. I would encourage the authors to consider the above concerns as it could yield a valuable contribution."""
2062,"""The Role of Embedding Complexity in Domain-invariant Representations""","['domain adaptation', 'domain-invariant representations', 'model complexity', 'theory', 'deep learning']","""Unsupervised domain adaptation aims to generalize the hypothesis trained in a source domain to an unlabeled target domain. One popular approach to this problem is to learn domain-invariant embeddings for both domains. In this work, we study, theoretically and empirically, the effect of the embedding complexity on generalization to the target domain. In particular, this complexity affects an upper bound on the target risk; this is reflected in experiments, too. Next, we specify our theoretical framework to multilayer neural networks. As a result, we develop a strategy that mitigates sensitivity to the embedding complexity, and empirically achieves performance on par with or better than the best layer-dependent complexity tradeoff.""","""This paper studies the impact of embedding complexity on domain-invariant representations by incorporating embedding complexity into the previous upper bound explicitly. The idea of embedding complexity is interesting, the exploration has some useful insight, and the paper is well-written. However, Reviewers and AC generally agree that the current version can be significantly improved in several ways: - The proposed upper bound has several limitations, such as being looser than existing ones. - The embedding complexity is only addressed implicitly, which shares a similar idea with previous works. - The claim of implicit regularization has not been explored in depth. - The proposed MDM method seems to be incremental and closely related to the embedding complexity. - There is no analysis of generalization when estimating this upper bound from finite samples. There are important details requiring further elaboration. So I recommend rejection."""
2063,"""Keep Doing What Worked: Behavior Modelling Priors for Offline Reinforcement Learning""","['Reinforcement Learning', 'Off-policy', 'Multitask', 'Continuous Control']","""Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set (batch) of environment interactions is available and no new experience can be acquired. This property makes these algorithms appealing for real world problems such as robot control. In practice, however, standard off-policy algorithms fail in the batch setting for continuous control. In this paper, we propose a simple solution to this problem. It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task. Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data-sources. We find improvements on competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.""","""The authors present a novel stable RL algorithm for the batch off-policy setting, through the use of a learned prior. Initially, reviewers had significant concerns about (1) reproducibility, (2) technical details, including the non-negativity of the Lagrange multiplier, (3) a lack of separation between performance contributions of ABM and MPO, (4) baseline comparisons. The authors satisfactorily clarified points (1)-(3) and the simulated baseline comparisons for (4) seem reasonable in light of how long the real robot experiments took, as reported by the authors. Furthermore, the reviewers all agree on the contribution of the core ideas. Thus, I recommend this paper for acceptance."""
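The advantage-weighted behavior model described in the abstract can be sketched as weighted maximum likelihood on the batch: fit a behavior policy while up-weighting dataset actions whose estimated advantage is non-negative. The indicator weighting below is one variant discussed in the paper; the advantage estimator, the Gaussian-policy simplification, and all sizes here are our own stand-ins.

```python
# One gradient step of fitting an ABM-style prior by weighted max likelihood.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
prior = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))
opt = torch.optim.Adam(prior.parameters(), lr=1e-3)

def abm_step(states, actions, advantages):
    opt.zero_grad()
    pred = prior(states)
    weights = (advantages >= 0).float()     # "keep doing what worked"
    nll = ((pred - actions) ** 2).sum(-1)   # Gaussian NLL up to constants (fixed variance)
    loss = (weights * nll).mean()
    loss.backward()
    opt.step()
    return loss.item()

batch = (torch.randn(256, state_dim), torch.randn(256, action_dim), torch.randn(256))
print(abm_step(*batch))
```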
2064,"""Dynamic Instance Hardness""","['training dynamics', 'instance hardness', 'curriculum learning', 'neural nets memorization']","""We introduce dynamic instance hardness (DIH) to facilitate the training of machine learning models. DIH is a property of each training sample and is computed as the running mean of the sample's instantaneous hardness as measured over the training history. We use DIH to evaluate how well a model retains knowledge about each training sample over time. We find that for deep neural nets (DNNs), the DIH of a sample in relatively early training stages reflects its DIH in later stages and, as a result, DIH can be effectively used to reduce the set of training samples in future epochs. Specifically, during each epoch, only samples with high DIH are trained (since they are historically hard) while samples with low DIH can be safely ignored. DIH is updated each epoch only for the selected samples, so it does not require additional computation. Hence, using DIH during training leads to an appreciable speedup. Also, since the model is focused on the historically more challenging samples, resultant models are more accurate. The above, when formulated as an algorithm, can be seen as a form of curriculum learning, so we call our framework DIH curriculum learning (or DIHCL). The advantages of DIHCL, compared to other curriculum learning approaches, are: (1) DIHCL does not require additional inference steps over the data not selected by DIHCL in each epoch, (2) the dynamic instance hardness, compared to static instance hardness (e.g., instantaneous loss), is more stable as it integrates information over the entire training history up to the present time. Making certain mathematical assumptions, we formulate the problem of DIHCL as finding a curriculum that maximizes a multi-set function pseudo-formula, and derive an approximation bound for a DIH-produced curriculum relative to the optimal curriculum. Empirically, DIHCL-trained DNNs significantly outperform random mini-batch SGD and other recently developed curriculum learning methods in terms of efficiency, early-stage convergence, and final performance, and this is shown in training several state-of-the-art DNNs on 11 modern datasets.""","""All three reviewers, even after the rebuttal, agreed that the paper did not meet the bar for acceptance. A common complaint was that a lack of clarity was a major problem. Unfortunately, the paper cannot be accepted in its current form. The authors are encouraged to improve the presentation of their approach and resubmit to a new venue."""
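The selection mechanism the abstract describes reduces to a few lines: keep a running mean of per-sample hardness, train each epoch only on the samples with the highest DIH, and update the running mean only for those samples. The sketch below uses a random stand-in for the training step, and the selection fraction and discount are illustrative choices.

```python
# DIH-style curriculum selection with a running-mean hardness estimate.
import numpy as np

n_samples, select_frac, gamma = 1000, 0.5, 0.9
dih = np.zeros(n_samples)  # running-mean hardness per training sample

def epoch(losses_fn):
    global dih
    k = int(select_frac * n_samples)
    selected = np.argsort(-dih)[:k]       # historically hardest samples
    losses = losses_fn(selected)          # train only on these; collect their losses
    # running-mean update for selected samples only (no extra inference pass)
    dih[selected] = gamma * dih[selected] + (1 - gamma) * losses
    return selected

rng = np.random.default_rng(0)
for _ in range(5):                        # toy stand-in returning per-sample losses
    epoch(lambda idx: rng.random(len(idx)))
```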
2065,"""Towards Understanding the Regularization of Adversarial Robustness on Neural Networks""","['Adversarial robustness', 'Statistical Learning', 'Regularization']","""The problem of adversarial examples has shown that modern Neural Network (NN) models could be rather fragile. Among the most promising techniques to solve the problem, one is to require the model to be {\it pseudo-formula -adversarially robust} (AR); that is, to require the model not to change predicted labels when any given input examples are perturbed within a certain range. However, it is widely observed that such methods would lead to standard performance degradation, i.e., the degradation on natural examples. In this work, we study the degradation through the regularization perspective. We identify quantities from generalization analysis of NNs; with the identified quantities we empirically find that AR is achieved by regularizing/biasing NNs towards less confident solutions by making the changes in the feature space (induced by changes in the instance space) of most layers smoother uniformly in all directions; so to a certain extent, it prevents sudden change in prediction w.r.t. perturbations. However, the end result of such smoothing concentrates samples around decision boundaries, resulting in less confident solutions, and leads to worse standard performance. Our studies suggest that one might consider building AR into NNs in a gentler way to avoid the problematic regularization.""","""The paper investigates why adversarial training can sometimes degrade model performance on clean input examples. The reviewers agreed that the paper provides valuable insights into how adversarial training affects the distribution of activations. On the other hand, the reviewers raised concerns about the experimental setup as well as the clarity of the writing and felt that the presentation could be improved. Overall, I think this paper explores a very interesting direction and such papers are valuable to the community. It's a borderline paper currently but I think it could turn into a great paper with another round of revision. I encourage the authors to revise the draft and resubmit to a different venue."""
2066,"""AutoSlim: Towards One-Shot Architecture Search for Channel Numbers""","['AutoSlim', 'Neural Architecture Search', 'Efficient Networks', 'Network Pruning']","""We study how to set the number of channels in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size). A simple and one-shot approach, named AutoSlim, is presented. Instead of training many network samples and searching with reinforcement learning, we train a single slimmable network to approximate the network accuracy of different channel configurations. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop. By this single pass, we can obtain the optimized channel configurations under different resource constraints. We present experiments with MobileNet v1, MobileNet v2, ResNet-50 and RL-searched MNasNet on ImageNet classification. We show significant improvements over their default channel configurations. We also achieve better accuracy than recent channel pruning methods and neural architecture search methods with 100X lower search cost. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs).""","""The paper presents a simple one-shot approach to searching the number of channels for deep convolutional neural networks. It trains a single slimmable network and then iteratively slims and evaluates the model to ensure a minimal accuracy drop. The method is simple and the results are promising. The main concern for this paper is the limited novelty. This work is based on slimmable networks and the iterative slimming process is new, but in some sense similar to DropPath. The rebuttal that PathNet ""has not demonstrated results on searching number of channels, and we are among the first few one-shot approaches on architectural search for number of channels"" seems weak."""
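The greedy loop at the core of this approach can be sketched in a few lines: at each step, try shrinking each layer's channel count, keep the single change with the smallest accuracy drop, and record the configuration as the budget tightens. The `evaluate` function below is a toy stand-in for querying the trained slimmable network at a given configuration; step size and round count are arbitrary.

```python
# Greedy channel slimming over a (stand-in) accuracy estimator.
import numpy as np

def evaluate(config):
    # placeholder proxy: wider layers help, with diminishing returns
    return sum(np.log1p(c) for c in config)

def greedy_slim(config, step=8, n_rounds=10):
    configs = [tuple(config)]
    for _ in range(n_rounds):
        candidates = []
        for i, c in enumerate(config):
            if c - step > 0:
                trial = list(config)
                trial[i] = c - step
                candidates.append((evaluate(trial), trial))
        if not candidates:
            break
        _, config = max(candidates)   # slim the layer with minimal accuracy drop
        configs.append(tuple(config))
    return configs                    # Pareto-style trajectory of configurations

print(greedy_slim([64, 128, 256, 512]))
```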
2067,"""A Finite-Time Analysis of Q-Learning with Neural Network Function Approximation""","['Reinforcement Learning', 'Neural Networks']","""Q-learning with neural network function approximation (neural Q-learning for short) is among the most prevalent deep reinforcement learning algorithms. Despite its empirical success, the non-asymptotic convergence rate of neural Q-learning remains virtually unknown. In this paper, we present a finite-time analysis of a neural Q-learning algorithm, where the data are generated from a Markov decision process and the action-value function is approximated by a deep ReLU neural network. We prove that neural Q-learning finds the optimal policy with pseudo-formula convergence rate if the neural function approximator is sufficiently overparameterized, where pseudo-formula is the number of iterations. To the best of our knowledge, our result is the first finite-time analysis of neural Q-learning under the non-i.i.d. data assumption.""","""This was an extremely difficult paper to decide, as it attracted significant commentary (and controversy) that led to non-trivial corrections in the results. One of the main criticisms is that the work is an incremental combination of existing results. A potentially bigger concern is that of correctness: the main convergence rate was changed from 1/T to 1/sqrt{T} during the rebuttal and revision process. Such a change is not trivial and essentially proves the initial submission was incorrect. In general, it is not prudent to accept a hastily revised theory paper without a proper assessment of correctness in its modified form. Therefore, I think it would be premature to accept this paper without a full review cycle that assessed the revised form. There also appear to be technical challenges from the discussion that remain unaddressed. Any resubmission will also have to highlight significance and make a stronger case for the novelty of the results."""
2068,"""Multi-Precision Policy Enforced Training (MuPPET): A precision-switching strategy for quantised fixed-point training of CNNs""",[],"""Large-scale convolutional neural networks (CNNs) suffer from very long training times, spanning from hours to weeks, limiting the productivity and experimentation of deep learning practitioners. As networks grow in size and complexity, one approach to reducing training time is the use of low-precision data representation and computations during the training stage. However, in doing so the final accuracy suffers due to the problem of vanishing gradients. Existing state-of-the-art methods combat this issue by means of a mixed-precision approach employing two different precision levels, FP32 (32-bit floating-point precision) and FP16/FP8 (16-/8-bit floating-point precision), leveraging the hardware support of recent GPU architectures for FP16 operations to obtain performance gains. This work pushes the boundary of quantised training by employing a multilevel optimisation approach that utilises multiple precisions including low-precision fixed-point representations. The training strategy, named MuPPET, combines the use of multiple number representation regimes together with a precision-switching mechanism that decides at run time the transition between different precisions. Overall, the proposed strategy tailors the training process to the hardware-level capabilities of the utilised hardware architecture and yields improvements in training time and energy efficiency compared to state-of-the-art approaches. Applying MuPPET to the training of AlexNet, ResNet18 and GoogLeNet on ImageNet (ILSVRC12) and targeting an NVIDIA Turing GPU, the proposed method achieves the same accuracy as the standard full-precision training with an average training-time speedup of 1.28x across the networks.""","""The submission presents an approach to speed up network training time by using lower precision representations and computation to begin with and then dynamically increasing the precision from 8 to 32 bits over the course of training. The results show that the same accuracy can be obtained while achieving a moderate speed up. The reviewers agreed that the paper did not offer a significant advantage or novelty, and that the method was somewhat ad hoc and unclear. Unfortunately, the authors' rebuttal did not clarify all of these points, and the recommendation after discussion is for rejection."""
2069,"""Optimal Unsupervised Domain Translation""","['Unsupervised Domain Translation', 'CycleGAN', 'Optimal Transport']","""Unsupervised Domain Translation~(UDT) consists in finding meaningful correspondences between two domains, without access to explicit pairings between them. Following the seminal work of \textit{CycleGAN}, many variants and extensions of this model have been applied successfully to a wide range of applications. However, these methods remain poorly understood, and lack convincing theoretical guarantees. In this work, we define UDT in a rigorous, non-ambiguous manner, explore the implicit biases present in these approaches and demonstrate their limits. Specifically, we show that mappings produced by these methods are biased towards \textit{low energy} transformations, leading us to cast UDT into an Optimal Transport~(OT) framework by making this implicit bias explicit. This not only allows us to provide theoretical guarantees for existing methods, but also to solve UDT problems where previous methods fail. Finally, making the link between the dynamic formulation of OT and CycleGAN, we propose a simple approach to solve UDT, and illustrate its properties in two distinct settings.""","""The paper examines the problem of unsupervised domain translation. It poses the problem in a rigorous way for the first time and examines the shortcomings of existing CycleGAN-based methods. Then the authors propose to consider the problem through the lens of Optimal Transport theory and formulate a practical algorithm. The reviewers agree that the paper addresses an important problem, brings clarity to existing methods, proposes an interesting approach and algorithm, and is well-written. However, there was a shared concern about whether the new approach just moves the complexity elsewhere (into the design of the cost function). The authors claim to have addressed this concern in the rebuttal by adding an extra experiment, but the reviewers remained unconvinced. Based on the reviewer discussion, I recommend rejection at this time, but look forward to seeing the revised paper at a future venue."""
2070,"""TriMap: Large-scale Dimensionality Reduction Using Triplets""","['Dimensionality Reduction', 'Triplets', 'Data Visualization', 't-SNE', 'LargeVis', 'UMAP']","""We introduce ``TriMap'': a dimensionality reduction technique based on triplet constraints that preserves the global accuracy of the data better than other commonly used methods such as t-SNE, LargeVis, and UMAP. To quantify the global accuracy, we introduce a score which roughly reflects the relative placement of the clusters rather than the individual points. We empirically show the excellent performance of TriMap on a large variety of datasets in terms of the quality of the embedding as well as the runtime. On our performance benchmarks, TriMap easily scales to millions of points without depleting the memory and clearly outperforms t-SNE, LargeVis, and UMAP in terms of runtime.""","""This paper proposes a new dimensionality reduction technique that tries to preserve the global structure of the data as measured by the relative distances between triplets. As Reviewer 1 noted, the construction of the TriMap algorithm is fairly heuristic, making it difficult to determine why TriMap ought to behave better than existing dimensionality reduction approaches other than through qualitative assessment. Here, I share Reviewer 2's concern that the qualitative behavior of TriMap is difficult to distinguish from existing methods in many of the figures."""
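A triplet-based embedding loss in this spirit is easy to sketch: for a triplet (i, j, k) where i should stay closer to j than to k, penalise embeddings in which the (i, k) similarity dominates, using heavy-tailed similarities. The per-triplet weights and the sampling of triplets in the actual method are more involved; below, `w` is left as an opaque weight vector and everything else is illustrative.

```python
# Weighted triplet loss over a candidate low-dimensional embedding Y.
import numpy as np

def triplet_loss(Y, triplets, w):
    i, j, k = triplets.T
    s_ij = 1.0 / (1.0 + ((Y[i] - Y[j]) ** 2).sum(axis=1))  # heavy-tailed similarity
    s_ik = 1.0 / (1.0 + ((Y[i] - Y[k]) ** 2).sum(axis=1))
    # small when pair (i, j) is much closer than (i, k) in the embedding
    return float((w * s_ik / (s_ij + s_ik)).sum())

rng = np.random.default_rng(0)
Y = rng.standard_normal((100, 2))                # candidate 2-D embedding
triplets = rng.integers(0, 100, size=(500, 3))   # (anchor, near, far) index triplets
print(triplet_loss(Y, triplets, w=np.ones(500)))
```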
2071,"""AdvCodec: Towards A Unified Framework for Adversarial Text Generation""","['adversarial text generation', 'tree-autoencoder', 'human evaluation']","""Machine learning (ML), especially deep neural networks (DNNs), has been widely applied to real-world applications. However, recent studies show that DNNs are vulnerable to carefully crafted \emph{adversarial examples} which only deviate from the original data by a small magnitude of perturbation. While there has been great interest in generating imperceptible adversarial examples in continuous data domains (e.g. image and audio) to explore the model vulnerabilities, generating \emph{adversarial text} in the discrete domain is still challenging. The main contribution of this paper is to propose a general targeted attack framework \advcodec for adversarial text generation which addresses the challenge of the discrete input space and can be easily adapted to general natural language processing (NLP) tasks. In particular, we propose a tree based autoencoder to encode discrete text data into a continuous vector space, upon which we optimize the adversarial perturbation. With the tree based decoder, it is possible to ensure the grammatical correctness of the generated text; and the tree based encoder enables flexibility of making manipulations on different levels of text, such as the sentence (\advcodecsent) and word (\advcodecword) levels. We consider multiple attacking scenarios, including appending an adversarial sentence or adding unnoticeable words to a given paragraph, to achieve arbitrary \emph{targeted attack}. To demonstrate the effectiveness of the proposed method, we consider two of the most representative NLP tasks: sentiment analysis and question answering (QA). Extensive experimental results show that \advcodec has successfully attacked both tasks. In particular, our attack causes a BERT-based sentiment classifier's accuracy to drop from pseudo-formula to pseudo-formula, and a BERT-based QA model's F1 score to drop from pseudo-formula to pseudo-formula (with best targeted attack F1 score as pseudo-formula). Furthermore, we show that the white-box generated adversarial texts can transfer across other black-box models, shedding light on an effective way to examine the robustness of existing NLP models.""","""This paper proposes a method for generating text examples that are adversarial against a known text model, based on modifying the internal representations of a tree-structured autoencoder. I side with the two more confident reviewers, and argue that this paper doesn't offer sufficient evidence that this method is useful in the proposed setting. I'm particularly swayed by R1, who raises some fairly basic concerns about the value of adversarial example work of this kind, where the generated examples look unnatural in most cases, and where label preservation is not guaranteed. I'm also concerned by the fact, which came up repeatedly in the reviews, that the authors claimed that using a tree-structured decoder encourages the model to generate grammatical sentences; I see no reason why this should be the case in the setting described here, and the paper doesn't seem to offer evidence to back this up."""
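The general attack pattern described here, independent of the tree-structured architecture, is latent-space perturbation: encode text into a continuous vector, take gradient steps on that vector to push a victim model towards a target label, then decode. The sketch below uses untrained linear stand-ins for the encoder, decoder, and victim (not the paper's models); step size and iteration count are arbitrary.

```python
# Generic targeted attack in the continuous latent space of an autoencoder.
import torch
import torch.nn as nn

enc = nn.Linear(128, 32)     # stand-in encoder (text features -> latent)
dec = nn.Linear(32, 128)     # stand-in decoder (latent -> text features)
victim = nn.Linear(128, 2)   # stand-in sentiment classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 128)      # features of the original sentence
target = torch.tensor([1])   # desired (wrong) prediction
z = enc(x).detach().requires_grad_(True)

for _ in range(50):
    loss = loss_fn(victim(dec(z)), target)  # push prediction towards the target
    grad, = torch.autograd.grad(loss, z)
    z = (z - 0.1 * grad).detach().requires_grad_(True)

adv_text_features = dec(z).detach()  # would be decoded back to tokens
```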
2072,"""Self-Supervised GAN Compression""","['compression', 'pruning', 'generative adversarial networks', 'GAN']","""Deep learning's success has led to larger and larger models to handle more and more complex tasks; trained models can contain millions of parameters. These large models are compute- and memory-intensive, which makes it a challenge to deploy them with minimized latency, throughput, and storage requirements. Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks. In this paper, we show that a standard model compression technique, weight pruning, cannot be applied to GANs using existing methods. We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator. We show that this framework delivers compelling performance at high degrees of sparsity, generalizes well to new tasks and models, and enables meaningful comparisons between different pruning granularities.""","""The paper develops a new method for pruning generators of GANs. It has received a mixed set of reviews. Basically, the reviewers agree that the problem is interesting and appreciate that the authors have tried some baseline approaches and verified/demonstrated that they do not work. Where the reviewers diverge is on whether the authors have been successful with the new method. In the opinion of the first reviewer, there is little value in achieving low levels (e.g. 50%) of fine-grained sparsity, while the authors have not managed to achieve good performance with filter-level sparsity (as evidenced by Figure 7, Table 3 as well as figures in the appendices). The authors admit that the sparsity levels achieved with their approach cannot be turned into speed improvement without future work. Furthermore, as pointed out by the first reviewer, the comparison with prior art, in particular with the LIT method, which has been reported to successfully compress the same GAN, is missing and the results of LIT have been misrepresented. While the authors argue that their pruning is an ""orthogonal technique"" and can be applied on top of LIT, this is not verified in any way. In practice, combination of different compression techniques is known to be non-trivial, since they aim to explain the same types of redundancies. Overall, while this paper comes close, the problems highlighted by the first reviewer have not been resolved convincingly enough for acceptance."""
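The supervision signal the abstract describes can be sketched as distillation: a smaller (or pruned) student generator is trained to match the dense teacher's outputs, with the teacher's frozen discriminator providing an additional adversarial term. Architectures, the softplus adversarial loss, and the 0.1 weight below are illustrative assumptions, not the paper's exact setup.

```python
# Discriminator-supervised distillation of a compressed generator.
import torch
import torch.nn as nn

z_dim, out_dim = 16, 64
teacher_G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))
student_G = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))  # compressed
D = nn.Sequential(nn.Linear(out_dim, 64), nn.ReLU(), nn.Linear(64, 1))              # pre-trained, frozen

for p in list(teacher_G.parameters()) + list(D.parameters()):
    p.requires_grad_(False)

opt = torch.optim.Adam(student_G.parameters(), lr=1e-4)
for _ in range(100):
    z = torch.randn(64, z_dim)
    fake_s, fake_t = student_G(z), teacher_G(z)
    recon = ((fake_s - fake_t) ** 2).mean()             # match the dense teacher
    adv = nn.functional.softplus(-D(fake_s)).mean()     # also fool the trained discriminator
    loss = recon + 0.1 * adv
    opt.zero_grad()
    loss.backward()
    opt.step()
```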
2073,"""Learning Disentangled Representations for CounterFactual Regression""","['Counterfactual Regression', 'Causal Effect Estimation', 'Selection Bias', 'Off-policy Learning']","""We consider the challenge of estimating treatment effects from observational data; and point out that, in general, only some factors based on the observed covariates X contribute to selection of the treatment T, and only some to determining the outcomes Y. We model this by considering three underlying sources of {X, T, Y} and show that explicitly modeling these sources offers great insight to guide designing models that better handle selection bias. This paper is an attempt to conceptualize this line of thought and provide a path to explore it further. In this work, we propose an algorithm to (1) identify disentangled representations of the above-mentioned underlying factors from any given observational dataset D and (2) leverage this knowledge to reduce, as well as account for, the negative impact of selection bias on estimating the treatment effects from D. Our empirical results show that the proposed method achieves state-of-the-art performance in both individual and population based evaluation measures.""","""The paper proposes a new way of estimating treatment effects from observational data. The text is clear and experiments support the proposed model."""
2074,"""Finding Winning Tickets with Limited (or No) Supervision""","['Lottery Tickets Hypothesis', 'Self-Supervised Learning', 'Deep Learning', 'Image Recognition']","""The lottery ticket hypothesis argues that neural networks contain sparse subnetworks, which, if appropriately initialized (the winning tickets), are capable of matching the accuracy of the full network when trained in isolation. Empirically made in different contexts, such an observation opens interesting questions about the dynamics of neural network optimization and the importance of their initializations. However, the properties of winning tickets are not well understood, especially the importance of supervision in the generating process. In this paper, we aim to answer the following open questions: can we find winning tickets with few data samples or few labels? can we even obtain good tickets without supervision? Perhaps surprisingly, we provide a positive answer to both, by generating winning tickets with limited access to data, or with self-supervision---thus without using manual annotations---and then demonstrating the transferability of the tickets to challenging classification tasks such as ImageNet. ""","""The paper studies finding winning tickets with limited supervision. The authors consider a variety of different settings. An interesting contribution is to show that findings on small datasets may be misleading. That said, all three reviewers agree that novelty is limited, and some found inconsistencies and passages that were hard to read: Based on this, it seems the paper doesn't quite meet the ICLR bar in its current form. """
2075,"""On Identifiability in Transformers""","['Self-attention', 'interpretability', 'identifiability', 'BERT', 'Transformer', 'NLP', 'explanation', 'gradient attribution']","""In this paper we delve deep in the Transformer architecture by investigating two of its core components: self-attention and contextual embeddings. In particular, we study the identifiability of attention weights and token embeddings, and the aggregation of context into hidden tokens. We show that, for sequences longer than the attention head dimension, attention weights are not identifiable. We propose effective attention as a complementary tool for improving explanatory interpretations based on attention. Furthermore, we show that input tokens retain to a large degree their identity across the model. We also find evidence suggesting that identity information is mainly encoded in the angle of the embeddings and gradually decreases with depth. Finally, we demonstrate strong mixing of input information in the generation of contextual embeddings by means of a novel quantification method based on gradient attribution. Overall, we show that self-attention distributions are not directly interpretable and present tools to better understand and further investigate Transformer models. ""","""This paper investigates the identifiability of attention distributions in the context of Transformer architectures. The main result is that, if the sentence length is long enough, difference choices of attention weights may result in the same contextual embeddings (i.e. the attention weights are not identifiable). A notion of ""effective attention"" is proposed that projects out the null space from attention weights. In the discussion period, there were some doubts about the technical correctness of the identifiability result that were clarified by the authors. The attention matrix A results from a softmax transformation, therefore each of its rows is constrained to be in the probability simplex -- i.e. we have A >= 0 (elementwise) and A1 = 1. In the present version of the paper, when analyzing the null space of T (Eqs. 4 and 5) this constraint on A is not taken into account. In particular, in Eq. 5 the existence of a \tilde{A} in the null space of T is not clear at all, since for (A + \tilde{A})T = AT to hold we would need to require, besides A >= 0 and A1 = 1, that A + \tilde{A} >= 0 and A1 + \tilde{A}1 = 1, i.e. \tilde{A} <= A (elementwise) \tilde{A}1 = 0 The present version of the paper does not make it clear that the intersection of the null space of T with these two constraints is non-empty in general -- which would be necessary for attention not to be identifiable, one of the main points of the paper. The authors acknowledged this concern and provided a proof. I suggest the following simplified version of their proof: We're looking for a vector \tilde{A} satisfying (1) \tilde{A}*T = 0 (to be in the null space of T) and (2) \tilde{A}*1 = 0 (3) \tilde{A} >= -A (to make sure A + \tilde{A} are in the probability simplex). Conditions (1) and (2) are equivalent to require \tilde{A} to be in the null space of [T; 1]. It is fine to assume this null space exists for a general T (it will be a linear subspace of dimension ds - dv - 1). To take into account condition (3) heres a simpler proof: since A is a probability vector coming from a softmax transformation (hence it is strictly > 0 elementwise), there is some epsilon > 0 such that any point in the ball centered on 0 with radius epsilon is >= -A. 
Since the null space of [T; 1] contains 0, any point \tilde{A} in the intersection of this null space with the epsilon-ball above satisfies (1), (2), and (3). This should work for any ds - dv > 1 and as long as A is not a one-hot distribution (otherwise it collapses to a single point \tilde{A} = 0). I am less convinced about the justification for using an effective attention which is not in the probability simplex, though (not even in the null space of [T; 1] but only null(T)). That part deserves more clarification in the paper. I recommend acceptance of this paper provided these clarifications are made and the proof is included in the final version. """
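The non-identifiability argument in the meta-review above is easy to check numerically. Below is a minimal sketch (ours, not the paper's code) assuming a single attention head with value matrix T of shape (ds, dv) and sequence length ds > dv + 1: it constructs a second, strictly different attention row that produces exactly the same context vector.

```python
import numpy as np

rng = np.random.default_rng(0)
ds, dv = 8, 4                                   # sequence length > head dim + 1
T = rng.normal(size=(ds, dv))                   # value matrix of one head
A = np.exp(rng.normal(size=ds)); A /= A.sum()   # a softmax attention row

# Left null space of [T ; 1]: vectors u with u @ T = 0 and u @ 1 = 0.
M = np.hstack([T, np.ones((ds, 1))])            # ds x (dv + 1)
u = np.linalg.svd(M.T)[2][-1]                   # a basis vector of that null space

# Scale u so that A + eps*u stays elementwise >= 0 (condition (3) above).
eps = 0.5 * A.min() / np.abs(u).max()
A2 = A + eps * u                                # a different valid attention row
assert np.all(A2 >= 0) and np.isclose(A2.sum(), 1.0)
print(np.allclose(A @ T, A2 @ T))               # True: identical context vector
```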
2076,"""RoBERTa: A Robustly Optimized BERT Pretraining Approach""","['Deep learning', 'language representation learning', 'natural language understanding']","""Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE, SQuAD, SuperGLUE and XNLI. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.""","""This paper conducts an extensive study of training BERT and shows that its performance can be improved significantly by choosing a better training setup (e.g., hyperparameters, objective functions). I think this paper clearly offers a better understanding of the importance of tuning a language model to get the best performance on downstream tasks. However, most of the findings are obvious (careful tuning helps, more data helps). I think the novelty and technical contributions are rather limited for a conference such as ICLR. These concerns are also shared by all the reviewers. The review scores are borderline, so I recommend to reject the paper."""
2077,"""Projected Canonical Decomposition for Knowledge Base Completion""","['knowledge base completion', 'adagrad']","""The leading approaches to tensor completion and link prediction are based on the canonical polyadic (CP) decomposition of tensors. While these approaches were originally motivated by low rank approximations, the best performances are usually obtained for ranks as high as permitted by computation constraints. For large scale factorization problems where the factor dimensions have to be kept small, the performances of these approaches tend to drop drastically. The other main tensor factorization model, Tucker decomposition, is more flexible than CP for fixed factor dimensions, so we expect Tucker-based approaches to yield better performance under strong constraints on the number of parameters. However, as we show in this paper through experiments on standard benchmarks of link prediction in knowledge bases, ComplEx, a variant of CP, achieves similar performances to recent approaches based on Tucker decomposition on all operating points in terms of number of parameters. In a control experiment, we show that one problem in the practical application of Tucker decomposition to large-scale tensor completion comes from the adaptive optimization algorithms based on diagonal rescaling, such as Adagrad. We present a new algorithm for a constrained version of Tucker which implicitly applies Adagrad to a CP-based model with an additional projection of the embeddings onto a fixed lower dimensional subspace. The resulting Tucker-style extension of ComplEx obtains similar best performances as ComplEx, with substantial gains on some datasets under constraints on the number of parameters.""","""The paper proposes a tensor decomposition method that interpolates between Tucker and CP decompositions. The authors also propose an optimization algorithms (AdaImp) and argue that it has superior performance against AdaGrad in this tesnor decomposition task. The approach is evaluated on some NLP tasks. The reviewers raised some concerns related to clarity, novelty, and strength of experiments. As part of addressing reviewers concerns, the authors reported their own results on MurP and Tucker (instead of quoting results from reference papers). While the reviewers greatly appreciated these experiments as well as authors' response to their questions and feedback, the concerns largely remained unresolved. In particular, R2 found the gain achieved by AdaImp not significantly large compared to Adagrad. In addition, R2 found very limited evaluation on how AdaImp outperforms Adagrad (thus little evidence to support that claim). Finally, AdaImp lacks any theoretical analysis (unlike Adagrad)."""
2078,"""PAC Confidence Sets for Deep Neural Networks via Calibrated Prediction""","['PAC', 'confidence sets', 'classification', 'regression', 'reinforcement learning']","""We propose an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees---i.e., the confidence set for a given input contains the true label with high probability. We demonstrate how our approach can be used to construct PAC confidence sets on ResNet for ImageNet, a visual object tracking model, and a dynamics model for the half-cheetah reinforcement learning problem.""","""This paper describes a method for bounding the confidence around predictions made by deep networks. Reviewers agree that this result is of technical interest to the community, and with the added reorganization and revisions described by the authors, they and the AC agree the paper should be accepted. """
2079,"""Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction""",[],"""With the recent success and popularity of pre-trained language models (LMs) in natural language processing, there has been a rise in efforts to understand their inner workings. In line with such interest, we propose a novel method that assists us in investigating the extent to which pre-trained LMs capture the syntactic notion of constituency. Our method provides an effective way of extracting constituency trees from the pre-trained LMs without training. In addition, we report intriguing findings in the induced trees, including the fact that pre-trained LMs outperform other approaches in correctly demarcating adverb phrases in sentences.""","""This paper presents results of looking at the inside of pre-trained language models to capture and extract syntactic constituency. Reviewers initially had neutral to positive comments, and after the author rebuttal which addressed some of the major questions and concerns, their scores were raised to reflect their satisfaction with the response and the revised paper. Reviewer discussions followed in which they again expressed that they became more positive that the paper makes novel and interesting contributions. I thank the authors for submitting this paper to ICLR and look forward to seeing it at the conference.."""
2080,"""End-to-end learning of energy-based representations for irregularly-sampled signals and images""","['end-to-end-learning', 'irregularly-sampled data', 'energy representations', 'optimal interpolation']","""For numerous domains, including for instance earth observation, medical imaging, astrophysics,..., available image and signal datasets often irregular space-time sampling patterns and large missing data rates. These sampling properties is a critical issue to apply state-of-the-art learning-based (e.g., auto-encoders, CNNs,...) to fully benefit from the available large-scale observations and reach breakthroughs in the reconstruction and identification of processes of interest. In this paper, we address the end-to-end learning of representations of signals, images and image sequences from irregularly-sampled data, {\em i.e.} when the training data involved missing data. From an analogy to Bayesian formulation, we consider energy-based representations. Two energy forms are investigated: one derived from auto-encoders and one relating to Gibbs energies. The learning stage of these energy-based representations (or priors) involve a joint interpolation issue, which resorts to solving an energy minimization problem under observation constraints. Using a neural-network-based implementation of the considered energy forms, we can state an end-to-end learning scheme from irregularly-sampled data. We demonstrate the relevance of the proposed representations for different case-studies: namely, multivariate time series, 2{\sc } images and image sequences.""","""This work looks at ways to fill in incomplete data, through two different energy terms. Reviewers find the work interesting, however it is very poorly written and nowhere near ready for publication. This comes on top of poorly stated motivation and insufficient comparison to prior work. Authors have chosen not to answer the reviewers' comments. We recommend rejection."""
2081,"""Natural Image Manipulation for Autoregressive Models Using Fisher Scores""","['fisher score', 'generative models', 'image interpolation']","""Deep autoregressive models are one of the most powerful models that exist today which achieve state-of-the-art bits per dim. However, they lie at a strict disadvantage when it comes to controlled sample generation compared to latent variable models. Latent variable models such as VAEs and normalizing flows allow meaningful semantic manipulations in latent space, which autoregressive models do not have. In this paper, we propose using Fisher scores as a method to extract embeddings from an autoregressive model to use for interpolation and show that our method provides more meaningful sample manipulation compared to alternate embeddings such as network activations.""","""The paper proposes learning a latent embedding for image manipulation for PixelCNN by using Fisher scores projected to a low-dimensional space. The reviewers have several concerns about this paper: * Novelty * Random projection doesnt learn useful representation * Weak evaluations Since two expert reviewers are negative about this paper, I cannot recommend acceptance at this stage. """
2082,"""Compositional Language Continual Learning""","['Compositionality', 'Continual Learning', 'Lifelong Learning', 'Sequence to Sequence Modeling']","""Motivated by the human's ability to continually learn and gain knowledge over time, several research efforts have been pushing the limits of machines to constantly learn while alleviating catastrophic forgetting. Most of the existing methods have been focusing on continual learning of label prediction tasks, which have fixed input and output sizes. In this paper, we propose a new scenario of continual learning which handles sequence-to-sequence tasks common in language learning. We further propose an approach to use label prediction continual learning algorithm for sequence-to-sequence continual learning by leveraging compositionality. Experimental results show that the proposed method has significant improvement over state-of-the-art methods. It enables knowledge transfer and prevents catastrophic forgetting, resulting in more than 85% accuracy up to 100 stages, compared with less than 50% accuracy for baselines in instruction learning task. It also shows significant improvement in machine translation task. This is the first work to combine continual learning and compositionality for language learning, and we hope this work will make machines more helpful in various tasks.""","""The paper addresses the task of continual learning in NLP for seq2seq style tasks. The key idea of the proposed method is to enable the network to represent syntactic and semantic knowledge separately, which allows the neural network to leverage compositionality for knowledge transfer and also solves the problem of catastrophic forgetting. The paper has been improved substantially after the reviewers' comments and also obtains good results on benchmark tasks. The only concern is that the evaluation is on artificial datasets. In future, the authors should try to include more evaluation on real datasets (however, this is also limited by availability of such datasets). As of now, I'm recommending an Acceptance."""
2083,"""MODELLING BIOLOGICAL ASSAYS WITH ADAPTIVE DEEP KERNEL LEARNING""","['few-shot learning', 'few-shot regression', 'deep kernel learning', 'biological assay modelling', 'drug discovery']","""Due to the significant costs of data generation, many prediction tasks within drug discovery are by nature few-shot regression (FSR) problems, including accurate modelling of biological assays. Although a number of few-shot classification and reinforcement learning methods exist for similar applications, we find relatively few FSR methods meeting the performance standards required for such tasks under real-world constraints. Inspired by deep kernel learning, we develop a novel FSR algorithm that is better suited to these settings. Our algorithm consists of learning a deep network in combination with a kernel function and a differentiable kernel algorithm. As the choice of the kernel is critical, our algorithm learns to find the appropriate one for each task during inference. It thus performs more effectively with complex task distributions, outperforming current state-of-the-art algorithms on both toy and novel, real-world benchmarks that we introduce herein. By introducing novel benchmarks derived from biological assays, we hope that the community will progress towards the development of FSR algorithms suitable for use in noisy and uncertain environments such as drug discovery.""","""This work applies deep kernel learning to the problem of few shot regression for modeling biological assays. To deal with sparse data on new tasks, the authors propose to adapt the learned kernel to each task. Reviews were mixed about the method and experiments, some reviewers were satisfied with the author rebuttal while others did not support acceptance during the discussion period. Some reviewers ultimately felt that the experimental results were too weak to warrant publication. On the binding task the method is comparable with simpler baselines, and some felt that the gains on antibacterial were unconvincing. Other reviewers felt that there remained simpler baselines to compare with, for example ablating the affects of learning the kernel with simple hand picking one. While authors commented they tried this, there were no details given on the results or what exactly they tried. Based on the reviewer discussion, the work feels too preliminary in its current form to warrant publication in ICLR. However, given that there are clearly some interesting ideas proposed in this work, I recommend resubmitting with stronger experimental evidence that the method helps over baselines."""
2084,"""Metagross: Meta Gated Recursive Controller Units for Sequence Modeling""","['Deep Learning', 'Natural Language Processing', 'Recurrent Neural Networks']","""This paper proposes Metagross (Meta Gated Recursive Controller), a new neural sequence modeling unit. Our proposed unit is characterized by recursive parameterization of its gating functions, i.e., gating mechanisms of Metagross are controlled by instances of itself, which are repeatedly called in a recursive fashion. This can be interpreted as a form of meta-gating and recursively parameterizing a recurrent model. We postulate that our proposed inductive bias provides modeling benefits pertaining to learning with inherently hierarchically-structured sequence data (e.g., language, logical or music tasks). To this end, we conduct extensive experiments on recursive logic tasks (sorting, tree traversal, logical inference), sequential pixel-by-pixel classification, semantic parsing, code generation, machine translation and polyphonic music modeling, demonstrating the widespread utility of the proposed approach, i.e., achieving state-of-the-art (or close) performance on all tasks.""","""This paper proposes a recurrent architecture based on a recursive gating mechanism. The reviewers leaned towards rejection on the basis of questions regarding novelty, analysis, and the experimental setting. Surprisingly, the authors chose not to engage in discussion, as all reviewers seems pretty open to having their minds changed. If none of the reviewers will champion the paper, and the authors cannot be bothered to champion their own work, I see no reason to recommend acceptance."""
2085,"""Continuous Graph Flow""","['graph flow', 'normalizing flow', 'continuous message passing', 'reversible graph neural networks']","""In this paper, we propose Continuous Graph Flow, a generative continuous flow based method that aims to model complex distributions of graph-structured data. Once learned, the model can be applied to an arbitrary graph, defining a probability density over the random variables represented by the graph. It is formulated as an ordinary differential equation system with shared and reusable functions that operate over the graphs. This leads to a new type of neural graph message passing scheme that performs continuous message passing over time. This class of models offers several advantages: a flexible representation that can generalize to variable data dimensions; ability to model dependencies in complex data distributions; reversible and memory-efficient; and exact and efficient computation of the likelihood of the data. We demonstrate the effectiveness of our model on a diverse set of generation tasks across different domains: graph generation, image puzzle generation, and layout generation from scene graphs. Our proposed model achieves significantly better performance compared to state-of-the-art models.""","""Novelty of the proposed model is low. Experimental results are weak."""
2086,"""Towards Verified Robustness under Text Deletion Interventions""","['natural language processing', 'specification', 'verification', 'model undersensitivity', 'adversarial', 'interval bound propagation']","""Neural networks are widely used in Natural Language Processing, yet despite their empirical successes, their behaviour is brittle: they are both over-sensitive to small input changes, and under-sensitive to deletions of large fractions of input text. This paper aims to tackle under-sensitivity in the context of natural language inference by ensuring that models do not become more confident in their predictions as arbitrary subsets of words from the input text are deleted. We develop a novel technique for formal verification of this specification for models based on the popular decomposable attention mechanism by employing the efficient yet effective interval bound propagation (IBP) approach. Using this method we can efficiently prove, given a model, whether a particular sample is free from the under-sensitivity problem. We compare different training methods to address under-sensitivity, and compare metrics to measure it. In our experiments on the SNLI and MNLI datasets, we observe that IBP training leads to a significantly improved verified accuracy. On the SNLI test set, we can verify 18.4% of samples, a substantial improvement over only 2.8% using standard training.""","""This paper deals with the under-sensitivity problem in natural language inference tasks. An interval bound propagation (IBP) approach is applied to predict the confidence of the model when a subsets of words from the input text are deleted. The paper is well written and easy to follow. The authors give detailed rebuttal and 3 of the 4 reviewers lean to accept the paper."""
2087,"""Multi-agent Reinforcement Learning for Networked System Control""","['deep reinforcement learning', 'multi-agent reinforcement learning', 'decision and control']","""This paper considers multi-agent reinforcement learning (MARL) in networked system control. Specifically, each agent learns a decentralized control policy based on local observations and messages from connected neighbors. We formulate such a networked MARL (NMARL) problem as a spatiotemporal Markov decision process and introduce a spatial discount factor to stabilize the training of each local agent. Further, we propose a new differentiable communication protocol, called NeurComm, to reduce information loss and non-stationarity in NMARL. Based on experiments in realistic NMARL scenarios of adaptive traffic signal control and cooperative adaptive cruise control, an appropriate spatial discount factor effectively enhances the learning curves of non-communicative MARL algorithms, while NeurComm outperforms existing communication protocols in both learning efficiency and control performance.""","""The paper focuses on multi-agent reinforcement learning applications in network systems control settings. A key consideration is the spatial layout of such systems, and the authors propose a problem formulation designed to leverage structural assumptions (e.g., locality). The authors derive a novel approach / communication protocol for these settings, and demonstrate strong performance and novel insights in realistic applications. Reviewers particularly commended the realistic applications explored here. Clarifying questions about the setting, experiments, and results were addressed in the rebuttal, and the resulting paper is judged to provide valuable novel insights."""
2088,"""RL-LIM: Reinforcement Learning-based Locally Interpretable Modeling""","['Interpretability', 'Explanable AI', 'Explanability']","""Understanding black-box machine learning models is important towards their widespread adoption. However, developing globally interpretable models that explain the behavior of the entire model is challenging. An alternative approach is to explain black-box models through explaining individual prediction using a locally interpretable model. In this paper, we propose a novel method for locally interpretable modeling -- Reinforcement Learning-based Locally Interpretable Modeling (RL-LIM). RL-LIM employs reinforcement learning to select a small number of samples and distill the black-box model prediction into a low-capacity locally interpretable model. Training is guided with a reward that is obtained directly by measuring agreement of the predictions from the locally interpretable model with the black-box model. RL-LIM near-matches the overall prediction performance of black-box models while yielding human-like interpretability, and significantly outperforms state of the art locally interpretable models in terms of overall prediction performance and fidelity. ""","""The paper aims to find locally interpretable models, such that the local models are fit (w.r.t. the ground truth) and faithful (w.r.t. the global underlying black box model). The contribution of the paper is that the local model is trained from a subset of points, selected via an optimized importance weight function. The difference compared to Ren et al. (cited) is that the IW function is non-differentiable and optimized using Reinforcement Learning. A first concern (Rev#1, Rev#2) regards the positioning of the paper w.r.t. RL, as the actual optimization method could be any black-box optimization method: one wants to find the IW that maximizes the faithfulness. The rebuttal makes a good job in explaining the impact of using a non-differentiable IW function. A second concern (Rev#2) regards the interpretability of the IW underlying the local interpretable model. There is no doubt that the paper was considerably improved during the rebuttal period. However, the improvements raise additional questions (e.g. about selecting the IW depending on the distance to the probes). I encourage the authors to continue on this promising line of search. """
2089,"""On Solving Minimax Optimization Locally: A Follow-the-Ridge Approach""","['minimax optimization', 'smooth differentiable games', 'local convergence', 'generative adversarial networks', 'optimization']","""Many tasks in modern machine learning can be formulated as finding equilibria in sequential games. In particular, two-player zero-sum sequential games, also known as minimax optimization, have received growing interest. It is tempting to apply gradient descent to solve minimax optimization given its popularity and success in supervised learning. However, it has been noted that naive application of gradient descent fails to find some local minimax and can converge to non-local-minimax points. In this paper, we propose Follow-the-Ridge (FR), a novel algorithm that provably converges to and only converges to local minimax. We show theoretically that the algorithm addresses the notorious rotational behaviour of gradient dynamics, and is compatible with preconditioning and positive momentum. Empirically, FR solves toy minimax problems and improves the convergence of GAN training compared to the recent minimax optimization algorithms. ""","""The submission proposes a novel solution for minimax optimization which has strong theoretical and empirical results as well as broad relevance for the community. The approach, Follow-the-Ridge, has theoretical guarantees and is compatible with preconditioning and momentum optimization strategies. The paper is well-written and the authors engaged in a lengthy discussion with the reviewers, leading to a clearer understanding of the paper for all. The reviews all recommend acceptance. """
2090,"""Feature-map-level Online Adversarial Knowledge Distillation""","['Computer vision', 'Image classification', 'Knowledge distillation', 'Deep Learning']","""Feature maps contain rich information about image intensity and spatial correlation. However, previous online knowledge distillation methods only utilize the class probabilities. Thus in this paper, we propose an online knowledge distillation method that transfers not only the knowledge of the class probabilities but also that of the feature map using the adversarial training framework. We train multiple networks simultaneously by employing discriminators to distinguish the feature map distributions of different networks. Each network has its corresponding discriminator which discriminates the feature map from its own as fake while classifying that of the other network as real. By training a network to fool the corresponding discriminator, it can learn the other networks feature map distribution. Discriminators and networks are trained concurrently in a minimax two-player game. Also, we propose a novel cyclic learning scheme for training more than two networks together. We have applied our method to various network architectures on the classification task and discovered a significant improvement of performance especially in the case of training a pair of a small network and a large one.""","""The paper received scores of WR (R1) WR (R2) WA (R3), although R3 stated that they were borderline. The main issues were (i) lack of novelty and (ii) insufficient experiments. The AC has closely look at the reviews/comments/rebuttal and examined the paper. Unfortunately, the AC feels that with no-one strongly advocating for acceptance, the paper cannot be accepted at this time. The authors should use the feedback from reviewers to improve their paper. """
2091,"""Privacy-preserving Representation Learning by Disentanglement""",[],"""Deep learning and latest machine learning technology heralded an era of success in data analysis. Accompanied by the ever increasing performance, reaching super-human performance in many areas, is the requirement of amassing more and more data to train these models. Often ignored or underestimated, the big data curation is associated with the risk of privacy leakages. The proposed approach seeks to mitigate these privacy issues. In order to sanitize data from sensitive content, we propose to learn a privacy-preserving data representation by disentangling into public and private part, with the public part being shareable without privacy infringement. The proposed approach deals with the setting where the private features are not explicit, and is estimated though the course of learning. This is particularly appealing, when the notion of sensitive attribute is ``fuzzy''. We showcase feasibility in terms of classification of facial attributes and identity on the CelebA dataset. The results suggest that private component can be removed in the cases where the the downstream task is known a priori (i.e., ``supervised''), and the case where it is not known a priori (i.e., ``weakly-supervised'').""","""The paper leverages variational auto-encoders (VAEs) and disentanglement to generate data representations that hide sensitive attributes. The reviewers have identified several issues with the paper, including its false claims or statements about differential privacy, unclear privacy guarantee, and lack of related work discussion. The authors have not directly addressed these issues."""
2092,"""Robust Federated Learning Through Representation Matching and Adaptive Hyper-parameters""","['federated learning', 'hyper-parameter tuning', 'regularization']",""" Federated learning is a distributed, privacy-aware learning scenario which trains a single model on data belonging to several clients. Each client trains a local model on its data and the local models are then aggregated by a central party. Current federated learning methods struggle in cases with heterogeneous client-side data distributions which can quickly lead to divergent local models and a collapse in performance. Careful hyper-parameter tuning is particularly important in these cases but traditional automated hyper-parameter tuning methods would require several training trials which is often impractical in a federated learning setting. We describe a two-pronged solution to the issues of robustness and hyper-parameter tuning in federated learning settings. We propose a novel representation matching scheme that reduces the divergence of local models by ensuring the feature representations in the global (aggregate) model can be derived from the locally learned representations. We also propose an online hyper-parameter tuning scheme which uses an online version of the REINFORCE algorithm to find a hyper-parameter distribution that maximizes the expected improvements in training loss. We show on several benchmarks that our two-part scheme of local representation matching and global adaptive hyper-parameters significantly improves performance and training robustness.""","""This manuscript proposes strategies to improve both the robustness and accuracy of federated learning. Two proposals are online reinforcement learning for adaptive hyperparameter search, and local distribution matching to synchronize the learning trajectories of different local models. The reviewers and AC agree that the problem studied is timely and interesting, as it addresses known issues with federated learning. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty and clarity of the conceptual and empirical results. Taken together, the AC's opinion is that the paper may not be ready for publication."""
2093,"""Smoothness and Stability in GANs""","['generative adversarial networks', 'stability', 'smoothness', 'convex conjugate']","""Generative adversarial networks, or GANs, commonly display unstable behavior during training. In this work, we develop a principled theoretical framework for understanding the stability of various types of GANs. In particular, we derive conditions that guarantee eventual stationarity of the generator when it is trained with gradient descent, conditions that must be satisfied by the divergence that is minimized by the GAN and the generator's architecture. We find that existing GAN variants satisfy some, but not all, of these conditions. Using tools from convex analysis, optimal transport, and reproducing kernels, we construct a GAN that fulfills these conditions simultaneously. In the process, we explain and clarify the need for various existing GAN stabilization techniques, including Lipschitz constraints, gradient penalties, and smooth activation functions.""","""The paper provides a theoretical study of what regularizations should be used in GAN training and why. The main focus is that the conditions on the discriminator that need to be enforced, to get the Lipshitz property of the corresponding function that is optimized for the generator. Quite a few theorems and propositions are provided. As noted by Reviewer3, this adds insight to well-known techniques: the Reviewer1 rightfully notes that this does not lead to any practical conclusion. Moreover, then training of GANs never goes to the optimal discriminator, that could be a weak point; rather than it proceeds in the alternating fashion, and then evolution is governed by the spectra of the local Jacobian (which is briefly mentioned). This is mentioned in future work, but it is not clear at all if the results here can be helpful (or can be generalized). At some point of the paper it gets to ""more theorems mode"" which make it not so easy and motivating to read. The theoretical results at the quantitative level are very interesting. I have looked for a long time on Figure 1: does this support the claims? First my impression was it does not (there are better FID scores for larger learning rates). But in the end, I think it supports: the convergence for a smaller that pseudo-formula learning rate to the same FID indicated the convergence to the same local minima (probably). This is perfectly fine. Oscillations afterwards move us to a stochastic region, where FID oscillates. So, the theory has at least minor confirmation. """
2094,"""Solving Packing Problems by Conditional Query Learning""","['Neural Combinatorial Optimization', 'Reinforcement Learning', 'Packing Problem']","""Neural Combinatorial Optimization (NCO) has shown the potential to solve traditional NP-hard problems recently. Previous studies have shown that NCO outperforms heuristic algorithms in many combinatorial optimization problems such as the routing problems. However, it is less efficient for more complicated problems such as packing, one type of optimization problem that faces mutual conditioned action space. In this paper, we propose a Conditional Query Learning (CQL) method to handle the packing problem for both 2D and 3D settings. By embedding previous actions as a conditional query to the attention model, we design a fully end-to-end model and train it for 2D and 3D packing via reinforcement learning respectively. Through extensive experiments, the results show that our method could achieve lower bin gap ratio and variance for both 2D and 3D packing. Our model improves 7.2% space utilization ratio compared with genetic algorithm for 3D packing (30 boxes case), and reduces more than 10% bin gap ratio in almost every case compared with extant learning approaches. In addition, our model shows great scalability to packing box number. Furthermore, we provide a general test environment of 2D and 3D packing for learning algorithms. All source code of the model and the test environment is released.""","""This paper proposes an end-to-end deep reinforcement learning-based algorithm for the 2D and 3D bin packing problems. Its main contribution is conditional query learning (CQL) which allows effective decision over mutually conditioned action spaces through policy expressed as a sequence of conditional distributions. Efficient neural architectures for modeling of such a policy is proposed. Experiments validate the effectiveness of the algorithm through comparisons with genetic algorithm and vanilla RL baselines. The presentation is clear and the results are interesting, but the novelty seems insufficient for ICLR. The proposed model is based on transformer with the following changes: * encoder: position embedding is removed, state embedding is added to the multi-head attention layer and feed forward layer of the original transformer encoder; * decoder: three decoders one for the three steps, namely selection, rotation and location. * training: actor-critic algorithm"""
2095,"""Ergodic Inference: Accelerate Convergence by Optimisation""","['MCMC', 'variational inference', 'statistical inference']","""Statistical inference methods are fundamentally important in machine learning. Most state-of-the-art inference algorithms are variants of Markov chain Monte Carlo (MCMC) or variational inference (VI). However, both methods struggle with limitations in practice: MCMC methods can be computationally demanding; VI methods may have large bias. In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation. The proposed method can generate low-biased samples by increasing the length of MCMC simulation and optimising the MCMC hyper-parameters, which offers attractive balance between approximation bias and computational efficiency. We show that our method produces promising results on popular benchmarks when compared to recent hybrid methods of MCMC and VI.""","""This paper presents a way of adapting an HMC-based posterior inference algorithm. It's based on two approximations: replacing the entropy of the final state with the entropy of the initial state, and differentiating through the MH acceptance step. Experiments show it is able to sample from some toy distributions and achieves slightly higher log-likelihood on binarized MNIST than competing approaches. The paper is well-written, and the experiments seem pretty reasonable. I don't find the motivations for the aforementioned approximations very convincing. It's claimed that encouraging entropy of P_0 has a similar effect to encouraging entropy of P_T, but it seems easy to come up with situations where the algorithm could ""cheat"" by finding a high-entropy P_0 which leads straight downhill to an atypically high-density region. Similarly, there was some reviewer discussion about whether it's OK to differentiate through the indicator function; while we differentiate through nondifferentiable functions all the time, it makes no sense to differentiate through a discontinuous function. (This is a big part of why adaptive HMC is hard.) This paper has some promising ideas, but overall the reviewers and I don't think this is quite ready. """
2096,"""Understanding Knowledge Distillation in Non-autoregressive Machine Translation""","['knowledge distillation', 'non-autoregressive neural machine translation']","""Non-autoregressive machine translation (NAT) systems predict a sequence of output tokens in parallel, achieving substantial improvements in generation speed compared to autoregressive models. Existing NAT models usually rely on the technique of knowledge distillation, which creates the training data from a pretrained autoregressive model for better performance. Knowledge distillation is empirically useful, leading to large gains in accuracy for NAT models, but the reason for this success has, as of yet, been unclear. In this paper, we first design systematic experiments to investigate why knowledge distillation is crucial to NAT training. We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data. Furthermore, a strong correlation is observed between the capacity of an NAT model and the optimal complexity of the distilled data for the best translation quality. Based on these findings, we further propose several approaches that can alter the complexity of data sets to improve the performance of NAT models. We achieve the state-of-the-art performance for the NAT-based models, and close the gap with the autoregressive baseline on WMT14 En-De benchmark.""","""Main content: Blind review #3 summarized it well, as follows: This paper studies knowledge distillation in the context of non-autoregressive translation. In particular, it is well known that in order to make NAT competitive with AT, one needs to train the NAT system on a distilled dataset from the teacher model. Using initial experiments on EN=>ES/FR/DE, the authors argue that this necessity arises from the overly-multimodal nature of the output distribution, and that the AT teacher model produces a less multimodal distribution that is easier to model with NAT. Based on this, the authors propose two quantities that estimate the complexity (conditional entropy) and faithfulness (cross entropy vs real data), and derive approximations to these based on independence assumptions and an alignment model. The translations from the teacher output are indeed found to be less complex, thereby facilitating easier training for the NAT student model. -- Discussion: Questions were mostly about how robust the results were on other language pairs and random starting points. Authors addressed questions reasonably. One low review came from a reviewer who admitted not knowing the field, and I agree with the other two reviewers. -- Recommendation and justification: I think papers that offer empirically support for scientific insight (giving an ""a-ha!"" reaction), rather than massive engineering efforts to beat the state of the art, are very worthwhile in scientific conferences. This paper meets that criteria for acceptance."""
2097,"""Dynamically Pruned Message Passing Networks for Large-scale Knowledge Graph Reasoning""","['knowledge graph reasoning', 'graph neural networks', 'attention mechanism']","""We propose Dynamically Pruned Message Passing Networks (DPMPN) for large-scale knowledge graph reasoning. In contrast to existing models, embedding-based or path-based, we learn an input-dependent subgraph to explicitly model a sequential reasoning process. Each subgraph is dynamically constructed, expanding itself selectively under a flow-style attention mechanism. In this way, we can not only construct graphical explanations to interpret prediction, but also prune message passing in Graph Neural Networks (GNNs) to scale with the size of graphs. We take the inspiration from the consciousness prior proposed by Bengio to design a two-GNN framework to encode global input-invariant graph-structured representation and learn local input-dependent one coordinated by an attention module. Experiments show the reasoning capability in our model that is providing a clear graphical explanation as well as predicting results accurately, outperforming most state-of-the-art methods in knowledge base completion tasks.""","""The paper presents a graph neural network model inspired by the consciousness prior of Bengio (2017) and implements it by means of two GNN models: the inattentive and the attentive GNN, respectively IGNN and AGNN. The reviewers think - The idea of learning an input-dependent subgraph using GNN seems new. - The proposed way to reduce the complexity by restricting the attention horizon sounds interesting. """
2098,"""Context-Aware Object Detection With Convolutional Neural Networks""","['Object Detection', 'CNN', 'Context', 'CRF']","""Although the state-of-the-art object detection methods are successful in detecting and classifying objects by leveraging deep convolutional neural networks (CNNs), these methods overlook the semantic context which implies the probabilities that different classes of objects occur jointly. In this work, we propose a context-aware CNN (or conCNN for short) that for the first time effectively enforces the semantics context constraints in the CNN-based object detector by leveraging the popular conditional random field (CRF) model in CNN. In particular, conCNN features a context-aware module that naturally models the mean-field inference method for CRF using a stack of common CNN operations. It can be seamlessly plugged into any existing region-based object detection paradigm. Our experiments using COCO datasets showcase that conCNN improves the average precision (AP) of object detection by 2 percentage points, while only introducing negligible extra training overheads.""","""The paper proposes a contextual reasoning module following the approach proposed by the NIPS 2011 paper for object detection. Although the reviewers find the proposed approach reasonable, the experimental results are weak and noisy. Multiple reviewers believe that the paper will benefit from another review cycle, pointing out that the authors response confirmed that multiple additional (or redoing of) experiments are needed. """
2099,"""TWIN GRAPH CONVOLUTIONAL NETWORKS: GCN WITH DUAL GRAPH SUPPORT FOR SEMI-SUPERVISED LEARNING""","['Graph', 'Neural Networks', 'Deep Learning', 'semi-supervised learning']","""Graph Neural Networks as a combination of Graph Signal Processing and Deep Convolutional Networks shows great power in pattern recognition in non-Euclidean domains. In this paper, we propose a new method to deploy two pipelines based on the duality of a graph to improve accuracy. By exploring the primal graph and its dual graph where nodes and edges can be treated as one another, we have exploited the benefits of both vertex features and edge features. As a result, we have arrived at a framework that has great potential in both semisupervised and unsupervised learning.""","""All three reviewers are consistently negative on this paper. Thus a reject is recommended."""
2100,"""Diverse Trajectory Forecasting with Determinantal Point Processes""","['Diverse Inference', 'Generative Models', 'Trajectory Forecasting']","""The ability to forecast a set of likely yet diverse possible future behaviors of an agent (e.g., future trajectories of a pedestrian) is essential for safety-critical perception systems (e.g., autonomous vehicles). In particular, a set of possible future behaviors generated by the system must be diverse to account for all possible outcomes in order to take necessary safety precautions. It is not sufficient to maintain a set of the most likely future outcomes because the set may only contain perturbations of a dominating single outcome (major mode). While generative models such as variational autoencoders (VAEs) have been shown to be a powerful tool for learning a distribution over future trajectories, randomly drawn samples from the learned implicit likelihood model may not be diverse -- the likelihood model is derived from the training data distribution and the samples will concentrate around the major mode of the data. In this work, we propose to learn a diversity sampling function (DSF) that generates a diverse yet likely set of future trajectories. The DSF maps forecasting context features to a set of latent codes which can be decoded by a generative model (e.g., VAE) into a set of diverse trajectory samples. Concretely, the process of identifying the diverse set of samples is posed as DSF parameter estimation. To learn the parameters of the DSF, the diversity of the trajectory samples is evaluated by a diversity loss based on a determinantal point process (DPP). Gradient descent is performed over the DSF parameters, which in turn moves the latent codes of the sample set to find an optimal set of diverse yet likely trajectories. Our method is a novel application of DPPs to optimize a set of items (forecasted trajectories) in continuous space. We demonstrate the diversity of the trajectories produced by our approach on both low-dimensional 2D trajectory data and high-dimensional human motion data.""","""The paper proposes an approach for forecasting diverse object trajectories using determinantal point processes (DPP). Past trajectory is mapped to a latent code and a conditional VAE is used to generate the future trajectories. Instead of using log-likelihood of DPP, the propose method optimizes expected cardinality as a measure for diversity. While there are some concerns about the core method being incremental in novelty over some existing DPP based methods, the context of the paper is different from these papers (ie, diverse trajectories in continuous space) and reviewers have appreciated the empirical improvements over the baselines, in particular over DPP-NLL and DPP-MAP in latent space. """
2101,"""Thinking While Moving: Deep Reinforcement Learning with Concurrent Control""","['deep reinforcement learning', 'continuous-time', 'robotics']","""We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action. Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed. In order to develop an algorithmic framework for such concurrent control problems, we start with a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architectural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must ""think while moving.""""","""This paper studies the setting in reinforcement learning where the next action must be sampled while the current action is still executing. This refers to continuous time problems that are discretised to make them delay-aware in terms of the time taken for action execution. The paper presents adaptions of the Bellman operator and Q-learning to deal with this scenario. This is a problem that is of theoretical interest and also has practical value in many real-world problems. The reviewers found both the problem setting and the proposed solution to be valuable, particularly after the greatly improved technical clarity in the rebuttals. As a result, this paper should be accepted."""
2102,"""Variational Recurrent Models for Solving Partially Observable Control Tasks""","['Reinforcement Learning', 'Deep Learning', 'Variational Inference', 'Recurrent Neural Network', 'Partially Observable', 'Robotic Control', 'Continuous Control']","""In partially observable (PO) environments, deep reinforcement learning (RL) agents often suffer from unsatisfactory performance, since two problems need to be tackled together: how to extract information from the raw observations to solve the task, and how to improve the policy. In this study, we propose an RL algorithm for solving PO tasks. Our method comprises two parts: a variational recurrent model (VRM) for modeling the environment, and an RL controller that has access to both the environment and the VRM. The proposed algorithm was tested in two types of PO robotic control tasks, those in which either coordinates or velocities were not observable and those that require long-term memorization. Our experiments show that the proposed algorithm achieved better data efficiency and/or learned more optimal policy than other alternative approaches in tasks in which unobserved states cannot be inferred from raw observations in a simple manner.""","""The authors propose to decompose control in a POMDP into learning a model of the environment (via a VRNN) and learning a feed-forward policy that has access to both the environment and environment model. They argue that learning the recurrent environment model is easier than learning a recurrent policy. They demonstrate improved performance over existing state-of-the-art approaches on several PO tasks. Reviewers found the motivation for the proposed approach convincing and the experimental results proved the effectiveness of the method. The authors response resolved reviewers concerns, so as a result, I recommend acceptance."""
2103,"""Online Learned Continual Compression with Stacked Quantization Modules""","['continual learning', 'lifelong learning']","""We introduce and study the problem of Online Continual Compression, where one attempts to learn to compress and store a representative dataset from a non i.i.d data stream, while only observing each sample once. This problem is highly relevant for downstream online continual learning tasks, as well as standard learning methods under resource constrained data collection. We propose a new architecture which stacks Quantization Modules (SQM), consisting of a series of discrete autoencoders, each equipped with their own memory. Every added module is trained to reconstruct the latent space of the previous module using fewer bits, allowing the learned representation to become more compact as training progresses. This modularity has several advantages: 1) moderate compressions are quickly available early in training, which is crucial for remembering the early tasks, 2) as more data needs to be stored, earlier data becomes more compressed, freeing memory, 3) unlike previous methods, our approach does not require pretraining, even on challenging datasets. We show several potential applications of this method. We first replace the episodic memory used in Experience Replay with SQM, leading to significant gains on standard continual learning benchmarks using a fixed memory budget. We then apply our method to compressing larger images like those from Imagenet, and show that it is also effective with other modalities, such as LiDAR data.""","""The paper proposes a new problem setup as ""online continual compression"". The proposed idea is a combination of existing techniques and very simple, though interesting. Parts of the algorithm are not clear, and the hierarchy is not well-motivated. Experimental results seem promising but not convincing enough, since it is on a very special setting, the LiDAR experiment is missing quantitative evaluation, and different tasks might introduce different difficulties in this online learning setting. The ablation study is well designed but not discussed enough."""
2104,"""Conditional Flow Variational Autoencoders for Structured Sequence Prediction""","['Variational Inference', 'Normalizing Flows', 'Trajectories']","""Prediction of future states of the environment and interacting agents is a key competence required for autonomous agents to operate successfully in the real world. Prior work for structured sequence prediction based on latent variable models imposes a uni-modal standard Gaussian prior on the latent variables. This induces a strong model bias which makes it challenging to fully capture the multi-modality of the distribution of the future states. In this work, we introduce Conditional Flow Variational Autoencoders (CF-VAE) using our novel conditional normalizing flow based prior to capture complex multi-modal conditional distributions for effective structured sequence prediction. Moreover, we propose two novel regularization schemes which stabilizes training and deals with posterior collapse for stable training and better match to the data distribution. Our experiments on three multi-modal structured sequence prediction datasets -- MNIST Sequences, Stanford Drone and HighD -- show that the proposed method obtains state of art results across different evaluation metrics.""","""The novelty of the proposed work is a very weak factor, the idea has been explored in various forms in previous work."""
2105,"""A Boolean Task Algebra for Reinforcement Learning""","['Reinforcement Learning', 'Transfer', 'Composition', 'Lifelong', 'Multi-task', 'Deep Reinforcement learning']","""We propose a framework for defining a Boolean algebra over the space of tasks. This allows us to formulate new tasks in terms of the negation, disjunction and conjunction of a set of base tasks. We then show that by learning goal-oriented value functions and restricting the transition dynamics of the tasks, an agent can solve these new tasks with no further learning. We prove that by composing these value functions in specific ways, we immediately recover the optimal policies for all tasks expressible under the Boolean algebra. We verify our approach in two domains, including a high-dimensional video game environment requiring function approximation, where an agent first learns a set of base skills, and then composes them to solve a super-exponential number of new tasks. ""","""This paper considers the situation where a set of reinforcement learning tasks are related by means of a Boolean algebra. The tasks considered are restricted to stochastic shortest path problems. The paper shows that learning goal-oriented value functions for subtasks enables the agent to solve new tasks (specified with boolean operations on the goal sets) in a zero-shot fashion. Furthermore, the Boolean operations on tasks are transformed to simple arithmetic operations on the optimal action-value functions, enabling the zero short transfer to a new task to be computationally efficient. This approach to zero-shot transfer is tested in the four room domain without function approximation and a small video game with function approximation. The reviewers found several strengths and weaknesses in the paper. The paper was clearly written. The experiments support the claim that the method supports zero-shot composition of goal-specified tasks. The weaknesses lie in the restrictive assumptions. These assumptions require deterministic transition dynamics, reward functions that only differ on the terminal absorbing states, and having only two different terminal reward values possible across all tasks. These assumptions greatly restrict the applicability of the proposed method. The author response and reviewer comments indicated that some aspects these restrictions can be softened in practice, but the form of composition described in this paper is restrictive. The task restrictions also seem to limit the method's utility on general reinforcement learning problems. The paper falls short of being ready for publication at ICLR. Further justification of the restrictive assumptions is required to convince the readers that the forms of composition considered in this paper are adequately general. """
2106,"""Feature-Robustness, Flatness and Generalization Error for Deep Neural Networks""","['robustness', 'flatness', 'generalization error', 'loss surface', 'deep neural networks', 'feature space']","""The performance of deep neural networks is often attributed to their automated, task-related feature construction. It remains an open question, though, why this leads to solutions with good generalization, even in cases where the number of parameters is larger than the number of samples. Back in the 90s, Hochreiter and Schmidhuber observed that flatness of the loss surface around a local minimum correlates with low generalization error. For several flatness measures, this correlation has been empirically validated. However, it has recently been shown that existing measures of flatness cannot theoretically be related to generalization: if a network uses ReLU activations, the network function can be reparameterized without changing its output in such a way that flatness is changed almost arbitrarily. This paper proposes a natural modification of existing flatness measures that results in invariance to reparameterization. The proposed measures imply a robustness of the network to changes in the input and the hidden layers. Connecting this feature robustness to generalization leads to a generalized definition of the representativeness of data. With this, the generalization error of a model trained on representative data can be bounded by its feature robustness which depends on our novel flatness measure.""","""The authors propose a notion of feature robustness, provide a straightforward decomposition of risk in terms of this robustness measure, and then provide some empirical evidence for their perspective. Across the board, the reviewers raised issues with missing related work, which the authors then addressed. I will point out that some things the authors say about PAC-Bayes are false. E.g., in the rebuttal the authors say that PAC-Bayes is limited to 0-1 error. It is generally trivial to obtain bounds for bounded loss. For unbounded loss functions, there are bounds based on, e.g., sub gaussian assumptions. Despite improvements in connections with related work, reviewers continued to find the theoretical contributions to be marginal. Even the empirical contributions were found to be marginal."""
2107,"""The Dual Information Bottleneck""","['optimal prediction learning', 'exponential families', 'critical points', 'information theory']","""The Information-Bottleneck (IB) framework suggests a general characterization of optimal representations in learning, and deep learning in particular. It is based on the optimal trade off between the representation complexity and accuracy, both of which are quantified by mutual information. The problem is solved by alternating projections between the encoder and decoder of the representation, which can be performed locally at each representation level. The framework, however, has practical drawbacks, in that mutual information is notoriously difficult to handle at high dimension, and only has closed form solutions in special cases. Further, because it aims to extract representations which are minimal sufficient statistics of the data with respect to the desired label, it does not necessarily optimize the actual prediction of unseen labels. Here we present a formal dual problem to the IB which has several interesting properties. By switching the order in the KL-divergence between the representation decoder and data, the optimal decoder becomes the geometric rather than the arithmetic mean of the input points. While providing a good approximation to the original IB, it also preserves the form of exponential families, and optimizes the mutual information on the predicted label rather than the desired one. We also analyze the critical points of the dualIB and discuss their importance for the quality of this approach.""","""Main content: Blind review #1 summarizes it well: This paper introduces a variant of the Information Bottleneck (IB) framework, which consists in permuting the conditional probabilities of y given x and y given \hat{x} in a Kullback-Liebler divergence involved in the IB optimization criterion. Interestingly, this change only results in changing an arithmetic mean into a geometric mean in the algorithmic resolution. Good properties of the exponential families (existence of non-trivial minimal sufficient statistics) are preserved, and an analysis of the new critical points/information plane induced is carried out. -- Discussion: The reviews generally agree on the elegant mathematical result, but are critical of the fact that the paper lacks any empirical component whatsoever. -- Recommendation and justification: The paper would be good for ICLR if it had any decent empirical component at all; it is a shame that none was presented as this does not seem very difficult."""
2108,"""Unifying Question Answering, Text Classification, and Regression via Span Extraction""","['NLP', 'span-extraction', 'BERT']","""Even as pre-trained language encoders such as BERT are shared across many tasks, the output layers of question answering, text classification, and regression models are significantly different. Span decoders are frequently used for question answering, fixed-class, classification layers for text classification, and similarity-scoring layers for regression tasks. We show that this distinction is not necessary and that all three can be unified as span extraction. A unified, span-extraction approach leads to superior or comparable performance in supplementary supervised pre-trained, low-data, and multi-task learning experiments on several question answering, text classification, and regression benchmarks.""","""(I acknowledge reading authors' recent note on decaNLP.) This paper proposes a span extraction approach (SpExBERT) to unify question answering, text classification and regression. Paper includes a significant number of experiments (including low-resource and multi-tasking experiments) on multiple benchmarks. The reviewers are concerned about lack of support on author's claims from the experimental results due to seemingly insignificant improvements and lack of analysis regarding the results. Hence, I suggest rejecting the paper."""
2109,"""Model-Augmented Actor-Critic: Backpropagating through Paths""","['reinforcement learning', 'model-based', 'actor-critic', 'pathwise']","""Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator to augment the data for policy optimization or value function learning. In this paper, we show how to make more effective use of the model by exploiting its differentiability. We construct a policy optimization algorithm that uses the pathwise derivative of the learned model and policy across future timesteps. Instabilities of learning across many timesteps are prevented by using a terminal value function, learning the policy in an actor-critic fashion. Furthermore, we present a derivation on the monotonic improvement of our objective in terms of the gradient error in the model and value function. We show that our approach (i) is consistently more sample efficient than existing state-of-the-art model-based algorithms, (ii) matches the asymptotic performance of model-free algorithms, and (iii) scales to long horizons, a regime where typically past model-based approaches have struggled.""","""The authors propose a novel model-based reinforcement learning algorithm. The key difference with previous approaches is that the authors use gradients through the learned model. They present theoretical results on error bounds for their approach and a monotonic improvement theorem. In the small sample regime, they show improved performance over previous approaches. After the revisions, reviewers raised a few concerns: The results are only for 100,000 steps, which does not support the claim that the models achieves the same asymptotic performance as model free algorithms would. The results would be stronger as the experiments were run with more than 3 random seats. In the revised version of the text, it's unclear if the authors are using target networks. Overall, I think the paper introduces some interesting ideas and shows improved performance over existing approaches. I recommend acceptance on the condition that the authors tone down their claims or back them up with empirical evidence. Currently, I don't see evidence for the claim that the method achieves similar asymptotic performance to model free algorithms or the claim that their approach allows for longer horizons than previous approaches."""
2110,"""Towards Interpreting Deep Neural Networks via Understanding Layer Behaviors""","['Interpretability of DNNs', 'Wasserstein distance', 'Layer behavior']","""Deep neural networks (DNNs) have achieved unprecedented practical success in many applications. However, how to interpret DNNs is still an open problem. In particular, what do hidden layers behave is not clearly understood. In this paper, relying on a teacher-student paradigm, we seek to understand the layer behaviors of DNNs by ``monitoring"" both across-layer and single-layer distribution evolution to some target distribution in the training. Here, the ``across-layer"" and ``single-layer"" considers the layer behavior \emph{along the depth} and a specific layer \emph{along training epochs}, respectively. Relying on optimal transport theory, we employ the Wasserstein distance ( pseudo-formula -distance) to measure the divergence between the layer distribution and the target distribution. Theoretically, we prove that i) the pseudo-formula -distance of across layers to the target distribution tends to decrease along the depth. ii) the pseudo-formula -distance of a specific layer to the target distribution tends to decrease along training iterations. iii) However, a deep layer is not always better than a shallow layer for some samples. Moreover, our results helps to analyze the stability of layer distributions and explains why auxiliary losses helps the training of DNNs. Extensive experiments on real-world datasets justify our theoretical findings.""","""This paper studies the transfer of representations learned by deep neural networks across various datasets and tasks when the network is pre-trained on some dataset and subsequently fine-tuned on the target dataset. On the theoretical side the authors analyse two-layer fully connected networks. In an extensive empirical evaluation the authors argue that an appropriately pre-trained networks enable better loss landscapes (improved Lipschitzness). Understanding the transferability of representations is an important problem and the reviewers appreciated some aspects of the extensive empirical evaluation and the initial theoretical investigation. However, we feel that the manuscript needs a major revision and that there is not enough empirical evidence to support the stated conclusions. As a result, I will recommend rejecting this paper in the current form. Nevertheless, as the problem is extremely important I encourage the authors to improve the clarity and provide more convincing arguments towards the stated conclusions by addressing the issues raised during the discussion phase."""
2111,"""Augmenting Non-Collaborative Dialog Systems with Explicit Semantic and Strategic Dialog History""","['dialog systems', 'history tracking']","""We study non-collaborative dialogs, where two agents have a conflict of interest but must strategically communicate to reach an agreement (e.g., negotiation). This setting poses new challenges for modeling dialog history because the dialog's outcome relies not only on the semantic intent, but also on tactics that convey the intent. We propose to model both semantic and tactic history using finite state transducers (FSTs). Unlike RNN, FSTs can explicitly represent dialog history through all the states traversed, facilitating interpretability of dialog structure. We train FSTs on a set of strategies and tactics used in negotiation dialogs. The trained FSTs show plausible tactic structure and can be generalized to other non-collaborative domains (e.g., persuasion). We evaluate the FSTs by incorporating them in an automated negotiating system that attempts to sell products and a persuasion system that persuades people to donate to a charity. Experiments show that explicitly modeling both semantic and tactic history is an effective way to improve both dialog policy planning and generation performance. ""","""This work proposes use of two pre-trained FST models to explicitly incorporate semantic and strategic/tactic information from dialog history into non-collaborative (negotiation) dialog systems. Experiments on two datasets from prior work show the advantage of this model in automated and human evaluation. While all reviewers found the work interesting, they made many suggestions regarding the presentation. Author'(s) rebuttal included explanations and changes to the presentation. Hence, I suggest acceptance as a poster presentation."""
2112,"""Latent Variables on Spheres for Sampling and Inference""","['variational autoencoder', 'generative adversarial network']","""Variational inference is a fundamental problem in Variational AutoEncoder (VAE). The optimization with lower bound of marginal log-likelihood results in the distribution of latent variables approximate to a given prior probability, which is the dilemma of employing VAE to solve real-world problems. By virtue of high-dimensional geometry, we propose a very simple algorithm completely different from existing ones to alleviate the variational inference in VAE. We analyze the unique characteristics of random variables on spheres in high dimensions and prove that Wasserstein distance between two arbitrary data sets randomly drawn from a sphere are nearly identical when the dimension is sufficiently large. Based on our theory, a novel algorithm for distribution-robust sampling is devised. Moreover, we reform the latent space of VAE by constraining latent variables on the sphere, thus freeing VAE from the approximate optimization of posterior probability via variational inference. The new algorithm is named Spherical AutoEncoder (SAE). Extensive experiments by sampling and inference tasks validate our theoretical analysis and the superiority of SAE.""","""This paper proposes to improve VAE/GAN by performing variational inference with a constraint that the latent variables lie on a sphere. The reviewers find some technical issues with the paper (R3's comment regarding theorem 3). They also found that the method is not motivated well, and the paper is not convincing. Based on this feedback, I recommend to reject the paper."""
2113,"""CAQL: Continuous Action Q-Learning""","['Reinforcement learning (RL)', 'DQN', 'Continuous control', 'Mixed-Integer Programming (MIP)']","""Reinforcement learning (RL) with value-based methods (e.g., Q-learning) has shown success in a variety of domains such as games and recommender systems (RSs). When the action space is finite, these algorithms implicitly finds a policy by learning the optimal value function, which are often very efficient. However, one major challenge of extending Q-learning to tackle continuous-action RL problems is that obtaining optimal Bellman backup requires solving a continuous action-maximization (max-Q) problem. While it is common to restrict the parameterization of the Q-function to be concave in actions to simplify the max-Q problem, such a restriction might lead to performance degradation. Alternatively, when the Q-function is parameterized with a generic feed-forward neural network (NN), the max-Q problem can be NP-hard. In this work, we propose the CAQL method which minimizes the Bellman residual using Q-learning with one of several plug-and-play action optimizers. In particular, leveraging the strides of optimization theories in deep NN, we show that max-Q problem can be solved optimally with mixed-integer programming (MIP)---when the Q-function has sufficient representation power, this MIP-based optimization induces better policies and is more robust than counterparts, e.g., CEM or GA, that approximate the max-Q solution. To speed up training of CAQL, we develop three techniques, namely (i) dynamic tolerance, (ii) dual filtering, and (iii) clustering. To speed up inference of CAQL, we introduce the action function that concurrently learns the optimal policy. To demonstrate the efficiency of CAQL we compare it with state-of-the-art RL algorithms on benchmark continuous control problems that have different degrees of action constraints and show that CAQL significantly outperforms policy-based methods in heavily constrained environments.""","""All three reviewers gave scores of Weak Accept. AC has read the reviews and rebuttal and agrees that the paper makes a solid contribution and should be accepted."""
2114,"""Mincut Pooling in Graph Neural Networks""","['Graph Neural Networks', 'Pooling', 'Graph Cuts', 'Spectral Clustering']","""The advance of node pooling operations in Graph Neural Networks (GNNs) has lagged behind the feverish design of new message-passing techniques, and pooling remains an important and challenging endeavor for the design of deep architectures. In this paper, we propose a pooling operation for GNNs that leverages a differentiable unsupervised loss based on the minCut optimization objective. For each node, our method learns a soft cluster assignment vector that depends on the node features, the target inference task (e.g., a graph classification loss), and, thanks to the minCut objective, also on the connectivity structure of the graph. Graph pooling is obtained by applying the matrix of assignment vectors to the adjacency matrix and the node features. We validate the effectiveness of the proposed pooling method on a variety of supervised and unsupervised tasks.""","""Two reviewers are negative on this paper while the other reviewer is positive. Overall, the paper does not make the bar of ICLR. A reject is recommended."""
2115,"""Training Recurrent Neural Networks Online by Learning Explicit State Variables""","['Recurrent Neural Network', 'Partial Observability', 'Online Prediction', 'Incremental Learning']","""Recurrent neural networks (RNNs) allow an agent to construct a state-representation from a stream of experience, which is essential in partially observable problems. However, there are two primary issues one must overcome when training an RNN: the sensitivity of the learning algorithm's performance to truncation length and and long training times. There are variety of strategies to improve training in RNNs, the mostly notably Backprop Through Time (BPTT) and by Real-Time Recurrent Learning. These strategies, however, are typically computationally expensive and focus computation on computing gradients back in time. In this work, we reformulate the RNN training objective to explicitly learn state vectors; this breaks the dependence across time and so avoids the need to estimate gradients far back in time. We show that for a fixed buffer of data, our algorithm---called Fixed Point Propagation (FPP)---is sound: it converges to a stationary point of the new objective. We investigate the empirical performance of our online FPP algorithm, particularly in terms of computation compared to truncated BPTT with varying truncation levels. ""","""The paper proposes an alternative to BPTT for training recurrent neural networks based on an explicit state variable, which is trained to improve both the prediction accuracy and the prediction of the next state. One of the benefits of the methods is that it can be used for online training, where BPTT cannot be used in its exact form. Theoretical analysis is developed to show that the algorithm converges to a fixed point. Overall, the reviewers appreciate the clarity of the paper, and find the theory and the experimental evaluation to be reasonably well balanced. After a round of discussion, the authors improved the paper according to the reviews. The final assessments are overall positive, and Im therefore recommending accepting this paper."""
2116,"""Meta Dropout: Learning to Perturb Latent Features for Generalization""",[],"""A machine learning model that generalizes well should obtain low errors on unseen test examples. Thus, if we know how to optimally perturb training examples to account for test examples, we may achieve better generalization performance. However, obtaining such perturbation is not possible in standard machine learning frameworks as the distribution of the test data is unknown. To tackle this challenge, we propose a novel regularization method, meta-dropout, which learns to perturb the latent features of training examples for generalization in a meta-learning framework. Specifically, we meta-learn a noise generator which outputs a multiplicative noise distribution for latent features, to obtain low errors on the test instances in an input-dependent manner. Then, the learned noise generator can perturb the training examples of unseen tasks at the meta-test time for improved generalization. We validate our method on few-shot classification datasets, whose results show that it significantly improves the generalization performance of the base model, and largely outperforms existing regularization methods such as information bottleneck, manifold mixup, and information dropout.""","""This paper proposes a type of adaptive dropout to regularize gradient based meta-learning models. The reviewers found the idea interesting and it is supported by improvements on standard benchmarks. The authors addressed several concerns of the reviewers during the rebutal phase. In particular, revisions added results against other regularization mthods. We recommend that further attention is given to ablations, in particular the baseline proposed by Reviewer 1."""
2117,"""Spectral Nonlocal Block for Neural Network""","['Nonlocal Neural Network', 'Image Classification', 'Action Recgonition']","""The nonlocal network is designed for capturing long-range spatial-temporal dependencies in several computer vision tasks. Although having shown excellent performances, it needs an elaborate preparation for both the number and position of the building blocks. In this paper, we propose a new formulation of the nonlocal block and interpret it from the general graph signal processing perspective, where we view it as a fully-connected graph filter approximated by Chebyshev polynomials. The proposed nonlocal block is more efficient and robust, which is a generalized form of existing nonlocal blocks (e.g. nonlocal block, nonlocal stage). Moreover, we give the stable hypothesis and show that the steady-state of the deeper nonlocal structure should meet with it. Based on the stable hypothesis, a full-order approximation of the nonlocal block is derived for consecutive connections. Experimental results illustrate the clear-cut improvement and practical applicability of the generalized nonlocal block on both image and video classification tasks.""","""This paper proposes a new formulation of the non-local block and interpret it from the graph view. The idea is interesting and the experimental results seems to be promising. Reviewer has two major concerns. The first is the presentation, which is not clear enough. The second is the experimental design and analysis. The authors add more video dataset in the revision, but still lack comprehensive experimental analysis for video-based applications. Overall, the idea of non-local block from graph view is interesting. However, the presentation of the paper needs further polish and thus does not meet the standard of ICLR """
2118,"""Unrestricted Adversarial Examples via Semantic Manipulation""","['Adversarial Examples', 'Semantic Manipulation', 'Image Colorization', 'Texture Transfer']","""Machine learning models, especially deep neural networks (DNNs), have been shown to be vulnerable against adversarial examples which are carefully crafted samples with a small magnitude of the perturbation. Such adversarial perturbations are usually restricted by bounding their pseudo-formula norm such that they are imperceptible, and thus many current defenses can exploit this property to reduce their adversarial impact. In this paper, we instead introduce ""unrestricted"" perturbations that manipulate semantically meaningful image-based visual descriptors - color and texture - in order to generate effective and photorealistic adversarial examples. We show that these semantically aware perturbations are effective against JPEG compression, feature squeezing and adversarially trained model. We also show that the proposed methods can effectively be applied to both image classification and image captioning tasks on complex datasets such as ImageNet and MSCOCO. In addition, we conduct comprehensive user studies to show that our generated semantic adversarial examples are photorealistic to humans despite large magnitude perturbations when compared to other attacks.""","""In this paper, the authors present adversarial attacks by semantic manipulations, i.e., manipulating specific detectors that result in imperceptible changes in the picture, such as changing texture and color, but without affecting their naturalness. Moreover, these tasks are done on two large scale datasets (ImageNet and MSCOCO) and two visual tasks (classification and captioning). Finally, they also test their adversarial examples against a couple of defense mechanisms and how their transferability. Overall, all reviewers agreed this is an interesting work and well executed, complete with experiments and analyses. I agree with the reviewers in the assessment. I think this is an interesting study that moves us beyond restricted pixel perturbations and overall would be interesting to see what other detectors could be used to generate these type of semantic manipulations. I recommend acceptance of this paper. """
2119,"""Defending Against Physically Realizable Attacks on Image Classification""","['defense against physical attacks', 'adversarial machine learning']","""We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit very limited effectiveness against three of the highest profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks.""","""This paper studies the problem of defending deep neural network approaches for image classification from physically realizable attacks. It first demonstrates that adversarial training with PGD attacks and randomized smoothing exhibit limited effectiveness against three of the highest profile physical attacks. Then, it proposes a new abstract adversarial model, where an adversary places a small adversarially crafted rectangle in an image, and develops two approaches for efficiently computing the resulting adversarial examples. Empirical results show the effectiveness. Overall, a good paper. The rebuttal is convincing."""
2120,"""Weakly-supervised Knowledge Graph Alignment with Adversarial Learning""",[],"""This paper studies aligning knowledge graphs from different sources or languages. Most existing methods train supervised methods for the alignment, which usually require a large number of aligned knowledge triplets. However, such a large number of aligned knowledge triplets may not be available or are expensive to obtain in many domains. Therefore, in this paper we propose to study aligning knowledge graphs in fully-unsupervised or weakly-supervised fashion, i.e., without or with only a few aligned triplets. We propose an unsupervised framework to align the entity and relation embddings of different knowledge graphs with an adversarial learning framework. Moreover, a regularization term which maximizes the mutual information between the embeddings of different knowledge graphs is used to mitigate the problem of mode collapse when learning the alignment functions. Such a framework can be further seamlessly integrated with existing supervised methods by utilizing a limited number of aligned triples as guidance. Experimental results on multiple datasets prove the effectiveness of our proposed approach in both the unsupervised and the weakly-supervised settings.""","""Thanks for your detailed feedback to the reviewers, which clarified us a lot in many respects. This paper potentially discusses an interesting problem, and the concern raised by Review #2 was addressed in the revised paper. However, given the high competition at ICLR2020, this paper is unfortunately below the bar. We hope that the reviewers' comments are useful for improving the paper for potential future publication. The """
2121,"""Adversarial Interpolation Training: A Simple Approach for Improving Model Robustness""","['adversarial training', 'adversarial robustness']","""We propose a simple approach for adversarial training. The proposed approach utilizes an adversarial interpolation scheme for generating adversarial images and accompanying adversarial labels, which are then used in place of the original data for model training. The proposed approach is intuitive to understand, simple to implement and achieves state-of-the-art performance. We evaluate the proposed approach on a number of datasets including CIFAR10, CIFAR100 and SVHN. Extensive empirical results compared with several state-of-the-art methods against different attacks verify the effectiveness of the proposed approach. ""","""Reviewers agree that the proposed method is interesting and achieves impressive results. Clarifications were needed in terms of motivating and situating the work. Thee rebuttal helped, but unfortunately not enough to push the paper above the threshold. We encourage the authors to further improve the presentation of their method and take into accounts the comments in future revisions."""
2122,"""Playing the lottery with rewards and multiple languages: lottery tickets in RL and NLP""","['lottery tickets', 'nlp', 'transformer', 'rl', 'reinforcement learning']","""The lottery ticket hypothesis proposes that over-parameterization of deep neural networks (DNNs) aids training by increasing the probability of a lucky sub-network initialization being present rather than by helping the optimization process (Frankle& Carbin, 2019). Intriguingly, this phenomenon suggests that initialization strategies for DNNs can be improved substantially, but the lottery ticket hypothesis has only previously been tested in the context of supervised learning for natural image tasks. Here, we evaluate whether winning ticket initializations exist in two different domains: natural language processing (NLP) and reinforcement learning (RL).For NLP, we examined both recurrent LSTM models and large-scale Transformer models (Vaswani et al., 2017). For RL, we analyzed a number of discrete-action space tasks, including both classic control and pixel control. Consistent with workin supervised image classification, we confirm that winning ticket initializations generally outperform parameter-matched random initializations, even at extreme pruning rates for both NLP and RL. Notably, we are able to find winning ticket initializations for Transformers which enable models one-third the size to achieve nearly equivalent performance. Together, these results suggest that the lottery ticket hypothesis is not restricted to supervised learning of natural images, but rather represents a broader phenomenon in DNNs.""","""This paper explores the application of the lottery ticket hypothesis to NLP and RL problems for better initialisations of deep networks and reduced model sizes. This is evaluated in a variety of settings, including continuous control and ATARI games for RL, and LSTMs and Transformers for NLP, showing very positive results. The main issue raised by the reviewers was the lack of algorithmic novelty in the paper. Despite this, I believe the paper to present an important contribution that could stimulate much additional research. The paper is well written and the results are rigorous and interesting. For these reasons I recommend acceptance."""
2123,"""Channel Equilibrium Networks""","['Deep learning', 'convolutional neural networks', 'building block design']","""Convolutional Neural Networks (CNNs) typically treat normalization methods such as batch normalization (BN) and rectified linear function (ReLU) as building blocks. Previous work showed that this basic block would lead to channel-level sparsity (i.e. channel of zero values), reducing computational complexity of CNNs. However, over-sparse CNNs have many collapsed channels (i.e. many channels with undesired zero values), impeding their learning ability. This problem is seldom explored in the literature. To recover the collapsed channels and enhance learning capacity, we propose a building block, Channel Equilibrium (CE), which takes the output of a normalization layer as input and switches between two branches, batch decorrelation (BD) branch and adaptive instance inverse (AII) branch. CE is able to prevent implicit channel-level sparsity in both experiments and theory. It has several appealing properties. First, CE can be stacked after many normalization methods such as BN and Group Normalization (GN), and integrated into many advanced CNN architectures such as ResNet and MobileNet V2 to form a series of CE networks (CENets), consistently improving their performance. Second, extensive experiments show that CE achieves state-of-the-art results on various challenging benchmarks such as ImageNet and COCO. Third, we show an interesting connection between CE and Nash Equilibrium, a well-known solution of a non-cooperative game. The models and code will be released soon.""","""The paper proposed Channel Equilibrium (CE) to overcome the over-sparsity problem in CNNs using 'BN+ReLU'. Experiments on ImageNet and COCO show its effectiveness by introducing little computational complexity. However the reviewers pointed a number of problems in the writing and the clarity of the paper. Although the authors addressed all the se concerns in details and agreed to make revisions in the paper, it's better for the authors to submit the revised version to another opportunity."""
2124,"""ICNN: INPUT-CONDITIONED FEATURE REPRESENTATION LEARNING FOR TRANSFORMATION-INVARIANT NEURAL NETWORK""","['Transformation-invariance', 'Reconstruction', 'Run-time Convolution Filter generation']","""We propose a novel framework, ICNN, which combines the input-conditioned filter generation module and a decoder based network to incorporate contextual information present in images into Convolutional Neural Networks (CNNs). In contrast to traditional CNNs, we do not employ the same set of learned convolution filters for all input image instances. And our proposed decoder network serves the purpose of reducing the transformation present in the input image by learning to construct a representative image of the input image class. Our proposed joint supervision of input-aware framework when combined with techniques inspired by Multi-instance learning and max-pooling, results in a transformation-invariant neural network. We investigated the performance of our proposed framework on three MNIST variations, which covers both rotation and scaling variance, and achieved 0.98% error on MNIST-rot-12k, 1.12% error on Half-rotated MNIST and 0.68% error on Scaling MNIST, which is significantly better than the state-of-the-art results. Our proposed model also showcased consistent improvement on the CIFAR dataset. We make use of visualization to further prove the effectiveness of our input-aware convolution filters. Our proposed convolution filter generation framework can also serve as a plugin for any CNN based architecture and enhance its modeling capacity.""","""This paper proposes a CNN that is invariant to input transformation, by making two modifications on top of the TI-pooling architecture: the input-dependent convolutional filters, and a decoder network to ensure fully transformation invariant. Reviewer #1 concerns the limited novelty, unconvincing experimental results. Reviewer #2 praises the paper being well written, but is not convinced by the significance of the contributions. The authors respond to Reviewer #2 but did not change the rating. Reviewer #3 especially concerns that the paper is not well positioned with respect to the related prior work. Given these concerns and overall negative rating (two weak reject and one reject), the AC recommends reject."""
2125,"""Tensor Graph Convolutional Networks for Prediction on Dynamic Graphs""","['graph convolutional networks', 'graph learning', 'dynamic graphs', 'edge classification', 'tensors']","""Many irregular domains such as social networks, financial transactions, neuron connections, and natural language structures are represented as graphs. In recent years, a variety of graph neural networks (GNNs) have been successfully applied for representation learning and prediction on such graphs. However, in many of the applications, the underlying graph changes over time and existing GNNs are inadequate for handling such dynamic graphs. In this paper we propose a novel technique for learning embeddings of dynamic graphs based on a tensor algebra framework. Our method extends the popular graph convolutional network (GCN) for learning representations of dynamic graphs using the recently proposed tensor M-product technique. Theoretical results that establish the connection between the proposed tensor approach and spectral convolution of tensors are developed. Numerical experiments on real datasets demonstrate the usefulness of the proposed method for an edge classification task on dynamic graphs.""","""The paper proposes a tensor-based extension to graph convolutional networks for prediction over dynamic graphs. The proposed model is reasonable and achieves promising empirical results. After discussion, it is agreed that while the problem of handling dynamic graphs is interesting and challenging, the proposed tensor method lacks novelty, the theoretical analysis is artificial, and the empirical study does not cover enough benchmarks. The current version of the paper is not ready for publication. Addressing the issues above could lead to a strong publication in the future. """
2126,"""StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding""",[],"""Recently, the pre-trained language model, BERT (and its robustly optimized version RoBERTa), has attracted a lot of attention in natural language understanding (NLU), and achieved state-of-the-art accuracy in various NLU tasks, such as sentiment classification, natural language inference, semantic textual similarity and question answering. Inspired by the linearization exploration work of Elman, we extend BERT to a new model, StructBERT, by incorporating language structures into pre-training. Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential order of words and sentences, which leverage language structures at the word and sentence levels, respectively. As a result, the new model is adapted to different levels of language understanding required by downstream tasks. The StructBERT with structural pre-training gives surprisingly good empirical results on a variety of downstream tasks, including pushing the state-of-the-art on the GLUE benchmark to 89.0 (outperforming all published models at the time of model submission), the F1 score on SQuAD v1.1 question answering to 93.0, the accuracy on SNLI to 91.7.""","""This paper proposes a pair of complementary word- and sentence-level pretraining objectives for BERT-style models, and shows that they are empirically effective, especially when used with an already-pretrained RoBERTa model. Work of this kind has been extremely impactful in NLP, and so I'm somewhat biased toward acceptance: If this isn't published, it seems likely that other groups will go to the trouble to replicate roughly these experiments. However, I think the paper is borderline. Reviewers were impressed by the results, but not convinced that the ablations and analyses were sufficient to motivate the proposed methods, suggesting that some variants of the proposed methods could likely be substantially better. In addition, I agree strongly with R3 that framing this work around 'language structure' is disingenuous, and actively misleads readers about the contribution to the paper."""
2127,"""Transferring Optimality Across Data Distributions via Homotopy Methods""","['deep learning', 'numerical optimization', 'transfer learning']","""Homotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis, including complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available. In this work, we propose a novel homotopy-based numerical method that can be used to transfer knowledge regarding the localization of an optimum across different task distributions in deep learning applications. We validate the proposed methodology with some empirical evaluations in the regression and classification scenarios, where it shows that superior numerical performance can be achieved in popular deep learning benchmarks, i.e. FashionMNIST, CIFAR-10, and draw connections with the widely used fine-tuning heuristic. In addition, we give more insights on the properties of a general homotopy method when used in combination with Stochastic Gradient Descent by conducting a general local theoretical analysis in a simplified setting. ""","""This paper presents a theoretically motivated method based on homotopy continuation for transfer learning and demonstrates encouraging results on FashionMNIST and CIFAR-10. The authors draw a connection between this approach and the widely used fine-tuning heuristic. Reviewers find principled approaches to transfer learning in deep neural networks an important direction, and find the contributions of this paper an encouraging step in that direction. Alongside with the reviewers, I think homotopy continuation is a great numerical tool with a lot of untapped potentials for ML applications, and I am happy to see an instantiation of this approach for transfer learning. Reviewers had some concerns about experimental evaluations (reporting test performance in addition to training), and the writing of the draft. The authors addressed these in the revised version by including test performance in the appendix and rewriting the first parts of the paper. Two out of three reviewers recommend accept. I also find the homotopy analysis interesting and alongside with majority of reviewers, recommend accept. However, please try to iterate at least once more over the writing; simply long sentences and make sure the writing and flow are, for the camera ready version."""
2128,"""Training a Constrained Natural Media Painting Agent using Reinforcement Learning """,[],"""We present a novel approach to train a natural media painting using reinforcement learning. Given a reference image, our formulation is based on stroke-based rendering that imitates human drawing and can be learned from scratch without supervision. Our painting agent computes a sequence of actions that represent the primitive painting strokes. In order to ensure that the generated policy is predictable and controllable, we use a constrained learning method and train the painting agent using the environment model and follows the commands encoded in an observation. We have applied our approach on many benchmarks and our results demonstrate that our constrained agent can handle different painting media and different constraints in the action space to collaborate with humans or other agents. ""","""Paper is withdrawn by authors."""
2129,"""Knowledge Consistency between Neural Networks and Beyond""","['Deep Learning', 'Interpretability', 'Convolutional Neural Networks']","""This paper aims to analyze knowledge consistency between pre-trained deep neural networks. We propose a generic definition for knowledge consistency between neural networks at different fuzziness levels. A task-agnostic method is designed to disentangle feature components, which represent the consistent knowledge, from raw intermediate-layer features of each neural network. As a generic tool, our method can be broadly used for different applications. In preliminary experiments, we have used knowledge consistency as a tool to diagnose representations of neural networks. Knowledge consistency provides new insights to explain the success of existing deep-learning techniques, such as knowledge distillation and network compression. More crucially, knowledge consistency can also be used to refine pre-trained networks and boost performance.""","""This paper presents a method for extracting ""knowledge consistency"" between neural networks and understanding their representations. Reviewers and AC are positive on the paper, in terms of insightful findings and practical usages, and also gave constructive suggestions to improve the paper. In particular, I think the paper can gain much attention for ICLR audience. Hence, I recommend acceptance."""
2130,"""Adaptive Loss Scaling for Mixed Precision Training""","['Deep Learning', 'Mixed Precision Training', 'Loss Scaling', 'Backpropagation']","""Mixed precision training (MPT) is becoming a practical technique to improve the speed and energy efficiency of training deep neural networks by leveraging the fast hardware support for IEEE half-precision floating point that is available in existing GPUs. MPT is typically used in combination with a technique called loss scaling, that works by scaling up the loss value up before the start of backpropagation in order to minimize the impact of numerical underflow on training. Unfortunately, existing methods make this loss scale value a hyperparameter that needs to be tuned per-model, and a single scale cannot be adapted to different layers at different training stages. We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use, by removing the need to tune a model-specific loss scale hyperparameter. We achieve this by introducing layer-wise loss scale values which are automatically computed during training to deal with underflow more effectively than existing methods. We present experimental results on a variety of networks and tasks that show our approach can shorten the time to convergence and improve accuracy, compared with using the existing state-of-the-art MPT and single-precision floating point.""","""This work proposes to improve mixed precision training by adaptively scaling the loss based on statistics from previous activations to minimize underflow during training. However, the method is designed rather heuristically and can be improved with stronger theoretical support and improved representation of the paper. """
2131,"""A Baseline for Few-Shot Image Classification""","['few-shot learning', 'transductive learning', 'fine-tuning', 'baseline', 'meta-learning']","""Fine-tuning a deep network trained with the standard cross-entropy loss is a strong baseline for few-shot learning. When fine-tuned transductively, this outperforms the current state-of-the-art on standard datasets such as Mini-ImageNet, Tiered-ImageNet, CIFAR-FS and FC-100 with the same hyper-parameters. The simplicity of this approach enables us to demonstrate the first few-shot learning results on the ImageNet-21k dataset. We find that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes. We do not advocate our approach as the solution for few-shot learning, but simply use the results to highlight limitations of current benchmarks and few-shot protocols. We perform extensive studies on benchmark datasets to propose a metric that quantifies the ""hardness"" of a few-shot episode. This metric can be used to report the performance of few-shot algorithms in a more systematic way.""","""This paper introduces a simple baseline for few-shot image classification in the transductive setting, which includes a standard cross-entropy loss on the labeled support samples and a conditional entropy loss on the unlabeled query samples. Both losses are known in the literature (the seminal work of entropy minimization by Bengio should be cited properly). However, reviewers are positive about this paper, acknowledging the significant contributions of a novel few-shot baseline that establishes a new state-of-the-art on well-known public few-shot datasets as well as on the introduced large-scale benchmark ImageNet21K. The comprehensive study of the methods and datasets in this domain will benefit the research practices in this area. Therefore, I make an acceptance recommendation."""
2132,"""Probabilistic Connection Importance Inference and Lossless Compression of Deep Neural Networks""",[],"""Deep neural networks (DNNs) can be huge in size, requiring a considerable a mount of energy and computational resources to operate, which limits their applications in numerous scenarios. It is thus of interest to compress DNNs while maintaining their performance levels. We here propose a probabilistic importance inference approach for pruning DNNs. Specifically, we test the significance of the relevance of a connection in a DNN to the DNNs outputs using a nonparemtric scoring testand keep only those significant ones. Experimental results show that the proposed approach achieves better lossless compression rates than existing techniques""","""This paper proposes a novel approach for pruning deep neural networks using non-parametric statistical tests to detect 3-way interactions among two nodes and the output. While the reviewers agree that this is a neat idea, the paper has been limited in terms of experimental validation. The authors provided further experimental results during the discussion period and the reviewers agree that the paper is now acceptable for publication at ICLR-2020. """
2133,"""Hydra: Preserving Ensemble Diversity for Model Distillation""","['model distillation', 'ensemble models']","""Ensembles of models have been empirically shown to improve predictive performance and to yield robust measures of uncertainty. However, they are expensive in computation and memory. Therefore, recent research has focused on distilling ensembles into a single compact model, reducing the computational and memory burden of the ensemble while trying to preserve its predictive behavior. Most existing distillation formulations summarize the ensemble by capturing its average predictions. As a result, the diversity of the ensemble predictions, stemming from each individual member, is lost. Thus the distilled model cannot provide a measure of uncertainty comparable to that of the original ensemble. To retain more faithfully the diversity of the ensemble, we propose a distillation method based on a single multi-headed neural network, which we refer to as Hydra. The shared body network learns a joint feature representation that enables each head to capture the predictive behavior of each ensemble member. We demonstrate that with a slight increase in parameter count, Hydra improves distillation performance on classification and regression settings while capturing the uncertainty behaviour of the original ensemble over both in-domain and out-of-distribution tasks.""","""This work introduces a simple and effective method for ensemble distillation. The method is a simple extension of earlier prior networks: it differs in which, instead of fitting a single network to mimic a distribution produced by the ensemble, this work suggests to use multi-head (one head per individual ensemble member) in order to better capture the ensemble diversity. This paper experimentally shows that multi-head architecture performs well on MNIST and CIFAR-10 (they added CIFAR-100 in the revised version) in terms of accuracy and uncertainty. While the method is effective and the experiments on CIFAR-100 (a harder task) improved the paper, the reviewers (myself included) pointed out in the discussion phase that the limited novelty remains a major weakness. The proposed method seems like a trivial extension of the prior work, and does not provide much additional insight. To remedy this shortcoming, I suggest the authors provide extensive experimental supports including various datasets and ablation studies. Another concern mentioned in the discussion is the fact that these small improvements are in spite of the fact that the proposed method ends up using many more parameters than the baselines. Including and comparing different model sizes in a full fledged experimental evaluation would better convey the trade-offs of the proposed approach. """
2134,"""The Visual Task Adaptation Benchmark""","['representation learning', 'self-supervised learning', 'benchmark', 'large-scale study']","""Representation learning promises to unlock deep learning for the long tail of vision tasks without expansive labelled datasets. Yet, the absence of a unified yardstick to evaluate general visual representations hinders progress. Many sub-fields promise representations, but each has different evaluation protocols that are either too constrained (linear classification), limited in scope (ImageNet, CIFAR, Pascal-VOC), or only loosely related to representation quality (generation). We present the Visual Task Adaptation Benchmark (VTAB): a diverse, realistic, and challenging benchmark to evaluate representations. VTAB embodies one principle: good representations adapt to unseen tasks with few examples. We run a large VTAB study of popular algorithms, answering questions like: How effective are ImageNet representation on non-standard datasets? Are generative models competitive? Is self-supervision useful if one already has labels?""","""The authors present a new benchmark for evaluating a plethora of models on a variety of tasks. In terms of scores, the paper received a borderline rating, with two reviews being rather superficial unfortunately. The last reviewer was positive. The reviewers generally agreed that the benchmark is interesting and carries value, and the AC agrees. Authors certainly invested a significant effort in designing the benchmark and performing a detailed analysis over several tasks and methods. However, the effort seems more engineering in nature and insights are somewhat lacking. For an experimental paper, presenting the results is interesting yet not sufficient. A much more in-depth analysis and discuss would warrant a deeper understanding of the results and open directions for future work. This part is currently underwhelming. """
2135,"""Dirichlet Wrapper to Quantify Classification Uncertainty in Black-Box Systems""","['uncertainty', 'black-box classifiers', 'rejection', 'deep learning', 'NLP', 'CV']","""Nowadays, machine learning models are becoming a utility in many sectors. AI companies deliver pre-trained encapsulated models as application programming interfaces (APIs) that developers can combine with third party components, their models, and proprietary data, to create complex data products. This complexity and the lack of control and knowledge of the internals of these external components might cause unavoidable effects, such as lack of transparency, difficulty in auditability, and the emergence of uncontrolled potential risks. These issues are especially critical when practitioners use these components as black-boxes in new datasets. In order to provide actionable insights in this type of scenarios, in this work we propose the use of a wrapping deep learning model to enrich the output of a classification black-box with a measure of uncertainty. Given a black-box classifier, we propose a probabilistic neural network that works in parallel to the black-box and uses a Dirichlet layer as the fusion layer with the black-box. This Dirichlet layer yields a distribution on top of the multinomial output parameters of the classifier and enables the estimation of aleatoric uncertainty for any data sample. Based on the resulting uncertainty measure, we advocate for a rejection system that selects the more confident predictions, discarding those more uncertain, leading to an improvement in the trustability of the resulting system. We showcase the proposed technique and methodology in two practical scenarios, one for NLP and another for computer vision, where a simulated API based is applied to different domains. Results demonstrate the effectiveness of the uncertainty computed by the wrapper and its high correlation to wrong predictions and misclassifications.""","""This paper proposes adding a Dirichlet distribution as a wrapper on top of a black box classifier in order to better capture uncertainty in the predictions. This paper received four reviews in total with scores (1,1,1,6). The reviewer who gave the weak accept found the paper well written, easy to follow and intuitive. The other reviewers, however, were primarily concerned about the empirical evaluation of the method. They found the baselines too weak and weren't convinced that the method would work well in practice. The reviewers also cited a lack of comparison to existing literature for their scores. One reviewer noted that while the method addresses aleatoric uncertainty, it doesn't provide any mechanism for epistemic uncertainty, which would be necessary for the applications motivating the work. The authors did not provide a response and thus there was no discussion. """
2136,"""Continuous Convolutional Neural Network forNonuniform Time Series""",[],"""Convolutional neural network (CNN) for time series data implicitly assumes that the data are uniformly sampled, whereas many event-based and multi-modal data are nonuniform or have heterogeneous sampling rates. Directly applying regularCNN to nonuniform time series is ungrounded, because it is unable to recognize and extract common patterns from the nonuniform input signals. Converting the nonuniform time series to uniform ones by interpolation preserves the pattern extraction capability of CNN, but the interpolation kernels are often preset and may be unsuitable for the data or tasks. In this paper, we propose the ContinuousCNN (CCNN), which estimates the inherent continuous inputs by interpolation, and performs continuous convolution on the continuous input. The interpolation and convolution kernels are learned in an end-to-end manner, and are able to learn useful patterns despite the nonuniform sampling rate. Besides, CCNN is a strict generalization to CNN. Results of several experiments verify that CCNN achieves abetter performance on nonuniform data, and learns meaningful continuous kernels""","""This paper presents a continuous CNN model that can handle nonuniform time series data. It learns the interpolation kernel and convolutional architectures in an end-to-end manner, which is shown to achieve higher performance compared to nave baselines. All reviewers scored Weak Reject and there was no strong opinion to support the paper during discussion. Although I felt some of the reviewers comments are missing the points, I generally agree that the novelty of the method is rather straightforward and incremental, and that the experimental evaluation is not convincing enough. Particularly, comparison with more recent state-of-the-art point process methods should be included. For example, [1-3] claim better performance than RMTPP. Considering that the contribution of the paper is more on empirical side and CCNN is not only the solution for handing nonuniform time series data, I think this point should be properly addressed and discussed. Based on these reasons, Id like to recommend rejection. [1] Xiao et al., Modeling the Intensity Function of Point Process via Recurrent Neural Networkss, AAAI 2017. [2] Li et al., Learning Temporal Point Processes via Reinforcement Learning, NIPS 2018. [3] Turkmen et al, FastPoint: Scalable Deep Point Processes, ECML-PKDD 2019. """
2137,"""Query2box: Reasoning over Knowledge Graphs in Vector Space Using Box Embeddings""","['knowledge graph embeddings', 'logical reasoning', 'query answering']","""Answering complex logical queries on large-scale incomplete knowledge graphs (KGs) is a fundamental yet challenging task. Recently, a promising approach to this problem has been to embed KG entities as well as the query into a vector space such that entities that answer the query are embedded close to the query. However, prior work models queries as single points in the vector space, which is problematic because a complex query represents a potentially large set of its answer entities, but it is unclear how such a set can be represented as a single point. Furthermore, prior work can only handle queries that use conjunctions ( pseudo-formula ) and existential quantifiers ( pseudo-formula ). Handling queries with logical disjunctions ( pseudo-formula ) remains an open problem. Here we propose query2box, an embedding-based framework for reasoning over arbitrary queries with pseudo-formula , pseudo-formula , and pseudo-formula operators in massive and incomplete KGs. Our main insight is that queries can be embedded as boxes (i.e., hyper-rectangles), where a set of points inside the box corresponds to a set of answer entities of the query. We show that conjunctions can be naturally represented as intersections of boxes and also prove a negative result that handling disjunctions would require embedding with dimension proportional to the number of KG entities. However, we show that by transforming queries into a Disjunctive Normal Form, query2box is capable of handling arbitrary logical queries with pseudo-formula , pseudo-formula , pseudo-formula in a scalable manner. We demonstrate the effectiveness of query2box on two large KGs and show that query2box achieves up to 25% relative improvement over the state of the art. ""","""This paper proposes a new method to answering queries using incomplete knowledge bases. The approach relies on learning embeddings of the vertices of the knowledge graph. The reviewers unanimously found that the method was well motivated and found the method convincingly outperforms previous work."""
2138,"""Beyond Classical Diffusion: Ballistic Graph Neural Network""","['Graph Convolutional Network', 'Diffusion', 'Transportation', 'Machine Learning']","""This paper presents the ballistic graph neural network. Ballistic graph neural network tackles the weight distribution from a transportation perspective and has many different properties comparing to the traditional graph neural network pipeline. The ballistic graph neural network does not require to calculate any eigenvalue. The filters propagate exponentially faster( \sim T^2 comparing to traditional graph neural network( \sim T$). We use a perturbed coin operator to perturb and optimize the diffusion rate. Our results show that by selecting the diffusion speed, the network can reach a similar accuracy with fewer parameters. We also show the perturbed filters act as better representations comparing to pure ballistic ones. We provide a new perspective of training graph neural network, by adjusting the diffusion rate, the neural network's performance can be improved.""","""This submission has been assessed by three reviewers who scored it 3/1/3, and they have remained unconvinced after the rebuttal. The main issues voiced are the difficult readability of the paper, cryptic at times due to a mix of physical and DL notations, and a lack of sufficient experimentation to support all claims. The reviewers acknowledge the authors' efforts to resolve the main issues but find these efforts insufficient. Thus, this paper cannot be accepted to ICLR2020."""
2139,"""Project and Forget: Solving Large Scale Metric Constrained Problems""","['metric constrained problems', 'metric learning', 'metric nearness', 'correlation clustering', 'Bregman projection', 'cutting planes', 'large scale optimization']","""Given a set of distances amongst points, determining what metric representation is most consistent with the input distances or the metric that captures the relevant geometric features of the data is a key step in many machine learning algorithms. In this paper, we focus on metric constrained problems, a class of optimization problems with metric constraints. In particular, we identify three types of metric constrained problems: metric nearness Brickell et al. (2008), weighted correlation clustering on general graphs Bansal et al. (2004), and metric learning Bellet et al. (2013); Davis et al. (2007). Because of the large number of constraints in these problems, however, researchers have been forced to restrict either the kinds of metrics learned or the size of the problem that can be solved. We provide an algorithm, PROJECT AND FORGET, that uses Bregman projections with cutting planes, to solve metric constrained problems with many (possibly exponentially) inequality constraints. We also prove that our algorithm converges to the global optimal solution. Additionally, we show that the optimality error (L2 distance of the current iterate to the optimal) asymptotically decays at an exponential rate. We show that using our method we can solve large problem instances of three types of metric constrained problems, out-performing all state of the art methods with respect to CPU times and problem sizes.""","""Quoting from Reviewer2: ""The paper considers the problem of optimizing convex functions under metric constraints. The main challenge is that expressing all metric constraints on n points requiries O(n^3) constraints. The paper proposes a project and forget approach which is essentially is based on cyclic Bregman projections but with a twist that some of the constraints are forgotten."" The reviewers were split on this submission, with two arguing for weak acceptance and one arguing for rejection. Purely based on scores, this paper is borderline. It was pointed out by multiple reviewers that the method is not very novel. In particular it effectively works as an active set method. It appears to be very effective in this setting, but the basic algorithm does not differ in structure from any active set method, for which removal of inactive constraints is considered standard (see even the wikipedia page on active set methods)."""
2140,"""How to 0wn the NAS in Your Spare Time""",['Reconstructing Novel Deep Learning Systems'],"""New data processing pipelines and novel network architectures increasingly drive the success of deep learning. In consequence, the industry considers top-performing architectures as intellectual property and devotes considerable computational resources to discovering such architectures through neural architecture search (NAS). This provides an incentive for adversaries to steal these novel architectures; when used in the cloud, to provide Machine Learning as a Service (MLaaS), the adversaries also have an opportunity to reconstruct the architectures by exploiting a range of hardware side-channels. However, it is challenging to reconstruct novel architectures and pipelines without knowing the computational graph (e.g., the layers, branches or skip connections), the architectural parameters (e.g., the number of filters in a convolutional layer) or the specific pre-processing steps (e.g. embeddings). In this paper, we design an algorithm that reconstructs the key components of a novel deep learning system by exploiting a small amount of information leakage from a cache side-channel attack, Flush+Reload. We use Flush+Reload to infer the trace of computations and the timing for each computation. Our algorithm then generates candidate computational graphs from the trace and eliminates incompatible candidates through a parameter estimation process. We implement our algorithm in PyTorch and Tensorflow. We demonstrate experimentally that we can reconstruct MalConv, a novel data pre-processing pipeline for malware detection, and ProxylessNAS-CPU, a novel network architecture for the ImageNet classification optimized to run on CPUs, without knowing the architecture family. In both cases, we achieve 0% error. These results suggest hardware side channels are a practical attack vector against MLaaS, and more efforts should be devoted to understanding their impact on the security of deep learning systems.""","""This paper proposes using Flush+Reload to infer the deep network architecture of another program, when the two programs are running on the same machine (as in cloud computing or similar). There is some disagreement about this paper; the approach is thoughtful and well executed, but one reviewer had concerns about its applicability and realism. Upon reading the author's rebuttal I believe these to be largely addressed, or at least as realistically as one can in a single paper. Therefore I recommend acceptance."""
2141,"""Layer Flexible Adaptive Computation Time for Recurrent Neural Networks""",[],"""Deep recurrent neural networks perform well on sequence data and are the model of choice. However, it is a daunting task to decide the structure of the networks, i.e. the number of layers, especially considering different computational needs of a sequence. We propose a layer flexible recurrent neural network with adaptive computation time, and expand it to a sequence to sequence model. Different from the adaptive computation time model, our model has a dynamic number of transmission states which vary by step and sequence. We evaluate the model on a financial data set and Wikipedia language modeling. Experimental results show the performance improvement of 7% to 12% and indicate the model's ability to dynamically change the number of layers along with the computational steps.""","""All reviewers assessed this paper as a weak reject. The AC recommends rejection."""
2142,"""Neural networks are a priori biased towards Boolean functions with low entropy""","['class imbalance', 'perceptron', 'inductive bias', 'simplicity bias', 'initialization']","""Understanding the inductive bias of neural networks is critical to explaining their ability to generalise. Here, for one of the simplest neural networks -- a single-layer perceptron with pseudo-formula input neurons, one output neuron, and no threshold bias term -- we prove that upon random initialisation of weights, the a priori probability pseudo-formula that it represents a Boolean function that classifies pseudo-formula points in pseudo-formula as pseudo-formula has a remarkably simple form: P(t) = 2^{-n} \,\, {\rm for} \,\, 0\leq t < 2^n Since a perceptron can express far fewer Boolean functions with small or large values of pseudo-formula (low ""entropy"") than with intermediate values of pseudo-formula (high ""entropy"") there is, on average, a strong intrinsic a-priori bias towards individual functions with low entropy. Furthermore, within a class of functions with fixed pseudo-formula , we often observe a further intrinsic bias towards functions of lower complexity. Finally, we prove that, regardless of the distribution of inputs, the bias towards low entropy becomes monotonically stronger upon adding ReLU layers, and empirically show that increasing the variance of the bias term has a similar effect.""","""This article studies the inductive bias in a simple binary perceptron without bias, showing that if the weight vector has a symmetric distribution, then the cardinality of the support of the represented function is uniform on 0,...,2^n-1. Since the number of possible functions with support of extreme cardinality values is smaller, the result is interpreted as a bias towards such functions. Further results and experiments are presented. The reviewers found this work interesting and mentioned that it contributes to the understanding of neural networks. However, they also expressed concerns about the contribution relying crucially on 0/1 variables, and that for example with -1/1 the effect would disappear, implying that the result might not be capturing a significant aspect of neural networks. Another concern was whether the results could be generalised to other architectures. The authors agreed that this is indeed a crucial part of the analysis, and for the moment pointed at empirical evidence for the appearance of this effect in other cases. The reviewers also mentioned that the motivation was not very clear, that some of the derivations were difficult to follow (with many results presented in the appendix), and that the interpretation and implications were not sufficiently discussed (in particular, in relation to generalization, missing a more detailed discussion of training). This is a good contribution and the revision made important improvements on the points mentioned above, but not quite reaching the bar. """
2143,"""Occlusion resistant learning of intuitive physics from videos""",[],"""To reach human performance on complex tasks, a key ability for artificial systems is to understand physical interactions between objects, and predict future outcomes of a situation. This ability, often referred to as intuitive physics , has recently received attention and several methods were proposed to learn these physical rules from video sequences. Yet, most these methods are restricted to the case where no occlusions occur, narrowing the potential areas of application. The main contribution of this paper is a method combining a predictor of object dynamics and a neural renderer efficiently predicting future trajectories and explicitly modelling partial and full occlusions among objects. We present a training procedure enabling learning intuitive physics directly from the input videos containing segmentation masks of objects and their depth. Our results show that our model learns object dynamics despite significant inter-object occlusions, and realistically predicts segmentation masks up to 30 frames in the future. We study model performance for increasing levels of occlusions, and compare results to previous work on the tasks of future prediction and object following. We also show results on predicting motion of objects in real videos and demonstrate significant improvements over state-of-the-art on the object permanence task in the intuitive physics benchmark of Riochet et al. (2018).""","""The paper studies the problem of modeling inter-object dynamics with occlusions. It provides proof-of-concept demonstrations on toy 3d scenes that occlusions can be handled by structured representations using object-level segmentation masks and depth information. However, the technical novelty is not high and the requirement of such structured information seems impractical real-world applications which thus limits the significance of the proposed method."""
2144,"""Annealed Denoising score matching: learning Energy based model in high-dimensional spaces""","['Energy based models', 'score matching', 'annealing', 'likelihood', 'generative model', 'unsupervised learning']","""Energy based models outputs unmormalized log-probability values given datasamples. Such a estimation is essential in a variety of application problems suchas sample generation, denoising, sample restoration, outlier detection, Bayesianreasoning, and many more. However, standard maximum likelihood training iscomputationally expensive due to the requirement of sampling model distribution.Score matching potentially alleviates this problem, and denoising score matching(Vincent, 2011) is a particular convenient version. However, previous attemptsfailed to produce models capable of high quality sample synthesis. We believethat it is because they only performed denoising score matching over a singlenoise scale. To overcome this limitation, here we instead learn an energy functionusing all noise scales. When sampled using Annealed Langevin dynamics andsingle step denoising jump, our model produced high-quality samples comparableto state-of-the-art techniques such as GANs, in addition to assigning likelihood totest data comparable to previous likelihood models. Our model set a new sam-ple quality baseline in likelihood-based models. We further demonstrate that our model learns sample distribution and generalize well on an image inpainting tasks.""","""This paper presents a variant of the Noise Conditional Score Network (NCSN) which does score matching using a single Gaussian scale mixture noise model. Unlike the NCSN, it learns a single energy-based model, and therefore can be compared directly to other models in terms of compression. I've read the paper, and the methods, exposition, and experiments all seem solid. Numerically, the score is slightly below the cutoff; reviewers generally think the paper is well-executed, but lacking in novelty and quality of results relative to Song & Ermon (2019). """
2145,"""ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators""","['Natural Language Processing', 'Representation Learning']","""Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute. ""","""This paper investigates the tasks used to pretrain language models. The paper proposes not using a generative tasks ('filling in' masked tokens), but instead a discriminative tasked (recognising corrupted tokens). The authors empirically show that the proposed method leads to improved performance, especially in the ""limited compute"" regime. Initially, the reviewers had quite split opinions on the paper, but after the rebuttal and discussion phases all reviewers agreed on an ""accept"" recommendation. I am happy to agree with this recommendation based on the following observations: - The authors provide strong empirical results including relevant ablations. Reviews initially suggested a limitation to classification tasks and a lack of empirical analysis, but those issues have been addressed in the updated version. - The problem of pre-training language model is relevant for the ML and NLP communities, and it should be especially relevant for ICLR. The resulting method significantly outperforms existing methods, especially in the low compute regime. - The idea is quite simple, but at the same time it seems to be a quite novel idea. """
2146,"""Pre-training Tasks for Embedding-based Large-scale Retrieval""","['natural language processing', 'large-scale retrieval', 'unsupervised representation learning', 'paragraph-level pre-training', 'two-tower Transformer models']","""We consider the large-scale query-document retrieval problem: given a query (e.g., a question), return the set of relevant documents (e.g., paragraphs containing the answer) from a large document corpus. This problem is often solved in two steps. The retrieval phase first reduces the solution space, returning a subset of candidate documents. The scoring phase then re-ranks the documents. Critically, the retrieval algorithm not only desires high recall but also requires to be highly efficient, returning candidates in time sublinear to the number of documents. Unlike the scoring phase witnessing significant advances recently due to the BERT-style pre-training tasks on cross-attention models, the retrieval phase remains less well studied. Most previous works rely on classic Information Retrieval (IR) methods such as BM-25 (token matching + TF-IDF weights). These models only accept sparse handcrafted features and can not be optimized for different downstream tasks of interest. In this paper, we conduct a comprehensive study on the embedding-based retrieval models. We show that the key ingredient of learning a strong embedding-based Transformer model is the set of pre-training tasks. With adequately designed paragraph-level pre-training tasks, the Transformer models can remarkably improve over the widely-used BM-25 as well as embedding models without Transformers. The paragraph-level pre-training tasks we studied are Inverse Cloze Task (ICT), Body First Selection (BFS), Wiki Link Prediction (WLP), and the combination of all three.""","""This paper conducts a comprehensive study on different retrieval algorithms and show that the two-tower Transformer models with properly designed pre-training tasks can largely improve over the widely used BM-25 algorithm. In fact, the deep learning based two tower retrieval model is already used in the IR field. The main contribution lies in the comprehensive experimental evaluation. Blind Review #3 has a major misunderstanding of the paper; hence his review will be excluded. The other two reviewers tend to accept the paper with several minor comments. As the authors promise to release the code as a baseline for further works, I agree to accept the paper. """
2147,"""Hierarchical Foresight: Self-Supervised Learning of Long-Horizon Tasks via Visual Subgoal Generation""","['video prediction', 'reinforcement learning', 'planning']","""Video prediction models combined with planning algorithms have shown promise in enabling robots to learn to perform many vision-based tasks through only self-supervision, reaching novel goals in cluttered scenes with unseen objects. However, due to the compounding uncertainty in long horizon video prediction and poor scalability of sampling-based planning optimizers, one significant limitation of these approaches is the ability to plan over long horizons to reach distant goals. To that end, we propose a framework for subgoal generation and planning, hierarchical visual foresight (HVF), which generates subgoal images conditioned on a goal image, and uses them for planning. The subgoal images are directly optimized to decompose the task into easy to plan segments, and as a result, we observe that the method naturally identifies semantically meaningful states as subgoals. Across three out of four simulated vision-based manipulation tasks, we find that our method achieves more than 20% absolute performance improvement over planning without subgoals and model-free RL approaches. Further, our experiments illustrate that our approach extends to real, cluttered visual scenes.""","""This paper proposes a method that uses subgoals for planning when using video prediction. The reviewers thought that the paper was clearly written and interesting. The reviewer questions and concerns were mostly addressed during the discussion phase, and the reviewers are in agreement that the paper should be accepted."""
2148,"""CAPACITY-LIMITED REINFORCEMENT LEARNING: APPLICATIONS IN DEEP ACTOR-CRITIC METHODS FOR CONTINUOUS CONTROL""","['Reinforcement Learning', 'Generalization', 'Information Theory', 'Rate-Distortion Theory']","""Biological and artificial agents must learn to act optimally in spite of a limited capacity for processing, storing, and attending to information. We formalize this type of bounded rationality in terms of an information-theoretic constraint on the complexity of policies that agents seek to learn. We present the Capacity-Limited Reinforcement Learning (CLRL) objective which defines an optimal policy subject to an information capacity constraint. This objective is optimized by drawing from methods used in rate distortion theory and information theory, and applied to the reinforcement learning setting. Using this objective we implement a novel Capacity-Limited Actor-Critic (CLAC) algorithm and situate it within a broader family of RL algorithms such as the Soft Actor Critic (SAC) and discuss their similarities and differences. Our experiments show that compared to alternative approaches, CLAC offers improvements in generalization between training and modified test environments. This is achieved in the CLAC model while displaying high sample efficiency and minimal requirements for hyper-parameter tuning.""","""This paper presents Capacity-Limited Reinforcement Learning (CLRL) which builds on methods in soft RL to enable learning in agents with limited capacity. The reviewers raised issues that were largely around three areas: there is a lack of clear motivation for the work, and many of the insights given lack intuition; many connections to related literature are missing; and the experimental results remain unconvincing. Although the ideas presented in the paper are interesting, more work is required for this to be accepted. Therefore at this point, this is unfortunately a rejection."""
2149,"""Leveraging Entanglement Entropy for Deep Understanding of Attention Matrix in Text Matching""","['Quantum entanglement entropy', 'Attention Matrix']","""The formal understanding of deep learning has made great progress based on quantum many-body physics. For example, the entanglement entropy in quantum many-body systems can interpret the inductive bias of neural network and then guide the design of network structure and parameters for certain tasks. However, there are two unsolved problems in the current study of entanglement entropy, which limits its application potential. First, the theoretical benefits of entanglement entropy was only investigated in the representation of a single object (e.g., an image or a sentence), but has not been well studied in the matching of two objects (e.g., question-answering pairs). Second, the entanglement entropy can not be qualitatively calculated since the exponentially increasing dimension of the matching matrix. In this paper, we are trying to address these two problem by investigating the fundamental connections between the entanglement entropy and the attention matrix. We prove that by a mapping (via the trace operator) on the high-dimensional matching matrix, a low-dimensional attention matrix can be derived. Based on such a attention matrix, we can provide a feasible solution to the entanglement entropy that describes the correlation between the two objects in matching tasks. Inspired by the theoretical property of the entanglement entropy, we can design the network architecture adaptively in a typical text matching task, i.e., question-answering task.""","""This paper advocates for the application of entanglement entropy from quantum physics to understand and improve the inductive bias of neural network architectures for question answering tasks. All reviewers found the current presentation of the method difficult to understand, and as a result it is difficult to determine what exactly the contribution of this work is. One suggestion for improving the manuscript is to minimize the references to quantum entanglement (where currently is it asserted without justification that entanglement entropy is a relevant concept for modeling question-answering tasks). Instead, presenting the method as applications of tensor decompositions for parameterizing neural network architectures would make the work more accessible to a machine learning audience, and help clarify the contribution with respect to related works [1]. 1. pseudo-url"""
2150,"""Deep Auto-Deferring Policy for Combinatorial Optimization""","['deep reinforcement learning', 'combinatorial optimization']","""Designing efficient algorithms for combinatorial optimization appears ubiquitously in various scientific fields. Recently, deep reinforcement learning (DRL) frameworks have gained considerable attention as a new approach: they can automatically learn the design of a good solver without using any sophisticated knowledge or hand-crafted heuristic specialized for the target problem. However, the number of stages (until reaching the final solution) required by existing DRL solvers is proportional to the size of the input graph, which hurts their scalability to large-scale instances. In this paper, we seek to resolve this issue by proposing a novel design of DRL's policy, coined auto-deferring policy (ADP), automatically stretching or shrinking its decision process. Specifically, it decides whether to finalize the value of each vertex at the current stage or defer to determine it at later stages. We apply the proposed ADP framework to the maximum independent set (MIS) problem, a prototype of NP-complete problems, under various scenarios. Our experimental results demonstrate significant improvement of ADP over the current state-of-the-art DRL scheme in terms of computational efficiency and approximation quality. The reported performance of our generic DRL scheme is also comparable with that of the state-of-the-art solvers specialized for MIS, e.g., ADP outperforms them for some graphs with millions of vertices. ""","""This paper proposes a new way to formulate the design of the deep reinforcement learning that automatically shrinks or expands decision processes. The paper is borderline and all reviewers appreciate the paper and gives thorough reviews. However, it not completely convince that it is ready publication. Rejection is recommended. This can become a nice paper for next conference by taking feedback into account. """
2151,"""Improving Neural Language Generation with Spectrum Control""",[],"""Recent Transformer-based models such as Transformer-XL and BERT have achieved huge success on various natural language processing tasks. However, contextualized embeddings at the output layer of these powerful models tend to degenerate and occupy an anisotropic cone in the vector space, which is called the representation degeneration problem. In this paper, we propose a novel spectrum control approach to address this degeneration problem. The core idea of our method is to directly guide the spectra training of the output embedding matrix with a slow-decaying singular value prior distribution through a reparameterization framework. We show that our proposed method encourages isotropy of the learned word representations while maintains the modeling power of these contextual neural models. We further provide a theoretical analysis and insight on the benefit of modeling singular value distribution. We demonstrate that our spectrum control method outperforms the state-of-the-art Transformer-XL modeling for language model, and various Transformer-based models for machine translation, on common benchmark datasets for these tasks.""","""Main content: Blind review #2 summarizes it well: Summary: This paper deals with the representation degeneration problem in neural language generation, as some prior works have found that the singular value distribution of the (input-output-tied) word embedding matrix decays quickly. The authors proposed an approach that directly penalizes deviations of the SV distribution from the two prior distributions, as well as a few other auxiliary losses on the orthogonality of U and V (which are now learnable). The experiments were conducted on small and large scale language modeling datasets as well as the relatively small IWSLT 2014 De-En MT dataset. Pros: + The paper is well-written with great clarity. The dimensionality of the involved matrices (and their decompositions) are clearly provided, and the approach is clearly described. The authors also did a great job providing the details of their experimental setup. + The experiments seem to show consistent improvements over the baseline methods (at least the ones listed by the authors) on a relatively extensive set of tasks (e.g., of both small and large scales, of two different NLP tasks). Via WT2 and WT103, the authors also showed that their method worked on both LSTM and Transformers (which it should, as the SVD on word embedding should be independent of the underlying architecture). + I think studying the expressivity of the output embedding matrix layer is a very interesting (and important) topic for NLP. (e.g., While models like BERT are widely used, the actual most frequently re-used module of BERT is its pre-trained word embeddings.) -- Discussion: The reviewers agree that it is a very well written paper, and this is important as a conference paper to illuminate readers. The one main objection is that spectrum control regularization was previously proposed and applied to GANs (Jiang et al ICLR 2019). However the authors convincingly point out that the technique is widely used, not only for GANs, and that application to neural language generation has quite different characteristics requiring a different, new approach: ""our proposed prior distributions as shown in Figure 2 in our paper are fundamentally different from the singular value distributions learned using their penalty functions (See Figure 1 and Table 7 in Jiang et al.s paper). 
Figure 1 in their paper suggests that their penalty function, i.e., D-optimal Reg, will encourage all the singular values close to 1, which is well aligned with their motivation for training GAN. However, if we use such penalty function to train neural language models, the learned word representations will lose the power of modeling contextual information, and can result in much worse results than the baseline methods."" -- Recommendation and justification: I concur with the majority of reviewers that this paper is a weak accept. Though not revolutionary, it is well written, has usefully broad application, and is supported well empirically."""
2152,"""ShardNet: One Filter Set to Rule Them All""","['neural network compression', 'filter sharing', 'network interpretability']","""Deep CNNs have achieved state-of-the-art performance for numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret. Machine learning theory implies that such networks are highly over-parameterised and that it should be possible to reduce their size without sacrificing accuracy, and indeed many recent studies have begun to highlight specific redundancies that can be exploited to achieve this. In this paper, we take a further step in this direction by proposing a filter-sharing approach to compressing deep CNNs that reduces their memory footprint by repeatedly applying a single convolutional mapping of learned filters to simulate a CNN pipeline. We show, via experiments on CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet that this allows us to reduce the parameter counts of networks based on common designs such as VGGNet and ResNet by a factor proportional to their depth, whilst leaving their accuracy largely unaffected. At a broader level, our approach also indicates how the scale-space regularities found in visual signals can be leveraged to build neural architectures that are more parsimonious and interpretable.""","""This submission proposes an interesting experiment/modification of CNNs. However, it looks like this contribution overlaps significantly with prior work (that the authors initially missed) and the comparison in the (revised) manuscript seem to not clearly delineate and acknowledge the similarities and differences. I suggest the authors improve this aspect and try submitting this work to next venue. """
2153,"""Geometry-aware Generation of Adversarial and Cooperative Point Clouds""","['Adversarial attack', 'Point cloud classification']","""Recent studies show that machine learning models are vulnerable to adversarial examples. In 2D image domain, these examples are obtained by adding imperceptible noises to natural images. This paper studies adversarial generation of point clouds by learning to deform those approximating object surfaces of certain categories. As 2D manifolds embedded in the 3D Euclidean space, object surfaces enjoy the general properties of smoothness and fairness. We thus argue that in order to achieve imperceptible surface shape deformations, adversarial point clouds should have the same properties with similar degrees of smoothness/fairness to the benign ones, while being close to the benign ones as well when measured under certain distance metrics of point clouds. To this end, we propose a novel loss function to account for imperceptible, geometry-aware deformations of point clouds, and use the proposed loss in an adversarial objective to attack representative models of point set classifiers. Experiments show that our proposed method achieves stronger attacks than existing methods, without introduction of noticeable outliers and surface irregularities. In this work, we also investigate an opposite direction that learns to deform point clouds of object surfaces in the same geometry-aware, but cooperative manner. Cooperatively generated point clouds are more favored by machine learning models in terms of improved classification confidence or accuracy. We present experiments verifying that our proposed objective succeeds in learning cooperative shape deformations.""","""This paper offers an improved attack on 3-D point clouds. Unfortunately the clarity of the contribution is unclear and on balance insufficient for acceptance."""
2154,"""Improved Detection of Adversarial Attacks via Penetration Distortion Maximization""","['Adversarial Examples', 'Adversarial Attacks', 'Adversarial Defense', 'White-Box threat models']","""This paper is concerned with the defense of deep models against adversarial at- tacks. We develop an adversarial detection method, which is inspired by the cer- tificate defense approach, and captures the idea of separating class clusters in the embedding space so as to increase the margin. The resulting defense is intuitive, effective, scalable and can be integrated into any given neural classification model. Our method demonstrates state-of-the-art detection performance under all threat models.""","""A defense against of adversarial attacks is presented, which builds mostly on combining known methods in a novel way. While the novelty is somewhat limited, this would be fine if the results were unequivocally good and other parts of the problematic. However, reviewers were not entirely convinced by the results, and had a number of minor complaints with various parts of the paper. In sum, this paper is not currently at a stage where it can be accepted."""
2155,"""Lipschitz Lifelong Reinforcement Learning""","['Reinforcement Learning', 'Lifelong Learning']","""We consider the problem of reusing prior experience when an agent is facing a series of Reinforcement Learning (RL) tasks. We introduce a novel metric between Markov Decision Processes and focus on the study and exploitation of the optimal value function's Lipschitz continuity in the task space with respect to that metric. These theoretical results lead us to a value transfer method for Lifelong RL, which we use to build a PAC-MDP algorithm that exploits continuity to accelerate learning. We illustrate the benefits of the method in Lifelong RL experiments.""","""While there was some support for this paper, there was not enough support to accept it for publication at ICLR. The following concern is characteristic of the concerns raised by the reviewers: ""The ""main contribution of this paper is hard to discern, but the ideas presented are interesting."" Other reviewers said it was ""hard to read"" and not ready for publication. """
2156,"""AE-OT: A NEW GENERATIVE MODEL BASED ON EXTENDED SEMI-DISCRETE OPTIMAL TRANSPORT""","['Generative model', 'auto-encoder', 'optimal transport', 'mode collapse', 'regularity']","""Generative adversarial networks (GANs) have attracted huge attention due to its capability to generate visual realistic images. However, most of the existing models suffer from the mode collapse or mode mixture problems. In this work, we give a theoretic explanation of the both problems by Figallis regularity theory of optimal transportation maps. Basically, the generator compute the transportation maps between the white noise distributions and the data distributions, which are in general discontinuous. However, DNNs can only represent continuous maps. This intrinsic conflict induces mode collapse and mode mixture. In order to tackle the both problems, we explicitly separate the manifold embedding and the optimal transportation; the first part is carried out using an autoencoder to map the images onto the latent space; the second part is accomplished using a GPU-based convex optimization to find the discontinuous transportation maps. Composing the extended OT map and the decoder, we can finally generate new images from the white noise. This AE-OT model avoids representing discontinuous maps by DNNs, therefore effectively prevents mode collapse and mode mixture.""","""The authors present a different perspective on the mode collapse and mode mixture problems in GAN based on some recent theoretical results. This is an interesting work. However, two reviewers have raised some concerns about the results and hence given a low rating of the paper. After reading the reviews and the rebuttal carefully I feel that the authors have addressed all the concerns of the reviewers. In particular, at least for one reviewer I felt that there was a slight misunderstanding on the reviewer's part which was clarified in the rebuttal. The concerns of R1 about a simpler baseline have also been addressed by the authors with the help of additional experiments. I am convinced that the original concerns of the reviewers are addressed. Hence, I recommend that this paper be accepted. Having said that, I strongly recommend that in the final version, the authors should be a bit more clear in motivating the problem. In particular, please make it clear that you are only dealing with the generator and do not have an adversarial component in the training. Also, as suggested by R3 add more intuitive descriptions to make the paper accessible to a wider audience. """
2157,"""Learning Explainable Models Using Attribution Priors""","['Deep Learning', 'Interpretability', 'Attributions', 'Explanations', 'Biology', 'Health', 'Computational Biology']","""Two important topics in deep learning both involve incorporating humans into the modeling process: Model priors transfer information from humans to a model by regularizing the model's parameters; Model attributions transfer information from a model to humans by explaining the model's behavior. Previous work has taken important steps to connect these topics through various forms of gradient regularization. We find, however, that existing methods that use attributions to align a model's behavior with human intuition are ineffective. We develop an efficient and theoretically grounded feature attribution method, expected gradients, and a novel framework, attribution priors, to enforce prior expectations about a model's behavior during training. We demonstrate that attribution priors are broadly applicable by instantiating them on three different types of data: image data, gene expression data, and health care data. Our experiments show that models trained with attribution priors are more intuitive and achieve better generalization performance than both equivalent baselines and existing methods to regularize model behavior.""","""This work claims two primary contributions: first a new saliency method ""expected gradients"" is proposed, and second the authors propose the idea of attribution priors to improve model performance by integrating domain knowledge during training. Reviewers agreed that the expected gradients method is interesting and novel, and experiments such as Table 1 are a good starting point to demonstrate the effectiveness of the new method. However, the claimed ""novel framework, attribution priors"" has large overlap with prior work [1]. One suggestion for improving the paper is to revise the introduction and experiments to support the claim ""expected gradients improve model explainability and yield effective attribution priors"" rather than claiming to introduce attribution priors as a new framework. One possibility for strengthening this claim is to revisit experiments in [1] and related follow-up work to demonstrate that expected gradients yield improvements over existing saliency methods. Additionally, current experiments in Table 1 only consider integrated gradients as a baseline saliency method, there are many others worth considering, see for example the suite of methods explored in [2]. Finally, I would add that the current section on distribution shift provides an overly narrow perspective on model robustness by only considering robustness to additive Gaussian noise. It is known that it is easy to improve robustness to Gaussian noise by biasing the model towards low frequency statistics in the data, however this typically results in degraded robustness to other kinds of noise types. See for example [3], where it was observed that adversarial training degrades model robustness to low frequency noise and the fog corruption. If the authors wish to pursue using attribution priors for improving robustness to distribution shift, it is important that they evaluate on a more varied suite of corruptions/noise types [4]. Additionally, one should compare against strong baselines in this area [5]. 1. pseudo-url 2. pseudo-url 3. pseudo-url 4. pseudo-url 5. pseudo-url """
2158,"""Escaping Saddle Points Faster with Stochastic Momentum""","['SGD', 'momentum', 'escaping saddle point']","""Stochastic gradient descent (SGD) with stochastic momentum is popular in nonconvex stochastic optimization and particularly for the training of deep neural networks. In standard SGD, parameters are updated by improving along the path of the gradient at the current iterate on a batch of examples, where the addition of a ``momentum'' term biases the update in the direction of the previous change in parameters. In non-stochastic convex optimization one can show that a momentum adjustment provably reduces convergence time in many settings, yet such results have been elusive in the stochastic and non-convex settings. At the same time, a widely-observed empirical phenomenon is that in training deep networks stochastic momentum appears to significantly improve convergence time, variants of it have flourished in the development of other popular update methods, e.g. ADAM, AMSGrad, etc. Yet theoretical justification for the use of stochastic momentum has remained a significant open question. In this paper we propose an answer: stochastic momentum improves deep network training because it modifies SGD to escape saddle points faster and, consequently, to more quickly find a second order stationary point. Our theoretical results also shed light on the related question of how to choose the ideal momentum parameter--our analysis suggests that \in [0,1)$ should be large (close to 1), which comports with empirical findings. We also provide experimental findings that further validate these conclusions.""","""This paper studies the impact of using momentum to escape saddle points. They show that a heavy use of momentum improves the convergence rate to second order stationary points. The reviewers agreed that this type of analysis is interesting and helps understand the benefits of this standard method in deep learning. The authors were able to address most of the concerns of the reviewers during rebutal, but is borderline due to lingering concerns about the presentation of the results. We encourage the authors to give more thought to the presentation before publication."""
2159,"""The Probabilistic Fault Tolerance of Neural Networks in the Continuous Limit""","['Robustness', 'theory of neural networks', 'fault tolerance', 'continuous limit', 'Taylor expansion', 'error bound', 'neuromorphic computing', 'continuous networks', 'functional derivative']","""The loss of a few neurons in a brain rarely results in any visible loss of function. However, the insight into what ""few"" means in this context is unclear. How many random neuron failures will it take to lead to a visible loss of function? In this paper, we address the fundamental question of the impact of the crash of a random subset of neurons on the overall computation of a neural network and the error in the output it produces. We study the fault tolerance of neural networks subject to small random neuron/weight crash failures in a probabilistic setting. We give provable guarantees on the robustness of the network to these crashes. Our main contribution is a bound on the error in the output of a network under small random Bernoulli crashes, proved by using a Taylor expansion in the continuous limit, where close-by neurons at a layer are similar. The failure mode we adopt in our model is characteristic of neuromorphic hardware, a promising technology to speed up artificial neural networks, as well as of biological networks. We show that our theoretical bounds can be used to compare the fault tolerance of different architectures and to design a regularizer improving the fault tolerance of a given architecture. We design an algorithm achieving fault tolerance using a reasonable number of neurons. In addition to the theoretical proof, we also provide experimental validation of our results and suggest a connection to the generalization capacity problem.""","""This paper focuses on the robustness of neural networks under random loss of neurons. However, reviewers had issues with the insufficient clarity of the presentation and the lack of discussion of the closely related dropout approach."""
2160,"""Learning a Spatio-Temporal Embedding for Video Instance Segmentation""","['computer', 'vision', 'video', 'instance', 'segmentation', 'metric', 'learning']","""Understanding object motion is one of the core problems in computer vision. It requires segmenting and tracking objects over time. Significant progress has been made in instance segmentation, but such models cannot track objects, and more crucially, they are unable to reason in both 3D space and time. We propose a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation. Our model includes a temporal network that learns to model temporal context and motion, which is essential to produce smooth embeddings over time. Further, our model also estimates monocular depth, with a self-supervised loss, as the relative distance to an object effectively constrains where it can be next, ensuring a time-consistent embedding. Finally, we show that our model can accurately track and segment instances, even with occlusions and missed detections, advancing the state-of-the-art on the KITTI Multi-Object Tracking Dataset.""","""This paper proposes a spatio-temporal embedding loss for video instance segmentation. The proposed model (1) learns a per-pixel embedding such that the embeddings of pixels from the same instance are closer than embeddings of pixels from other instances, and (2) learns depth in a self-supervised way using a photometric reconstruction loss which operates under the assumption of a moving camera and a static scene. The resulting loss is a weighted sum of these attraction, repulsion, regularisation and geometric view synthesis losses. The reviewers agree that the paper is well written and that the problem is well motivated. In particular, there is consensus that the 3D geometry and 2D instance representation should be considered jointly. However, due to the lack of technical novelty, the complexity of the final model, and the issues with the empirical validation of the proposed approach, we feel that the work is slightly below the acceptance bar."""
2161,"""Sparse Weight Activation Training""","['Sparsity', 'Training', 'Acceleration', 'Pruning', 'Compression']","""Training convolutional neural networks (CNNs) is time consuming. Prior work has explored how to reduce the computational demands of training by eliminating gradients with relatively small magnitude. We show that eliminating small magnitude components has limited impact on the direction of high-dimensional vectors. However, in the context of training a CNN, we find that eliminating small magnitude components of weight and activation vectors allows us to train deeper networks on more complex datasets versus eliminating small magnitude components of gradients. We propose Sparse Weight Activation Training (SWAT), an algorithm that embodies these observations. SWAT reduces computations by 50% to 80% with better accuracy at a given level of sparsity versus the Dynamic Sparse Graph algorithm. SWAT also reduces memory footprint by 23% to 37% for activations and 50% to 80% for weights.""","""The paper is proposed for rejection based on the majority of reviews."""
2162,"""Learning Semantically Meaningful Representations Through Embodiment""","['reinforcement learning', 'deep learning', 'embodied', 'embodiment', 'embodied cognition', 'representation learning', 'representations', 'sparse coding']","""How do humans acquire a meaningful understanding of the world with little to no supervision or semantic labels provided by the environment? Here we investigate embodiment and a closed loop between action and perception as one key component in this process. We take a close look at the representations learned by a deep reinforcement learning agent that is trained with visual and vector observations collected in a 3D environment with sparse rewards. We show that this agent learns semantically meaningful and stable representations of its environment without receiving any semantic labels. Our results show that the agent learns to represent the action-relevant information extracted from pixel input in a wide variety of sparse activation patterns. The quality of the representations learned shows the strength of embodied learning and its advantages over fully supervised approaches with regard to robustness and generalizability.""","""The paper investigates what kinds of representations are formed by embodied agents; it is argued that these differ from those of non-embodied agents. This is an interesting question related to foundational AI and ALife questions, such as the symbol grounding problem. Unfortunately, the empirical investigations are insufficient. In particular, there is no comparison with a non-embodied control condition. The reviewers point this out, and the authors propose a different control condition, which unfortunately is not sufficient to test the hypothesis. This paper should be rejected in its current form, but the question is interesting and hopefully the authors will do the missing experiments and submit a new version of the paper."""
2163,"""Role of two learning rates in convergence of model-agnostic meta-learning""","['meta-learning', 'convergence']","""Model-agnostic meta-learning (MAML) is known as a powerful meta-learning method. However, MAML is notorious for being hard to train because of the existence of two learning rates. Therefore, in this paper, we derive the conditions that the inner learning rate pseudo-formula and the meta-learning rate pseudo-formula must satisfy for MAML to converge to minima, under some simplifications. We find that the upper bound of pseudo-formula depends on $\alpha$, in contrast to the case of using the normal gradient descent method. Moreover, we show that the threshold of pseudo-formula increases as pseudo-formula approaches its own upper bound. This result is verified by experiments on various few-shot tasks and architectures; specifically, we perform sinusoid regression and classification of the Omniglot and MiniImagenet datasets with a multilayer perceptron and a convolutional neural network. Based on this outcome, we present a guideline for determining the learning rates: first, search for the largest possible pseudo-formula; next, tune pseudo-formula based on the chosen value of pseudo-formula.""","""This paper theoretically and empirically studies the inner and outer learning rates of the MAML algorithm and their role in convergence. While the paper presents some interesting ideas and adds to our theoretical understanding of meta-learning algorithms, the reviewers raised concerns about the relevance of the theory. Further, the empirical study is somewhat preliminary and doesn't compare to prior works that also try to stabilize the MAML algorithm, further bringing into question its usefulness. As such, the current form of the paper doesn't meet the bar for ICLR."""
2164,"""Neural Operator Search""","['deep learning', 'autoML', 'neural architecture search', 'image classification', 'attention learning', 'dynamic convolution']","""Existing neural architecture search (NAS) methods explore a limited feature-transformation-only search space while ignoring other advanced feature operations such as feature self-calibration by attention and dynamic convolutions. This prevents the NAS algorithms from discovering more advanced network architectures. We address this limitation by additionally exploiting feature self-calibration operations, resulting in a heterogeneous search space. To solve the challenges of operation heterogeneity and a significantly larger search space, we formulate a neural operator search (NOS) method. NOS presents a novel heterogeneous residual block for integrating the heterogeneous operations in a unified structure, and an attention-guided search strategy for facilitating the search process over a vast space. Extensive experiments show that NOS can search novel cell architectures with highly competitive performance on the CIFAR and ImageNet benchmarks.""","""This paper proposes an extension of the search space of neural architecture search to include dynamic convolutions, teacher nets, among others. The method is evaluated on CIFAR-10 and ImageNet with a similar setup as other architecture search methods. The reviewers found that the results did not convincingly show that the proposed improvements were better than other ways of improving neural architecture search such as ProxylessNAS."""
2165,"""Intrinsically Motivated Discovery of Diverse Patterns in Self-Organizing Systems""","['deep learning', 'unsupervised Learning', 'self-organization', 'game-of-life']","""In many complex dynamical systems, artificial or natural, one can observe self-organization of patterns emerging from local rules. Cellular automata, like the Game of Life (GOL), have been widely used as abstract models enabling the study of various aspects of self-organization and morphogenesis, such as the emergence of spatially localized patterns. However, findings of self-organized patterns in such models have so far relied on manual tuning of parameters and initial states, and on the human eye to identify interesting patterns. In this paper, we formulate the problem of automated discovery of diverse self-organized patterns in such high-dimensional complex dynamical systems, as well as a framework for experimentation and evaluation. Using a continuous GOL as a testbed, we show that recent intrinsically-motivated machine learning algorithms (POP-IMGEPs), initially developed for learning of inverse models in robotics, can be transposed and used in this novel application area. These algorithms combine intrinsically-motivated goal exploration and unsupervised learning of goal space representations. Goal space representations describe the interesting features of patterns for which diverse variations should be discovered. In particular, we compare various approaches to define and learn goal space representations from the perspective of discovering diverse spatially localized patterns. Moreover, we introduce an extension of a state-of-the-art POP-IMGEP algorithm which incrementally learns a goal representation using a deep auto-encoder, and the use of CPPN primitives for generating initialization parameters. We show that it is more efficient than several baselines and as efficient as a system pre-trained on a hand-made database of patterns identified by human experts.""","""The authors introduce a framework for automatically detecting diverse, self-organized patterns in a continuous Game of Life environment, using compositional pattern producing networks (CPPNs) and population-based Intrinsically Motivated Goal Exploration Processes (POP-IMGEPs) to find the distribution of system parameters that produce diverse, interesting goal patterns. This work is really well-presented, both in the paper and on the associated website, which is interactive and features source code and demos. Reviewers agree that it's well-written and seems technically sound. I also agree with R2 that this is an under-explored area and thus would add to the diversity of the program. In terms of weaknesses, reviewers noted that it's quite long, with a lengthy appendix, and could be a bit confusing in areas. The authors were responsive to this in the rebuttal and have trimmed it, although it's still 29 pages. My assessment is well-aligned with that of R2 and thus I'm recommending acceptance. In the rebuttal, the authors mentioned several interesting possible applications for this work; it'd be great if these could be included in the discussion. Given the impressive presentation and amazing visuals, I think it could make for a fun talk."""
2166,"""Analyzing the Role of Model Uncertainty for Electronic Health Records""","['medicine', 'uncertainty', 'neural networks', 'Bayesian', 'electronic health records']","""In medicine, both ethical and monetary costs of incorrect predictions can be significant, and the complexity of the problems often necessitates increasingly complex models. Recent work has shown that changing just the random seed is enough for otherwise well-tuned deep neural networks to vary in their individual predicted probabilities. In light of this, we investigate the role of model uncertainty methods in the medical domain. Using RNN ensembles and various Bayesian RNNs, we show that population-level metrics, such as AUC-PR, AUC-ROC, log-likelihood, and calibration error, do not capture model uncertainty. Meanwhile, the presence of significant variability in patient-specific predictions and optimal decisions motivates the need for capturing model uncertainty. Understanding the uncertainty for individual patients is an area with clear clinical impact, such as determining when a model decision is likely to be brittle. We further show that RNNs with only Bayesian embeddings can be a more efficient way to capture model uncertainty compared to ensembles, and we analyze how model uncertainty is impacted across individual input features and patient subgroups.""","""The paper considers an important problem in medical applications of deep learning, namely the variability/stability of a model's predictions in the face of various perturbations to the model (e.g., the random seed), and evaluates different approaches to capturing model uncertainty. However, there appears to be little innovation in terms of machine-learning methodology, so ICLR might not be the best venue for this work, while other venues focused more on medical applications might be a better fit."""
2167,"""Federated Learning with Matched Averaging""",['federated learning'],"""Federated learning allows edge devices to collaboratively learn a shared model while keeping the training data on device, decoupling the ability to do model training from the need to store the data in the cloud. We propose the Federated Matched Averaging (FedMA) algorithm, designed for federated learning of modern neural network architectures, e.g. convolutional neural networks (CNNs) and LSTMs. FedMA constructs the shared global model in a layer-wise manner by matching and averaging hidden elements (i.e. channels for convolution layers; hidden states for LSTM; neurons for fully connected layers) with similar feature extraction signatures. Our experiments indicate that FedMA not only outperforms popular state-of-the-art federated learning algorithms on deep CNN and LSTM architectures trained on real world datasets, but also reduces the overall communication burden.""","""The authors present a Federated Learning algorithm which constructs the global model layer-wise by matching and averaging hidden representations. They empirically demonstrate that their method outperforms existing federated learning algorithms. This paper has received largely positive reviews. Unfortunately, one reviewer wrote a very short review but was generally appreciative of the work. Fortunately, R1 wrote a detailed review with very specific questions and suggestions. The authors have addressed most of the concerns of the reviewers and I have no hesitation in recommending that this paper should be accepted. I request the authors to incorporate all suggestions made by the reviewers."""
2168,"""White Noise Analysis of Neural Networks""","['Classification images', 'spike triggered analysis', 'deep learning', 'network visualization', 'adversarial attack', 'adversarial defense', 'microstimulation', 'computational neuroscience']","""A white noise analysis of modern deep neural networks is presented to unveil their biases at the whole network level or the single neuron level. Our analysis is based on two popular and related methods in psychophysics and neurophysiology, namely classification images and spike triggered analysis. These methods have been widely used to understand the underlying mechanisms of sensory systems in humans and monkeys. We leverage them to investigate the inherent biases of deep neural networks and to obtain a first-order approximation of their functionality. We focus on CNNs since they are currently the state-of-the-art methods in computer vision and are a decent model of human visual processing. In addition, we study multi-layer perceptrons, logistic regression, and recurrent neural networks. Experiments over four classic datasets, MNIST, Fashion-MNIST, CIFAR-10, and ImageNet, show that the computed bias maps resemble the target classes and, when used for classification, lead to performance more than two-fold above chance level. Further, we show that classification images can be used to attack a black-box classifier and to detect adversarial patch attacks. Finally, we utilize spike triggered averaging to derive the filters of CNNs and explore how the behavior of a network changes when neurons in different layers are modulated. Our effort illustrates a successful example of borrowing from neurosciences to study ANNs and highlights the importance of cross-fertilization and synergy across machine learning, deep learning, and computational neuroscience.""","""All the reviewers found the paper to contain an interesting idea with insightful experiments. The rebuttal further improved the confidence of the reviewers. The paper is accepted."""
2169,"""Low-dimensional statistical manifold embedding of directed graphs""","['graph embedding', 'information geometry', 'graph representations']","""We propose a novel node embedding of directed graphs to statistical manifolds, which is based on a global minimization of pairwise relative entropy and graph geodesics in a non-linear way. Each node is encoded with a probability density function over a measurable space. Furthermore, we analyze the connection between the geometrical properties of such an embedding and its efficient learning procedure. Extensive experiments show that our proposed embedding better preserves the global geodesic information of graphs, as well as outperforming existing embedding models on directed graphs in a variety of evaluation metrics, in an unsupervised setting.""","""The paper proposes an embedding for nodes in a directed graph, which takes into account the asymmetry. The proposed method learns an embedding of a node as an exponential distribution (e.g. Gaussian), on a statistical manifold. The authors also provide an approximation for large graphs, and show that the method performs well in empirical comparisons. The authors were very responsive in the discussion phase, providing new experiments in response to the reviews. This is a nice example where a good paper is improved by several extra suggestions by reviewers. I encourage the authors to provide all the software for reproducing their work in the final version. Overall, this is a great paper which proposes a new graph embedding approach that is scalable and provides nice empirical results."""
2170,"""Recurrent Neural Networks are Universal Filters""","['Recurrent Neural Networks', 'Expressive Power', 'Deep Learning Theory']","""Recurrent neural networks (RNN) are powerful time series modeling tools in machine learning. They have been successfully applied in a variety of fields such as natural language processing (Mikolov et al. (2010), Graves et al. (2013), Du et al. (2015)), control (Fei & Lu (2017)) and traffic forecasting (Ma et al. (2015)), etc. In those application scenarios, RNN can be viewed as implicitly modelling a stochastic dynamic system. Another type of popular neural network, the deep (feed-forward) neural network, has also been successfully applied in different engineering disciplines, and its approximation capability has been well characterized by universal approximation theorems (Hornik et al. (1989), Park & Sandberg (1991), Lu et al. (2017)). However, the underlying approximation capability of RNN has not been fully understood in a quantitative way. In our paper, we consider a stochastic dynamic system with noisy observations and analyze the approximation capability of RNN in synthesizing the optimal state estimator, namely the optimal filter. We unify the recurrent neural network into the Bayesian filtering framework and show that the recurrent neural network is a universal approximator of optimal finite dimensional filters under some mild conditions. That is to say, for any stochastic dynamic system with noisy sequential observations that satisfies some mild conditions, we show that (informal) $\forall \epsilon > 0$, $\exists$ an RNN-based filter, s.t. $\limsup_{k} \|\hat{x}_{k|k} - E[x_k | Y_k]\| < \epsilon$, where $\hat{x}_{k|k}$ is the RNN-based filter's estimate of the state $x_k$ at step $k$ conditioned on the observation history and $E[x_k | Y_k]$ is the conditional mean of $x_k$, known as the optimal estimate of the state in the minimum mean square error sense. As an interesting special case, the widely used Kalman filter (KF) can be synthesized by RNN.""","""Based on the Bayesian approach to the filtering problem, the paper proves that RNNs are universal approximators for the filtering problem. Two reviewers, however, have doubts about the novelty and the difficulty of obtaining the result. Although I do not fully agree with Reviewer3 that the proof is just ""DNN can fit anything"" - it is not this case - the concerns of Reviewer2 are stronger, especially about the usage of the term ""recurrent neural network"". The paper is purely theoretical and does not have any numerical experiments, which probably makes it too weak for ICLR in this form. However, I encourage the authors to continue to work on the subject, since the approach looks very interesting but is still very far from practice."""
2171,"""Learning Human Postural Control with Hierarchical Acquisition Functions""","['Human Postural Control Model', 'Hierarchical Bayesian Optimization', 'Acquisition Function']","""Learning control policies in robotic tasks requires a large number of interactions due to small learning rates, bounds on the updates or unknown constraints. In contrast, humans can infer protective and safe solutions after a single failure or unexpected observation. In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks. A Gaussian Process implements the modeling and the sampling of the acquisition function. This enables rapid learning with large learning rates while a mental replay phase ensures that policy regions that led to failures are inhibited during the sampling process. The features of the hierarchical Bayesian optimization method are evaluated in a simulated and physiological humanoid postural balancing task. We quantitatively compare the human learning performance to our learning approach by evaluating the deviations of the center of mass during training. Our results show that we can reproduce the efficient learning of human subjects in postural control tasks, which provides a testable model for future physiological motor control tasks. In these postural control tasks, our method outperforms standard Bayesian Optimization in the number of interactions to solve the task, in the computational demands and in the frequency of observed failures.""","""The paper proposes hierarchical Bayesian optimization (HiBO) for learning control policies from a small number of environment interactions and applies it to the postural control of a humanoid. Both reviewers raised issues with the clarity of presentation, as well as the contribution and overall fit to this venue. The authors' response helped to clarify these issues only marginally. Therefore, primarily due to lack of clarity, I recommend rejecting this paper, but encourage the authors to improve the presentation as per the reviewers' suggestions and resubmit."""
2172,"""Improving the Gating Mechanism of Recurrent Neural Networks""","['recurrent neural networks', 'LSTM', 'GRUs', 'gating mechanisms', 'deep learning', 'reinforcement learning']","""In this work, we revisit the gating mechanisms widely used in various recurrent and feedforward networks such as LSTMs, GRUs, or highway networks. These gates are meant to control information flow, allowing gradients to better propagate back in time for recurrent models. However, to propagate gradients over very long temporal windows, they need to operate close to their saturation regime. We propose two independent and synergistic modifications to the standard gating mechanism that are easy to implement, introduce no additional hyper-parameters, and are aimed at improving learnability of the gates when they are close to saturation. Our proposals are theoretically justified, and we show a generic framework that encompasses other recently proposed gating mechanisms such as chrono-initialization and master gates. We perform systematic analyses and ablation studies on the proposed improvements and evaluate our method on a wide range of applications including synthetic memorization tasks, sequential image classification, language modeling, and reinforcement learning. Empirically, our proposed gating mechanisms robustly increase the performance of recurrent models such as LSTMs, especially on tasks requiring long temporal dependencies.""","""This submission proposes a new gating mechanism to improve gradient information propagation during back-propagation when training recurrent neural networks. Strengths: - The problem is interesting and important. - The proposed method is novel. Weaknesses: - The justification and motivation of the UGI mechanism was not clear and/or convincing. - The experimental validation is sometimes hard to interpret, and the proposed improvements of the gating mechanism are not well-reflected in the quantitative results. - The submission was hard to read and some images were initially illegible. The authors improved several of the weaknesses but not to the desired level. The AC agrees with the majority recommendation to reject."""
2173,"""Extreme Triplet Learning: Effectively Optimizing Easy Positives and Hard Negatives""","['Triplet Learning', 'Easy Positive', 'Hard Negatives']","""The Triplet Loss approach to Distance Metric Learning is defined by the strategy to select triplets and the loss function through which those triplets are optimized. During optimization, two especially important cases are easy positive and hard negative mining, which consider the closest examples of the same and different classes. We characterize how triplets behave during optimization as a function of these similarities, and highlight that these important cases have technical problems where standard gradient descent behaves poorly, pulling the negative example closer and/or pushing the positive example farther away. We derive an updated loss function that fixes these problems and shows improvements to the state of the art for the CUB, CAR, SOP, and In-Shop Clothes datasets.""","""The authors propose a novel distance metric learning approach. Reviews were mixed, and while the discussion was interesting to follow, some issues, including novelty, comparison with existing approaches, and impact, remain unresolved, and overall, the paper does not seem quite ready for publication."""
2174,"""Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack""","['adversarial attacks', 'adversarial robustness']","""The evaluation of robustness against adversarial manipulations of neural network-based classifiers is mainly tested with empirical attacks, as the methods for exact computation, even when available, do not scale to large networks. We propose in this paper a new white-box adversarial attack w.r.t. the pseudo-formula-norms for $p \in \{1,2,\infty\}$ aiming at finding the minimal perturbation necessary to change the class of a given input. It has an intuitive geometric meaning, quickly yields high quality results, and minimizes the size of the perturbation (so that it returns the robust accuracy at every threshold with a single run). It performs better than or similarly to state-of-the-art attacks which are partially specialized to one pseudo-formula-norm.""","""This work presents a method for generating an (approximately) minimal adversarial perturbation for neural networks. During the discussion period, the AC raised additional concerns that were not originally addressed by the reviewers. The method is an iterative first order method for solving constrained optimization problems; however, when considered as a new first order optimization method the contribution seems minimal. Most of the additions are rather straightforward---e.g. using a line search at each step to determine the optimal step size---and the reported gains over PGD are unconvincing. PGD can be considered as a ""universal"" first order optimizer [1]; as such we should be careful that the reported gains are substantial and not just a question of tuning. Given that using a line search at each step increases the computational cost by a multiplicative factor, the comparison with PGD should take this into account. The AC notes several plots in the Appendix show PGD having better performance (particularly on restricted ImageNet), and for others there remain questions on how PGD is tuned (for example the CIFAR-10 plots in Figure 5). One of two things explains the discrepancies in Figure 5: either PGD is finding a worse local optimum than FAB, or PGD has not converged to a local optimum. Experiments need to be provided to rule out the second possibility, as this would be evidence that PGD is not being tuned properly. Some standard things to check are the step size and number of steps. Additionally, enforcing a constant step size after projection is an easy way to improve the performance of PGD. For example, if the gradient of the loss is approximately equal to the normal vector of the constraint, then proj(x_i + lambda * g) ~ x_i will result in an effective step size that is too low to make progress. Finally, it is unclear what practical use there is for a method that finds an approximately minimum norm perturbation. There are no provable guarantees, so this cannot be used for certification. Additionally, in order to properly assess the security and reliability of ML systems, it is necessary to consider larger visual distortions, occlusions, and corruptions (such as the ones in [2]), as these will actually be encountered in practice. 1. pseudo-url 2. pseudo-url"""
2175,"""Autoencoders and Generative Adversarial Networks for Imbalanced Sequence Classification""",['imbalanced multivariate time series classification'],"""We introduce a novel synthetic oversampling method for variable length, multi-feature sequence datasets based on autoencoders and generative adversarial networks. We show that this method improves classification accuracy for highly imbalanced sequence classification tasks. We show that this method outperforms standard oversampling techniques such as SMOTE and autoencoders. We also use generative adversarial networks on the majority class as an outlier detection method for novelty detection, with limited classification improvement. We show that the use of generative adversarial network based synthetic data improves classification model performance on a variety of sequence data sets.""","""This paper presents a synthetic oversampling method for imbalanced sequence classification problems based on autoencoders and generative adversarial networks. All reviews reject the paper for two main reasons: (1) the novelty of the paper is not sufficient for ICLR, as the idea of utilizing GANs for data sampling is now common; (2) the experimental evaluation is not convincing, as the authors did not compare with other leading oversampling methods. The rebuttal did not adequately address these two concerns; thus I choose to reject the paper."""
2176,"""NeuralUCB: Contextual Bandits with Neural Network-Based Exploration""","['contextual bandits', 'neural network', 'upper confidence bound']","""We study the stochastic contextual bandit problem, where the reward is generated from an unknown bounded function with additive noise. We propose the NeuralUCB algorithm, which leverages the representation power of deep neural networks and uses a neural network-based random feature mapping to construct an upper confidence bound (UCB) of the reward for efficient exploration. We prove that, under mild assumptions, NeuralUCB achieves an O(\sqrt{T}) regret bound, where pseudo-formula is the number of rounds. To the best of our knowledge, our algorithm is the first neural network-based contextual bandit algorithm with a near-optimal regret guarantee. Preliminary experimental results on synthetic data corroborate our theory, and shed light on potential applications of our algorithm to real-world problems.""","""As the reviewers have pointed out and the authors have confirmed, the original version of this paper was not a significant leap beyond combining recent understanding of Neural Tangent Kernels and previous techniques for kernelized bandits. In a revision, the authors updated their draft to allow the point at which gradients are centered, theta_0, to now equal theta_t. This seems like a more reasonable algorithm, and it is satisfying that the authors were able to maintain their regret bound for this dynamic setting. However, the revision is substantial and it seems unreasonable to expect reviewers to read the revised results in detail--the reviewers also felt it may be unfair to other ICLR submissions. All reviewers believe the paper has introduced valuable contributions to the area but should go through a full review process at a future venue. A reviewer would also like to see a comparison to Kernel UCB run on the true NTK (or a good approximation thereof)."""
2177,"""Low Rank Training of Deep Neural Networks for Emerging Memory Technology""","['low rank training', 'kronecker sum', 'emerging memory', 'non-volatile memory', 'rram', 'reram', 'federated learning']","""The recent success of neural networks for solving difficult decision tasks has incentivized incorporating smart decision making ""at the edge."" However, this work has traditionally focused on neural network inference, rather than training, due to memory and compute limitations, especially in emerging non-volatile memory systems, where writes are energetically costly and reduce lifespan. Yet, the ability to train at the edge is becoming increasingly important as it enables applications such as real-time adaptability to device drift and environmental variation, user customization, and federated learning across devices. In this work, we address four key challenges for training on edge devices with non-volatile memory: low weight update density, weight quantization, low auxiliary memory, and online learning. We present a low-rank training scheme that addresses these four challenges while maintaining computational efficiency. We then demonstrate the technique on a representative convolutional neural network across several adaptation problems, where it outperforms standard SGD both in accuracy and in the number of weight updates.""","""The reviewers generally agreed that the novelty of the work was very limited. This is not necessarily a deal-breaker for a largely applied contribution, but for an applied paper, the evaluation of the actual application on edge devices is not present. So if the main contribution is the application, and there is no evaluation of this application, then the paper does not seem complete. As such, I cannot recommend it for acceptance."""
2178,"""Pipelined Training with Stale Weights of Deep Convolutional Neural Networks""","['Distributed CNN Training', 'Pipelined Backpropagation', 'Training with Stale Weights']","""The growth in the complexity of Convolutional Neural Networks (CNNs) is increasing interest in partitioning a network across multiple accelerators during training and pipelining the backpropagation computations over the accelerators. Existing approaches avoid or limit the use of stale weights through techniques such as micro-batching or weight stashing. These techniques either underutilize accelerators or increase memory footprint. We explore the impact of stale weights on the statistical efficiency and performance of a pipelined backpropagation scheme that maximizes accelerator utilization and keeps memory overhead modest. We use 4 CNNs (LeNet-5, AlexNet, VGG and ResNet) and show that when pipelining is limited to early layers in a network, training with stale weights converges and results in models with comparable inference accuracies to those resulting from non-pipelined training on the MNIST and CIFAR-10 datasets; a drop in accuracy of 0.4%, 4%, 0.83% and 1.45% for the 4 networks, respectively. However, when pipelining is deeper in the network, inference accuracies drop significantly. We propose combining pipelined and non-pipelined training in a hybrid scheme to address this drop. We demonstrate the implementation and performance of our pipelined backpropagation in PyTorch on 2 GPUs using ResNet, achieving speedups of up to 1.8X over a 1-GPU baseline, with a small drop in inference accuracy.""","""The paper proposed a new pipelined training approach to better utilize memory and computation power to speed up deep convolutional neural network training. The authors experimentally justified that the proposed pipelined training, using stale weights without weight stashing or micro-batching, is simpler and does converge on a few networks. The main concern with this paper is the missing convergence analysis of the proposed method, as requested by the reviewers. The authors brought up the concern of the limited space in the paper, which can be addressed by putting the convergence analysis into the appendix. From a reader's perspective, knowing the convergence properties of the method is much more important than knowing it works for a few networks on a particular dataset."""
2179,"""Decentralized Deep Learning with Arbitrary Communication Compression""",[],"""Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context. We show that Choco-SGD achieves linear speedup in the number of workers for arbitrarily high compression ratios on general non-convex functions, and non-IID training data. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over decentralized user devices, connected by a peer-to-peer network and (ii) in a datacenter.""","""The authors present an algorithm, CHOCO-SGD, to make use of communication compression in a decentralized setting. This is an interesting problem, and the paper is well-motivated and well-written. On the theoretical side, the authors prove the convergence rate of the algorithm on non-convex smooth functions, which shows a nearly linear speedup. The experimental results on several benchmark datasets validate that the algorithm achieves better performance than baselines. These can be made more convincing by comparing with more baselines (including DeepSqueeze and other centralized algorithms with a compression scheme), and on larger datasets. The authors should also clarify results on consensus."""
2180,"""Adaptive Learned Bloom Filter (Ada-BF): Efficient Utilization of the Classifier""","['Ada-BF', 'Bloom filter', 'machine learning', 'memory efficient']","""Recent work suggests improving the performance of the Bloom filter by incorporating a machine learning model as a binary classifier. However, such a learned Bloom filter does not take full advantage of the predicted probability scores. We propose new algorithms that generalize the learned Bloom filter by using the complete spectrum of the score regions. We prove that our algorithms have a lower False Positive Rate (FPR) and memory usage compared with existing approaches to the learned Bloom filter. We also demonstrate the improved performance of our algorithms on real-world datasets.""","""The paper improves learned Bloom filters by utilizing the complete spectrum of the score regions. The paper is nicely written, with strong motivation and theoretical analysis of the proposed model. The evaluation could be improved: all the experiments are conducted only on small datasets, which makes it hard to assess the practicality of the proposed method. The paper could lead to a strong publication in the future if the issue with the evaluation can be addressed."""
2181,"""Copy That! Editing Sequences by Copying Spans""","['span copying', 'sequence generation', 'editing', 'code repair']","""Neural sequence-to-sequence models are finding increasing use in editing of documents, for example in correcting a text document or repairing source code. In this paper, we argue that existing seq2seq models (with a facility to copy single tokens) are not a natural fit for such tasks, as they have to explicitly copy each unchanged token. We present an extension of seq2seq models capable of copying entire spans of the input to the output in one step, greatly reducing the number of decisions required during inference. This extension means that there are now many ways of generating the same output, which we handle by deriving a new objective for training and a variation of beam search for inference that explicitly handle this problem. In our experiments on a range of editing tasks of natural language and source code, we show that our new model consistently outperforms simpler baselines.""","""This paper proposes an addition to seq2seq models to allow the model to copy spans of tokens of arbitrary length in one step. The authors argue that this method is useful in editing applications where long spans of the output sequence will be exact copies of the input. Reviewers agreed that the problem is interesting and the solution technically sound. However, during the discussion phase there were concerns that the method was too incremental to warrant publication at ICLR. The work would be strengthened with a more thorough discussion of related work and additional experiments comparing with the relevant baselines as suggested by Reviewer 2."""
2182,"""Graph Residual Flow for Molecular Graph Generation""","['deep generative model', 'normalizing flow', 'graph generation', 'cheminformatics']","""Statistical generative models for molecular graphs attract attention from many researchers in the fields of bio- and chemo-informatics. Among these models, invertible flow-based approaches are not yet fully explored. In this paper, we propose a powerful invertible flow for molecular graphs, called Graph Residual Flow (GRF). The GRF is based on residual flows, which are known for more flexible and complex non-linear mappings than traditional coupling flows. We theoretically derive non-trivial conditions such that the GRF is invertible, and present a way of keeping the entire flow invertible throughout training and sampling. Experimental results show that a generative model based on the proposed GRF achieves comparable generation performance, with a much smaller number of trainable parameters, compared to the existing flow-based model.""","""The authors propose a graph residual flow model for molecular generation. Conceptual novelty is limited since it is a simple extension, and there isn't much improvement over the state of the art."""
2183,"""Learning to Optimize via Dual space Preconditioning""","['Optimization', 'meta-learning']","""Preconditioning a minimization algorithm improves its convergence and can lead to a minimizer in one iteration in some extreme cases. There is currently no analytical way of finding a suitable preconditioner. We present a general methodology for learning the preconditioner and show that it can lead to dramatic speed-ups over standard optimization techniques.""","""Thanks for the detailed replies to the reviewers, which significantly helped us understand your paper better. However, in the end we decided not to accept your paper due to weak justification and limited experimental validation. The writing should also be improved significantly. We hope that the feedback from the reviewers helps you improve your paper for a potential future submission."""
2184,"""Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics""","['physics-aware learning', 'spatial difference operators', 'sparsely-observed dynamics']","""Sparsely available data points cause numerical error in finite differences, which hinders us from modeling the dynamics of physical systems. The discretization error becomes even larger when the sparse data are irregularly distributed or defined on an unstructured grid, making it hard to build deep learning models to handle physics-governing observations on the unstructured grid. In this paper, we propose a novel architecture, Physics-aware Difference Graph Networks (PA-DGN), which exploits neighboring information to learn finite differences inspired by physics equations. PA-DGN leverages data-driven end-to-end learning to discover underlying dynamical relations between the spatial and temporal differences in given sequential observations. We demonstrate the superiority of PA-DGN in the approximation of directional derivatives and the prediction of graph signals on synthetic data and real-world climate observations from weather stations.""","""All reviewers agree that this research is novel and well carried out, so this is a clear accept. Please ensure that the final version reflects the reviewer comments and the new information provided during the rebuttal."""
2185,"""Improved Mutual Information Estimation""","['mutual information', 'variational bound', 'kernel methods', 'Neural estimators', 'mutual information maximization', 'self-supervised learning']","""We propose a new variational lower bound on the KL divergence and show that the Mutual Information (MI) can be estimated by maximizing this bound using a witness function on a hypothesis function class and an auxiliary scalar variable. If the function class is in a Reproducing Kernel Hilbert Space (RKHS), this leads to a jointly convex problem. We analyze the bound by deriving its dual formulation and show its connection to a likelihood ratio estimation problem. We show that the auxiliary variable introduced in our variational form plays the role of a Lagrange multiplier that enforces a normalization constraint on the likelihood ratio. By extending the function space to neural networks, we propose an efficient neural MI estimator, and validate its performance on synthetic examples, showing an advantage over existing baselines. We then demonstrate the strength of our estimator in large-scale self-supervised representation learning through MI maximization.""","""This paper centers on an unbiased variant of the Mutual Information Neural Estimation (MINE) procedure, using the so-called ""eta trick"" applied to the Donsker-Varadhan lower bound on the KL divergence. The paper's contribution is mainly theoretical, though experiments are presented on synthetic Gaussian-distributed data as well as CIFAR10 and STL10 classification experiments (from learned representations). R1's criticism of the theoretical contributions centers on fundamental limitations on finite sample estimation of the MI, contending that the bounds simply aren't meaningful in high-dimensional settings, and that the empirical work centers on synthetic data and self-generated baselines rather than comparisons to reported numbers in the literature; they were unswayed by the author response, which contended that these criticisms were based on pessimistic worst-case analysis and that ""mild assumptions on the mutual information and function class"" could render better finite-sample bounds. Some of R3's concerns were addressed by the author rebuttal and associated updates, but R3 remained critical of the presentation, in particular regarding the dual function, and downgraded their score. Because R2 disclosed that they were outside of their area of strong expertise, a 4th reviewer was sought (by this stage, the paper was the revised version). Concerns about clarity persisted, with R4 remarking that a section was ""a collection of different remarks without much coherence, some of which are imprecisely stated"". R4 felt variance and sample complexity should be dealt with experimentally, though this was not directly addressed in the author response. R4 also remarked that the plots were difficult to read and questioned the utility of supervised representation learning benchmarks for assessing the quality of MI estimation, given recent evidence in the literature. The theoretical contributions of this submission are slightly outside the bounds of my own expertise, but the consensus among three expert reviewers appears to be that the clarity of exposition leaves much to be desired, and I concur with their assessment that the empirical investigation is insufficiently rigorous and does not draw clear comparisons to existing work in this area. I therefore recommend rejection."""
2186,"""Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks""","['Quantization', 'Efficient Inference', 'Neural Networks']","""We propose Additive Powers-of-Two (APoT) quantization, an efficient non-uniform quantization scheme for the bell-shaped and long-tailed distribution of weights and activations in neural networks. By constraining all quantization levels to be the sum of Powers-of-Two terms, APoT quantization enjoys high computational efficiency and a good match with the distribution of weights. A simple reparameterization of the clipping function is applied to generate a better-defined gradient for learning the clipping threshold. Moreover, weight normalization is presented to refine the distribution of weights to make the training more stable and consistent. Experimental results show that our proposed method outperforms state-of-the-art methods, and is even competitive with the full-precision models, demonstrating the effectiveness of our proposed APoT quantization. For example, our 4-bit quantized ResNet-50 on ImageNet achieves 76.6% top-1 accuracy without bells and whistles; meanwhile, our model reduces computational cost by 22% compared with the uniformly quantized counterpart.""","""This paper presents a quantization scheme with the advantage of high computational efficiency. The experimental results show that the proposed scheme outperforms SOTA methods and is competitive with the full-precision models. The reviewers initially raised some concerns, including the baseline ResNet performance, a detailed comparison of the quantization size, and a comparison with ResNet50. The authors addressed these concerns in the rebuttal and revised the draft to accommodate the requested items. The reviewers appreciated the revision and found it much improved. Their overall recommendation is toward accept, which I also support."""
2187,"""Balancing Cost and Benefit with Tied-Multi Transformers""","['tied models', 'encoder-decoder', 'multi-layer softmaxing', 'depth prediction', 'model compression']","""This paper proposes a novel procedure for training multiple Transformers with tied parameters which compresses multiple models into one, enabling the dynamic choice of the number of encoder and decoder layers during decoding. In sequence-to-sequence modeling, typically, the output of the last layer of the N-layer encoder is fed to the M-layer decoder, and the output of the last decoder layer is used to compute the loss. Instead, our method computes a single loss consisting of NxM losses, where each loss is computed from the output of one of the M decoder layers connected to one of the N encoder layers. A single model trained by our method subsumes multiple models with different numbers of encoder and decoder layers, and can be used for decoding with fewer than the maximum number of encoder and decoder layers. We then propose a mechanism to choose a priori the number of encoder and decoder layers for faster decoding, and also explore recurrent stacking of layers and knowledge distillation to enable further parameter reduction. In a case study of neural machine translation, we present a cost-benefit analysis of the proposed approaches and empirically show that they greatly reduce decoding costs while preserving translation quality.""","""The paper proposes a method for training multiple Transformers with tied parameters, enabling a dynamic choice of the number of encoder and decoder layers. The method is evaluated on neural machine translation and shown to reduce decoding costs without compromising translation quality. The reviewers generally agreed that the proposed method is interesting, but raised issues regarding the significance of the claimed benefits and the quality of the overall presentation of the paper. Based on a consensus reached in a post-rebuttal discussion with the reviewers, I am recommending rejecting this paper."""
2188,"""Carpe Diem, Seize the Samples Uncertain ""at the Moment"" for Adaptive Batch Selection""","['batch selection', 'uncertain sample', 'acceleration', 'convergence']","""The performance of deep neural networks is significantly affected by how well mini-batches are constructed. In this paper, we propose a novel adaptive batch selection algorithm called Recency Bias that exploits the uncertain samples predicted inconsistently in recent iterations. The historical label predictions of each sample are used to evaluate its predictive uncertainty within a sliding window. By taking advantage of this design, Recency Bias not only accelerates the training step but also achieves a more accurate network. We demonstrate the superiority of Recency Bias by extensive evaluation on two independent tasks. Compared with existing batch selection methods, the results showed that Recency Bias reduced the test error by up to 20.5% in a fixed wall-clock training time. At the same time, it improved the training time by up to 59.3% to reach the same test error.""","""The authors propose a new mini-batch selection method for training deep NNs. Rather than random sampling, selection is based on a sliding window of past model predictions for each sample and uncertainty about those samples. Results are presented on MNIST and CIFAR. The reviewers agreed that this is an interesting idea which was clearly presented, but had concerns about the strength of the experimental results, which showed only a modest benefit on relatively simple datasets. In the rebuttal period, the authors added an ablation study and additional results on Tiny-ImageNet. However, the results on the new dataset seem very marginal, and R1 did not feel that all of their concerns were addressed. I'm inclined to agree that more work is required to prove the generalizability of this approach before it's suitable for acceptance."""
2189,"""Simplicial Complex Networks""","['topological data analysis', 'supervised learning', 'simplicial approximation']","""The universal approximation property of neural networks is one of the motivations to use these models in various real-world problems. However, this property is not the only characteristic that makes neural networks unique, as there is a wide range of other approaches with a similar property. Another characteristic which makes these models interesting is that they can be trained with the backpropagation algorithm, which allows efficient gradient computation and gives these universal approximators the ability to efficiently learn complex manifolds from a large amount of data in different domains. Despite their abundant use in practice, neural networks are still not well understood, and a broad range of ongoing research studies the interpretability of neural networks. On the other hand, topological data analysis (TDA) relies on the strong theoretical framework of (algebraic) topology along with other mathematical tools for analyzing possibly complex datasets. In this work, we leverage a universal approximation theorem originating from algebraic topology to build a connection between TDA and the common neural network training framework. We introduce the notion of automatic subdivisioning and devise a particular type of neural networks for regression tasks: Simplicial Complex Networks (SCNs). SCN's architecture is defined with a set of bias functions along with a particular policy during the forward pass, which alters the common architecture search framework in neural networks. We believe the view of SCNs can be used as a step towards building interpretable deep learning models. Finally, we verify its performance on a set of regression problems.""","""The paper introduces simplicial complex networks, a new class of neural networks based on the idea of the subdivision of a simplicial complex. The paper is interesting and brings ideas of algebraic topology to inform the design of new neural network architectures. Reviewer 1 was positive about the ideas of this paper, but had several concerns about clarity, scalability and the sense that the paper might still be in an early phase. Reviewer 2 had similar concerns about clarity, comparisons, and usefulness. Although there were no responses from the authors, the discussion explored the paper further, but the reviewers continued to think the idea is still in an early phase. The paper is not currently ready for acceptance, and we hope the authors will find useful feedback for their ongoing research."""
2190,"""Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems""","['Deep learning', 'Optimization', 'Second-order method', 'Neural Tangent Kernel regression']","""First-order methods such as stochastic gradient descent (SGD) are currently the standard algorithm for training deep neural networks. Second-order methods, despite their better convergence rate, are rarely used in practice due to the prohibitive computational cost in calculating the second-order information. In this paper, we propose a novel Gram-Gauss-Newton (GGN) algorithm to train deep neural networks for regression problems with square loss. Our method draws inspiration from the connection between neural network optimization and kernel regression of neural tangent kernel (NTK). Different from typical second-order methods that have heavy computational cost in each iteration, GGN only has minor overhead compared to first-order methods such as SGD. We also give theoretical results to show that for sufficiently wide neural networks, the convergence rate of GGN is quadratic. Furthermore, we provide convergence guarantee for mini-batch GGN algorithm, which is, to our knowledge, the first convergence result for the mini-batch version of a second-order method on overparameterized neural networks. Preliminary experiments on regression tasks demonstrate that for training standard networks, our GGN algorithm converges much faster and achieves better performance than SGD.""","""The article considers Gauss-Newton as a scalable second order alternative to train neural networks, and gives theoretical convergence rates and some experiments. The second order convergence results rely on the NTK and very wide networks. The reviewers pointed out that the method is of course not new, and suggested that comparison not only with SGD but also with methods such as Adam, natural gradients, KFAC, would be important, as well as additional experiments with other types of losses for classification problems and multidimensional outputs. The revision added preliminary experiments comparing with Adam and KFAC. Overall, I think that the article makes an interesting and relevant case that Gauss-Newton can be a competitive alternative for parameter optimization in neural networks. However, the experimental section could still be improved significantly. Therefore, I am recommending that the paper is not accepted at this time but revised to include more extensive experiments."""
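For intuition, a Gauss-Newton step for square loss can be computed through the n x n Gram matrix of the Jacobian rather than a p x p curvature matrix, which keeps the per-step overhead small when the batch size n is far below the parameter count p. Below is a hedged NumPy sketch of one such step, not the authors' algorithm; the helper `jac(theta, X)` returning the n x p Jacobian and the damping term are assumptions.

```python
import numpy as np

def ggn_step(theta, X, y, f, jac, lr=1.0, damping=1e-3):
    J = jac(theta, X)                      # n x p Jacobian of the predictions
    resid = f(theta, X) - y                # n-dim residual vector
    G = J @ J.T                            # n x n Gram matrix (cheap when n << p)
    step = J.T @ np.linalg.solve(G + damping * np.eye(len(y)), resid)
    return theta - lr * step
```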
2191,"""Smart Ternary Quantization""",[],"""Neural network models are resource hungry. Low-bit quantization such as binary and ternary quantization is a common approach to alleviate these resource requirements. Ternary quantization provides a more flexible model and often beats binary quantization in terms of accuracy, but doubles memory and increases computation cost. Mixed quantization depth models, on the other hand, allow a trade-off between accuracy and memory footprint. In such models, quantization depth is often chosen manually (which is a tedious task), or is tuned using a separate optimization routine (which requires training a quantized network multiple times). Here, we propose Smart Ternary Quantization (STQ) in which we modify the quantization depth directly through an adaptive regularization function, so that we train a model only once. This method jumps between binary and ternary quantization while training. We show its application on image classification.""","""This paper studies mixed-precision quantization in deep networks where each layer can be either binarized or ternarized. The proposed regularization method is simple and straightforward. However, many details and equations are not stated clearly. Experiments are performed on small-scale image classification data sets. It will also be more convincing to try larger networks or data sets. More importantly, many recent methods that can train mixed-precision networks are not cited nor compared. Figures 3 and 4 are difficult to interpret, and sensitivity on the new hyper-parameters should be studied. The use of ""best validation accuracy"" as performance metric may not be fair. Finally, writing can be improved. Overall, the proposed idea might have merit, but does not seem to have been developed enough."""
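One way to picture an adaptive regularizer that jumps between binary and ternary quantization is a per-layer mixing parameter that shifts the attractor set of the weights. The sketch below is purely illustrative and almost certainly differs from the paper's exact functional form: `beta`, the attractor scale `a`, and both penalty terms are assumptions for exposition.

```python
import torch

def stq_regularizer(w, a, beta):
    # beta ~ 0 attracts weights to {-a, +a} (binary);
    # beta ~ 1 also admits 0 as an attractor (ternary).
    binary_pull = (w.abs() - a).pow(2)                 # minimised at |w| = a
    ternary_pull = (w.abs() * (w.abs() - a)).pow(2)    # minimised at w in {-a, 0, +a}
    return ((1 - beta) * binary_pull + beta * ternary_pull).sum()
```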
2192,"""Learning Neural Causal Models from Unknown Interventions""","['deep learning', 'graphical models', 'meta learning']","""Meta-learning over a set of distributions can be interpreted as learning different types of parameters corresponding to short-term vs long-term aspects of the mechanisms underlying the generation of data. These are respectively captured by quickly-changing \textit{parameters} and slowly-changing \textit{meta-parameters}. We present a new framework for meta-learning causal models where the relationship between each variable and its parents is modeled by a neural network, modulated by structural meta-parameters which capture the overall topology of a directed graphical model. Our approach avoids a discrete search over models in favour of a continuous optimization procedure. We study a setting where interventional distributions are induced as a result of a random intervention on a single unknown variable of an unknown ground truth causal model, and the observations arising after such an intervention constitute one meta-example. To disentangle the slow-changing aspects of each conditional from the fast-changing adaptations to each intervention, we parametrize the neural network into fast parameters and slow meta-parameters. We introduce a meta-learning objective that favours solutions \textit{robust} to frequent but sparse interventional distribution change, and which generalize well to previously unseen interventions. Optimizing this objective is shown experimentally to recover the structure of the causal graph. Finally, we find that when the learner is unaware of the intervention variable, it is able to infer that information, improving results further and focusing the parameter and meta-parameter updates where needed.""","""This paper proposes a metalearning objective to infer causal graphs from data based on masked neural networks to capture arbitrary conditional relationships. While the authors agree that the paper contains various interesting ideas, the theoretical and conceptual underpinnings of the proposed methodology are still lacking and the experiments cannot sufficiently make up for this. The method is definitely worth exploring more and a revision is likely to be accepted at another venue."""
2193,"""Attentive Sequential Neural Processes""","['meta-learning', 'neural processes', 'attention', 'sequential modeling']","""Sequential Neural Processes (SNP) is a new class of models that can meta-learn a temporal stochastic process of stochastic processes by modeling temporal transition between Neural Processes. As Neural Processes (NP) suffer from underfitting, SNP is also prone to the same problem, even more severely due to its temporal context compression. Applying attention, which resolves the problem for NP, is however a challenge in SNP, because it cannot store the past contexts over which it is supposed to apply attention. In this paper, we propose the Attentive Sequential Neural Processes (ASNP) that resolve the underfitting in SNP by introducing a novel imaginary context as a latent variable and by applying attention over the imaginary context. We evaluate our model on 1D Gaussian Process regression and 2D moving MNIST/CelebA regression. We apply ASNP to implement Attentive Temporal GQN and evaluate on the moving-CelebA task.""","""This manuscript outlines a method to improve the described under-fitting issues of sequential neural processes. The primary contribution is an attention mechanism depending on a context generated through an RNN network. Empirical evaluation shows promising results on some benchmark tasks. In reviews and discussion, the reviewers and AC agreed that the results look promising, albeit on somewhat simplified tasks. It was also brought up in reviews and discussions that the technical contributions seem to be incremental. This, combined with limited empirical evaluation, suggests that this work might be preliminary for conference publication. Overall, the manuscript in its current state is borderline and would be significantly improved either by additional conceptual contributions or by a more thorough empirical evaluation."""
2194,"""Computation Reallocation for Object Detection""","['Neural Architecture Search', 'Object Detection']","""The allocation of computation resources in the backbone is a crucial issue in object detection. However, the allocation pattern from classification is usually adopted directly for object detectors, which has been shown to be sub-optimal. In order to reallocate the engaged computation resources in a more efficient way, we present CR-NAS (Computation Reallocation Neural Architecture Search), which can learn computation reallocation strategies across different feature resolutions and spatial positions directly on the target detection dataset. A two-level reallocation space is proposed for both stage and spatial reallocation. A novel hierarchical search procedure is adopted to cope with the complex search space. We apply CR-NAS to multiple backbones and achieve consistent improvements. Our CR-ResNet50 and CR-MobileNetV2 outperform the baseline by 1.9% and 1.7% COCO AP respectively without any additional computation budget. The models discovered by CR-NAS can be equipped with other powerful detection necks/heads and be easily transferred to other datasets, e.g. PASCAL VOC, and other vision tasks, e.g. instance segmentation. Our CR-NAS can be used as a plugin to improve the performance of various networks, which is in high demand.""","""The submission applies architecture search to object detection architectures. The work is fairly incremental but the results are reasonable. After revision, the scores are 8, 6, 6, 3. The reviewer who gave ""3"" wrote after the authors' responses and revision that ""Authors' responses partly resolved my concerns on the experiments. I have no object to accept this paper. [sic]"". The AC recommends adopting the majority recommendation and accepting the paper."""
2195,"""Benefit of Interpolation in Nearest Neighbor Algorithms""","['Data Interpolation', 'Multiplicative Constant', 'W-Shaped Double Descent', 'Nearest Neighbor Algorithm']","""Over-parameterized models attract much attention in the era of data science and deep learning. It is empirically observed that although these models, e.g. deep neural networks, over-fit the training data, they can still achieve small testing error, and sometimes even outperform traditional algorithms which are designed to avoid over-fitting. The major goal of this work is to sharply quantify the benefit of data interpolation in the context of the nearest neighbors (NN) algorithm. Specifically, we consider a class of interpolated weighting schemes and then carefully characterize their asymptotic performances. Our analysis reveals a U-shaped performance curve with respect to the level of data interpolation, and proves that a mild degree of data interpolation strictly improves the prediction accuracy and statistical stability over those of the (un-interpolated) optimal pseudo-formula NN algorithm. This theoretically justifies (predicts) the existence of the second U-shaped curve in the recently discovered double descent phenomenon. Note that our goal in this study is not to promote the use of the interpolated-NN method, but to obtain theoretical insights on data interpolation inspired by the aforementioned phenomenon.""","""The authors show that data interpolation, in the context of nearest neighbor algorithms, can sometimes strictly improve performance. The paper is poorly written for an ICLR audience and the added value compared to extensive prior work in the area is not clearly demonstrated."""
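For concreteness, here is a sketch of an interpolated nearest-neighbour regressor of the general kind studied above; it is illustrative only, and the paper's exact weighting scheme may differ. The weights diverge as a query approaches a training point, so the predictor exactly interpolates the training data, with `gamma` controlling the level of interpolation.

```python
import numpy as np

def interpolated_knn_predict(X_train, y_train, x, k=5, gamma=1.0):
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                         # k nearest neighbours
    w = 1.0 / np.maximum(d[idx], 1e-12) ** gamma    # interpolating weights
    return float(np.dot(w, y_train[idx]) / w.sum())
```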
2196,"""Thieves on Sesame Street! Model Extraction of BERT-based APIs""","['model extraction', 'BERT', 'natural language processing', 'pretraining language models', 'model stealing', 'deep learning security']","""We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model. Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT (Devlin et al., 2019), we show that the adversary does not need any real training data to successfully mount the attack. In fact, the attacker need not even use grammatical or semantically meaningful queries: we show that random sequences of words coupled with task-specific heuristics form effective queries for model extraction on a diverse set of NLP tasks, including natural language inference and question answering. Our work thus highlights an exploit only made feasible by the shift towards transfer learning methods within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim model. Finally, we study two defense strategies against model extraction (membership classification and API watermarking) which, while successful against some adversaries, can also be circumvented by more clever ones.""","""Two knowledgable reviewers recommend accepting the paper, and the less familiar reviewer is also positive. The final decision is to accept the paper. It's an interesting and timely topic with insightful results."""
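A conceptual sketch of the extraction setup described above, for illustration only: `victim(q)` stands in for the black-box API returning labels, and `wordlist` is an assumed vocabulary. The point is that the queries need not be grammatical; the local copy is then fine-tuned on the collected query-label pairs.

```python
import random

def random_query(wordlist, length=12):
    """Nonsense query: a random word sequence, no grammar required."""
    return " ".join(random.choices(wordlist, k=length))

def extract_dataset(victim, wordlist, budget=10000):
    queries = [random_query(wordlist) for _ in range(budget)]
    return [(q, victim(q)) for q in queries]   # train a local copy on these pairs
```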
2197,"""Branched Multi-Task Networks: Deciding What Layers To Share""","['Multi-Task Learning', 'Neural Network Architectures', 'Deep learning', 'Efficient Architectures']","""In the context of multi-task learning, neural networks with branched architectures have often been employed to jointly tackle the tasks at hand. Such ramified networks typically start with a number of shared layers, after which different tasks branch out into their own sequence of layers. Understandably, as the number of possible network configurations is combinatorially large, deciding what layers to share and where to branch out becomes cumbersome. Prior works have either relied on ad hoc methods to determine the level of layer sharing, which is suboptimal, or utilized neural architecture search techniques to establish the network design, which is considerably expensive. In this paper, we go beyond these limitations and propose a principled approach to automatically construct branched multi-task networks, by leveraging the employed tasks' affinities. Given a specific budget, i.e. number of learnable parameters, the proposed approach generates architectures, in which shallow layers are task-agnostic, whereas deeper ones gradually grow more task-specific. Extensive experimental analysis across numerous, diverse multi-tasking datasets shows that, for a given budget, our method consistently yields networks with the highest performance, while for a certain performance threshold it requires the least amount of learnable parameters.""","""The authors present an approach to multi-task learning. Reviews are mixed. The main worries seem to be computational feasibility and lack of comparison with existing work. Clearly, one advantage of Cross-stitch networks over the proposed approach is that their approach learns sharing parameters in an end-to-end fashion and scales more efficiently to more tasks. Note: The authors mention SluiceNets in their discussion, but I think it would be appropriate to directly compare against this architecture - or DARTS [pseudo-url], maybe - since the offline RSA computations only seem worth it if better than *anything* you can do end-to-end. I would encourage the authors to map out this space and situate their proposed method properly in the landscape of existing work. I also think it would be interesting to think of their approach as an ensemble learning approach and look at work in this space on using correlations between representations to learn what and how to combine. Finally, some work has suggested that benefits from MTL are a result of easier optimization, e.g., [3]; if that is true, will you not potentially miss out on good task combinations with your approach? Other related work: [0] pseudo-url [1] pseudo-url [2] pseudo-url - a somewhat similar two-stage approach [3] pseudo-url"""
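The meta-review's mention of "offline RSA computations" suggests one plausible instantiation of the task affinities the abstract leverages: representational similarity analysis between per-task features at each layer. The sketch below shows the standard RSA recipe and is illustrative, not the authors' code; the feature matrices are assumed to be samples x features.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(features):
    """Representation dissimilarity matrix: 1 - correlation between samples."""
    return 1.0 - np.corrcoef(features)

def task_affinity(feat_a, feat_b):
    """Spearman correlation of the upper triangles of the two RDMs."""
    ra, rb = rdm(feat_a), rdm(feat_b)
    iu = np.triu_indices_from(ra, k=1)
    rho, _ = spearmanr(ra[iu], rb[iu])
    return rho
```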
2198,"""Scale-Equivariant Steerable Networks""","['Scale Equivariance', 'Steerable Filters']","""The effectiveness of Convolutional Neural Networks (CNNs) has been substantially attributed to their built-in property of translation equivariance. However, CNNs do not have embedded mechanisms to handle other types of transformations. In this work, we pay attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera. First, we introduce the general theory for building scale-equivariant convolutional networks with steerable filters. We develop scale-convolution and generalize other common blocks to be scale-equivariant. We demonstrate the computational efficiency and numerical stability of the proposed method. We compare the proposed models to the previously developed methods for scale equivariance and local scale invariance. We demonstrate state-of-the-art results on the MNIST-scale dataset and on the STL-10 dataset in the supervised learning setting.""","""This work presents a theory for building scale-equivariant CNNs with steerable filters. The proposed method is compared with some of the related techniques. SOTA is achieved on the MNIST-scale dataset and gains on STL-10 are demonstrated. The reviewers had some concerns related to the method, clarity, and comparison with related works. The authors have successfully addressed most of these concerns. Overall, the reviewers are positive about this work and appreciate the generality of the presented theory and its good empirical performance. All the reviewers recommend accept."""
2199,"""Ranking Policy Gradient""","['Sample-efficient reinforcement learning', 'off-policy learning.']","""Sample inefficiency is a long-lasting problem in reinforcement learning (RL). State-of-the-art methods estimate the optimal action values, which usually involves an extensive search over the state-action space and unstable optimization. Towards sample-efficient RL, we propose ranking policy gradient (RPG), a policy gradient method that learns the optimal rank of a set of discrete actions. To accelerate the learning of policy gradient methods, we establish the equivalence between maximizing the lower bound of return and imitating a near-optimal policy without accessing any oracles. These results lead to a general off-policy learning framework, which preserves optimality, reduces variance, and improves sample-efficiency. We conduct extensive experiments showing that, when consolidated with the off-policy learning framework, RPG substantially reduces the sample complexity compared with the state-of-the-art.""","""The paper introduces a novel and effective approach to policy optimization. The overall contribution is sufficient to merit acceptance. Nevertheless, the authors should improve the presentation and experimental evaluation in line with the reviewer criticisms. The criticisms of AnonReviewer2 in particular should not be neglected. Regarding the theory, I agree with AnonReviewer3 that the UNOP assumption is too limiting. The paper would be much stronger if this assumption could be significantly weakened, or better justified."""
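To make "learning the optimal rank of a set of discrete actions" concrete, here is a hedged sketch of a pairwise ranking loss that pushes the score of a (near-)optimal action above all others by a margin; the paper's exact objective may differ, and the margin value and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(scores, best_actions, margin=1.0):
    # scores: B x A action scores; best_actions: LongTensor of shape (B,)
    best = scores.gather(1, best_actions.unsqueeze(1))              # B x 1
    mask = torch.ones_like(scores).scatter(1, best_actions.unsqueeze(1), 0.0)
    # hinge penalty whenever a non-target action comes within `margin` of the target
    return (F.relu(scores - best + margin) * mask).sum(dim=1).mean()
```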
2200,"""Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning""","['compression', 'pruning', 'pre-training', 'BERT', 'language modeling', 'transfer learning', 'ML', 'NLP']","""Universal feature extractors, such as BERT for natural language processing and VGG for computer vision, have become effective methods for improving deep learning models without requiring more labeled data. A common paradigm is to pre-train a feature extractor on large amounts of data then fine-tune it as part of a deep learning model on some downstream task (i.e. transfer learning). While effective, feature extractors like BERT may be prohibitively large for some deployment scenarios. We explore weight pruning for BERT and ask: how does compression during pre-training affect transfer learning? We find that pruning affects transfer learning in three broad regimes. Low levels of pruning (30-40%) do not affect pre-training loss or transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks. High levels of pruning additionally prevent models from fitting downstream datasets, leading to further degradation. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability. We conclude that BERT can be pruned once during pre-training rather than separately for each task without affecting performance.""","""This work explores weight pruning for BERT in three broad regimes of transfer learning: low, medium and high. Overall, the paper is well written and explained, and the goal of efficient training and inference is meaningful. Reviewers' major concerns about this work are its technical innovation and value to the community: reusing pruning for BERT is not technically novel, the improvement in pruning ratio over other compression methods for BERT is marginal, and the introduced sparsity hinders efficient computation on modern hardware such as GPUs. The rebuttal failed to answer a majority of these important concerns. Hence I recommend rejection."""
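The pruning studied above is magnitude weight pruning; a minimal illustrative sketch follows (not the paper's code). It zeroes the smallest-magnitude fraction of a weight tensor and returns the mask one would reuse to keep those entries frozen at zero during further training.

```python
import torch

def magnitude_prune(weight, sparsity=0.4):
    """Zero out the `sparsity` fraction of smallest-magnitude weights, in place."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    weight.data.mul_(mask)          # keep the mask to freeze pruned entries
    return mask
```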
2201,"""Variance Reduction With Sparse Gradients""","['optimization', 'variance reduction', 'machine learning', 'deep neural networks']","""Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients to reduce the variance of stochastic gradients. Compared to SGD, these methods require at least double the number of operations per update to model parameters. To reduce the computational cost of these methods, we introduce a new sparsity operator: The random-top-k operator. Our operator reduces computational complexity by estimating gradient sparsity exhibited in a variety of applications by combining the top-k operator and the randomized coordinate descent operator. With this operator, large batch gradients offer an extra benefit beyond variance reduction: A reliable estimate of gradient sparsity. Theoretically, our algorithm is at least as good as the best algorithm (SpiderBoost), and further excels in performance whenever the random-top-k operator captures gradient sparsity. Empirically, our algorithm consistently outperforms SpiderBoost using various models on various tasks including image classification, natural language processing, and sparse matrix factorization. We also provide empirical evidence to support the intuition behind our algorithm via a simple gradient entropy computation, which serves to quantify gradient sparsity at every iteration.""","""Congratulations on getting your paper accepted to ICLR. Please make sure to incorporate the reviewers' suggestions for the final version."""
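A sketch of a random-top-k sparsity operator consistent with the description above, combining the top-k operator with a randomized-coordinate component; it is illustrative rather than the authors' definition, and the rescaling of the random part (to keep that component unbiased) is an assumption.

```python
import numpy as np

def random_top_k(g, k1, k2, rng=np.random):
    """Keep the k1 largest-magnitude coordinates of g, plus k2 random others."""
    out = np.zeros_like(g)
    top = np.argsort(np.abs(g))[-k1:]              # deterministic top-k part
    out[top] = g[top]
    rest = np.setdiff1d(np.arange(g.size), top)    # remaining coordinates
    rand = rng.choice(rest, size=k2, replace=False)
    out[rand] = g[rand] * (rest.size / k2)         # rescale for unbiasedness
    return out
```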
2202,"""Learning to Move with Affordance Maps""","['navigation', 'exploration']","""The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry, but fail to model dynamic objects (such as other agents) or semantic constraints (such as wet floors or doorways). Learning-based RL agents are an attractive alternative because they can incorporate both semantic and geometric information, but are notoriously sample inefficient, difficult to generalize to novel settings, and are difficult to interpret. In this paper, we combine the best of both worlds with a modular approach that {\em learns} a spatial representation of a scene that is trained to be effective when coupled with traditional geometric planners. Specifically, we design an agent that learns to predict a spatial affordance map that elucidates what parts of a scene are navigable through active self-supervised experience gathering. In contrast to most simulation environments that assume a static world, we evaluate our approach in the VizDoom simulator, using large-scale randomly-generated maps containing a variety of dynamic actors and hazards. We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.""","""This paper presents a framework for navigation that leverages learning spatial affordance maps (ie what parts of a scene are navigable) via a self-supervision approach in order to deal with environments with dynamics and hazards. They evaluate on procedurally generated VizDoom levels and find improvements over frontier and RL baseline agents. Reviewers all agreed on the quality of the paper and strength of the results. Authors were highly responsive to constructive criticism and the engagement/discussion appears to have improved the paper overall. After seeing the rebuttal and revisions, I believe this paper will be a useful contribution to the field and I'm happy to recommend accept."""
2203,"""Differentiable Architecture Compression""",[],"""In many learning situations, resources at inference time are significantly more constrained than resources at training time. This paper studies a general paradigm, called Differentiable ARchitecture Compression (DARC), that combines model compression and architecture search to learn models that are resource-efficient at inference time. Given a resource-intensive base architecture, DARC utilizes the training data to learn which sub-components can be replaced by cheaper alternatives. The high-level technique can be applied to any neural architecture, and we report experiments on state-of-the-art convolutional neural networks for image classification. For a WideResNet with 97.2% accuracy on CIFAR-10, we improve single-sample inference speed by 2.28X and memory footprint by 5.64X, with no accuracy loss. For a ResNet with 79.15% Top-1 accuracy on ImageNet, we improve batch inference speed by 1.29X and memory footprint by 3.57X with 1% accuracy loss. We also give theoretical Rademacher complexity bounds in simplified cases, showing how DARC avoids over-fitting despite over-parameterization.""","""The paper is borderline. A rejection is proposed due to ICLR's limited acceptance rate."""
2204,"""Unsupervised Progressive Learning and the STAM Architecture""","['continual learning', 'unsupervised learning', 'online learning']","""We first pose the Unsupervised Progressive Learning (UPL) problem: learning salient representations from a non-stationary stream of unlabeled data in which the number of object classes increases with time. If some limited labeled data is also available, those representations can be associated with specific classes, thus enabling classification tasks. To solve the UPL problem, we propose an architecture that involves an online clustering module, called Self-Taught Associative Memory (STAM). Layered hierarchies of STAM modules learn based on a combination of online clustering, novelty detection, forgetting outliers, and storing only prototypical representations rather than specific examples. The goal of this paper is to introduce the UPL problem, describe the STAM architecture, and evaluate the latter in the UPL context. """,""" The paper presents a semi-supervised data streaming approach. The proposed architecture is made of a layer-wise k-means structure (more specifically an epsilon-means approach, where the epsilon is adaptively defined from the distortion percentile). Each layer is associated with a scope (patch dimensions); each patch of the image is assigned to its nearest cluster center (or a new cluster is created if needed); new cluster centers are adjusted to fit the examples (Short Term Memory); clusters that have been visited sufficiently many times are frozen (Long Term Memory). Each cluster is associated with a label distribution derived from the labelled examples. The label for each new image is obtained by a vote of the clusters and layers. Some reviews raise issues about the robustness of the approach, and its sensitivity w.r.t. hyper-parameters. Some claims (""the distribution associated to a class may change with time"") are not experimentally confirmed; it seems that in such a case, the LTM size might grow over time; a forgetting mechanism would then be needed to enforce the tractability of classification. Some claims (the mechanism is related to how animals learn) are debatable, as noted by Rev#1; see hippocampal replay. The area chair thinks that a main issue with the paper is that Unsupervised Progressive Learning is considered to be a new setting (""none of the existing approaches in the literature are directly applicable to the UPL problem""), preventing the authors from comparing their results with baselines. However, after a short bibliographic search, some related approaches exist under another name: * Incremental Semi-supervised Learning on Streaming Data, Pattern Recognition 88, Li et al., 2018; * Incremental Semi-Supervised Learning from Streams for Object Classification, Chiotellis et al., 2018; * Online data stream classification with incremental semi-supervised learning, Loo et al., 2015. The above approaches seem able to at least accommodate the Uniform UPL scenario. I therefore encourage the authors to consider some of the above as baselines and provide a comparative validation of STAM."""
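The core mechanism the meta-review describes, online epsilon-means clustering with novelty detection, can be sketched as below; this is illustrative, not the STAM code. In the paper, `eps` would be set adaptively from a distortion percentile, and frozen (LTM) clusters would stop updating; both are simplified away here.

```python
import numpy as np

class EpsilonMeans:
    """Online clustering: adapt the nearest centroid, or spawn a new one."""

    def __init__(self, eps, lr=0.05):
        self.eps, self.lr = eps, lr
        self.centroids, self.counts = [], []

    def update(self, x):
        if self.centroids:
            d = [np.linalg.norm(x - c) for c in self.centroids]
            j = int(np.argmin(d))
            if d[j] <= self.eps:                        # familiar: adapt centroid
                self.centroids[j] += self.lr * (x - self.centroids[j])
                self.counts[j] += 1
                return j
        self.centroids.append(np.asarray(x, dtype=float).copy())  # novelty: new cluster
        self.counts.append(1)
        return len(self.centroids) - 1
```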
2205,"""Once-for-All: Train One Network and Specialize it for Efficient Deployment""","['Efficient Deep Learning', 'Specialized Neural Network Architecture', 'AutoML']","""We address the challenging problem of efficient inference across many devices and resource constraints, especially on edge devices. Conventional approaches either manually design or use neural architecture search (NAS) to find a specialized neural network and train it from scratch for each case, which is computationally prohibitive (causing pseudo-formula emission as much as 5 cars' lifetime) and thus unscalable. In this work, we propose to train a once-for-all (OFA) network that supports diverse architectural settings by decoupling training and search, to reduce the cost. We can quickly get a specialized sub-network by selecting from the OFA network without additional training. To efficiently train OFA networks, we also propose a novel progressive shrinking algorithm, a generalized pruning method that reduces the model size across many more dimensions than pruning (depth, width, kernel size, and resolution). It can obtain a surprisingly large number of sub-networks (over 10^{19}) that can fit different hardware platforms and latency constraints while maintaining the same level of accuracy as training independently. On diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to 4.0% ImageNet top1 accuracy improvement over MobileNetV3, or same accuracy but 1.5x faster than MobileNetV3, 2.6x faster than EfficientNet w.r.t measured latency) while reducing many orders of magnitude GPU hours and pseudo-formula emission. In particular, OFA achieves a new SOTA 80.0% ImageNet top-1 accuracy under the mobile setting ( pseudo-formula 600M MACs). OFA is the winning solution for the 3rd Low Power Computer Vision Challenge (LPCVC), DSP classification track and the 4th LPCVC, both classification track and detection track. Code and 50 pre-trained models (for many devices & many latency constraints) are released at pseudo-url. ""","""The authors propose a new method for neural architecture search, except it's not exactly that, because model training is separated from architecture search, which is the main point of the paper. Once this network is trained, sub-networks can be distilled from it and used for specific tasks. The paper as submitted missed certain details, but after this was pointed out by reviewers the details were satisfactorily described by the authors. The idea of the paper is original and interesting. The paper is correct and, after the revisions by authors, complete. In my view, this is sufficient for acceptance."""
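An illustrative sketch of how a once-for-all style super-network exposes its sub-network space: each training step samples one depth/width/kernel/resolution configuration to update. The candidate value sets and stage structure below are hypothetical placeholders, not the released code's settings.

```python
import random

DEPTHS = [2, 3, 4]                 # blocks per stage
WIDTHS = [0.75, 1.0]               # width multipliers
KERNELS = [3, 5, 7]                # convolution kernel sizes
RESOLUTIONS = [128, 160, 192, 224] # input resolutions

def sample_subnet(num_stages=5, rng=random):
    """Draw one sub-network configuration from the combinatorial space."""
    return {
        "depth":  [rng.choice(DEPTHS) for _ in range(num_stages)],
        "width":  [rng.choice(WIDTHS) for _ in range(num_stages)],
        "kernel": [rng.choice(KERNELS) for _ in range(num_stages)],
        "resolution": rng.choice(RESOLUTIONS),
    }
```

Progressive shrinking would constrain this sampler early in training (largest settings only) and gradually open up the smaller depths, widths, and kernels.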
2206,"""GroSS Decomposition: Group-Size Series Decomposition for Whole Search-Space Training""","['architecture search', 'block term decomposition', 'network decomposition', 'network acceleration', 'group convolution']","""We present Group-size Series (GroSS) decomposition, a mathematical formulation of tensor factorisation into a series of approximations of increasing rank terms. GroSS allows for dynamic and differentiable selection of factorisation rank, which is analogous to a grouped convolution. Therefore, to the best of our knowledge, GroSS is the first method to simultaneously train differing numbers of groups within a single layer, as well as all possible combinations between layers. In doing so, GroSS trains an entire grouped convolution architecture search-space concurrently. We demonstrate this with a proof-of-concept exhaustive architecture search with a performance objective. GroSS represents a significant step towards liberating network architecture search from the burden of training and finetuning.""","""The authors use a Tucker decomposition to represent the weights of a network, for efficient computation. The idea is natural, and preliminary results promising. The main concern was lack of empirical validation and comparisons. While the authors have provided partial additional results in the rebuttal, which is appreciated, a thorough set of experiments and comparisons would ideally be included in a new version of the paper, and then considered again in review."""
2207,"""Temporal Difference Weighted Ensemble For Reinforcement Learning""","['reinforcement learning', 'ensemble', 'deep q-network']","""Combining multiple function approximators in machine learning models typically leads to better performance and robustness compared with a single function. In reinforcement learning, ensemble algorithms such as an averaging method and a majority voting method are not always optimal, because each function can learn fundamentally different optimal trajectories from exploration. In this paper, we propose a Temporal Difference Weighted (TDW) algorithm, an ensemble method that adjusts weights of each contribution based on accumulated temporal difference errors. The advantage of this algorithm is that it improves ensemble performance by reducing weights of Q-functions unfamiliar with current trajectories. We provide experimental results for Gridworld tasks and Atari tasks that show significant performance improvements compared with baseline algorithms.""","""The paper proposes a method to combine the decision of an ensemble of RL agents. It uses an uncertainty measure based on the TD error, and suggests a weighted average or weighted voting mechanism to combine their policy or value functions to come up with a joint decision. The reviewers raised several concerns, including whether the method works in the stochastic setting, whether it favours deterministic parts of the state space, its sensitivity to bias, and unfair comparison to a single agent setting. There is also a relevant PhD dissertation (Elliot, 2017), which the authors surprisingly refused to discuss and cite because apparently it was not published at any conference. A PhD dissertation is a citable reference, if it is relevant. If it is, a good scholarship requires proper citation. Overall, even though the proposed method might potentially be useful, it requires further investigations. Two out of three reviewers are not positive about the paper in its current form. Therefore, I cannot recommend acceptance at this stage. Elliott, Daniel L., The Wisdom of the crowd : reliable deep reinforcement learning through ensembles of Q-functions, PhD Dissertation, Colorado State University, 2017"""
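A minimal sketch of TD-weighted ensembling as described in the abstract above, for illustration only: members whose recent accumulated TD errors are large (i.e., unfamiliar with the current trajectory) get down-weighted via a softmax; the temperature and the plain weighted average are assumptions.

```python
import numpy as np

def tdw_weights(accumulated_td_errors, temperature=1.0):
    """Softmax over negative accumulated TD errors, per ensemble member."""
    z = -np.asarray(accumulated_td_errors) / temperature
    z -= z.max()                        # numerical stability
    w = np.exp(z)
    return w / w.sum()

def ensemble_q(q_values, weights):
    """q_values: K x A per-member action values -> A weighted action values."""
    return np.asarray(weights) @ np.asarray(q_values)
```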
2208,"""Jelly Bean World: A Testbed for Never-Ending Learning""",[],"""Machine learning has shown growing success in recent years. However, current machine learning systems are highly specialized, trained for particular problems or domains, and typically on a single narrow dataset. Human learning, on the other hand, is highly general and adaptable. Never-ending learning is a machine learning paradigm that aims to bridge this gap, with the goal of encouraging researchers to design machine learning systems that can learn to perform a wider variety of inter-related tasks in more complex environments. To date, there is no environment or testbed to facilitate the development and evaluation of never-ending learning systems. To this end, we propose the Jelly Bean World testbed. The Jelly Bean World allows experimentation over two-dimensional grid worlds which are filled with items and in which agents can navigate. This testbed provides environments that are sufficiently complex and where more generally intelligent algorithms ought to perform better than current state-of-the-art reinforcement learning approaches. It does so by producing non-stationary environments and facilitating experimentation with multi-task, multi-agent, multi-modal, and curriculum learning settings. We hope that this new freely-available software will prompt new research and interest in the development and evaluation of never-ending learning systems and more broadly, general intelligence systems.""","""This paper proposes a flexible environment for studying never ending learning. During the discussion period, all reviewers found the paper to be borderline. Pros: - we don't have good lifelong or never-ending RL environments, and this paper seems to provide one - includes a number of interesting features such as multiple input modalities, non-episodic interactions, flexible task definitions Cons: - procedurally generated, toy environment - unclear if the environment reflects the characteristics of real world NEL problems In the balance, I think the environments add value to the RL community, and being presented at ICLR would increase its visibility."""
2209,"""One-Shot Neural Architecture Search via Compressive Sensing""","['deep learning', 'autoML', 'neural architecture search', 'image classification', 'language modeling']","""Neural architecture search (NAS), or automated design of neural network models, remains a very challenging meta-learning problem. Several recent works (called ""one-shot"" approaches) have focused on dramatically reducing NAS running time by leveraging proxy models that still provide architectures with competitive performance. In our work, we propose a new meta-learning algorithm that we call CoNAS, or Compressive sensing-based Neural Architecture Search. Our approach merges ideas from one-shot NAS approaches with iterative techniques for learning low-degree sparse Boolean polynomial functions. We validate our approach on several standard test datasets, discover novel architectures hitherto unreported, and achieve competitive (or better) results in both performance and search time compared to existing NAS approaches. Further, we provide theoretical analysis via upper bounds on the number of validation error measurements needed to perform reliable meta-learning; to our knowledge, these analysis tools are novel to the NAS literature and may be of independent interest.""","""This paper proposed to use a compressive sensing approach for neural architecture search, similar to Harmonica for hyperparameter optimization. In the discussion, the reviewers noted that the empirical evaluation is not comparing apples to apples; the authors could not provide a fair evaluation. Code availability is not mentioned. The proof of theorem 3.2 was missing in the original submission and was only provided during the rebuttal. All reviewers gave rejecting scores, and I also recommend rejection. """
2210,"""Label Cleaning with Likelihood Ratio Test""",['Deep Learning'],"""When collecting large-scale annotated data, it is inevitable to introduce label noise, i.e., incorrect class labels. A major challenge is to develop robust deep learning models that achieve high test performance despite training set label noise. We introduce a novel approach that directly cleans labels in order to train a high quality model. Our method leverages statistical principles to correct data labels and has a theoretical guarantee of correctness. In particular, we use a likelihood ratio test (LRT) to flip the labels of training data. We prove that our LRT label correction algorithm is guaranteed to flip the label so it is consistent with the true Bayesian optimal decision rule with high probability. We incorporate our label correction algorithm into the training of deep neural networks and train models that achieve superior testing performance on multiple public datasets.""","""This paper addresses a very interesting topic, and the authors clarified various issues raised by the reviewers. However, given the high competition of ICLR2020, this paper is unfortunately still below the bar. We hope that the detailed comments from the reviewers help you improve the paper for potential future submission."""
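A hedged sketch of likelihood-ratio-test label flipping in the spirit of the abstract above; the paper's exact test statistic and threshold may differ. The idea: flip a training label to the model's most likely class only when the likelihood ratio against the current label is large enough.

```python
import numpy as np

def lrt_flip(probs, y, threshold=1.0):
    """probs: estimated class probabilities for one sample; y: current label.
    Flip to the argmax class when the likelihood ratio exceeds `threshold`."""
    y_star = int(np.argmax(probs))
    ratio = probs[y_star] / max(probs[y], 1e-12)
    return y_star if ratio > threshold else y
```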
2211,"""Adversarial Robustness Against the Union of Multiple Perturbation Models""","['adversarial', 'robustness', 'multiple perturbation', 'MNIST', 'CIFAR10']","""Owing to the susceptibility of deep learning systems to adversarial attacks, there has been a great deal of work in developing (both empirically and certifiably) robust classifiers, but the vast majority has defended against single types of attacks. Recent work has looked at defending against multiple attacks, specifically on the MNIST dataset, yet this approach used a relatively complex architecture, claiming that standard adversarial training cannot be applied because it ""overfits"" to a particular norm. In this work, we show that it is indeed possible to adversarially train a robust model against a union of norm-bounded attacks, by using a natural generalization of the standard PGD-based procedure for adversarial training to multiple threat models. With this approach, we are able to train standard architectures which are robust against l_inf, l_2, and l_1 attacks, outperforming past approaches on the MNIST dataset and providing the first CIFAR10 network trained to be simultaneously robust against (l_inf, l_2, l_1) threat models, which achieves adversarial accuracy rates of (47.6%, 64.3%, 53.4%) for (l_inf, l_2, l_1) perturbations with epsilon radius = (0.03,0.5,12).""","""Thanks to the authors for submitting the paper and providing further explanations and experiments. This paper aims to ensure robustness against several perturbation models simultaneously. While the authors' response has addressed several issues raised by the reviewers, the concern on the lack of novelty remains. Overall, there is not enough support among the reviewers for the paper to be accepted."""
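One natural generalization of PGD-based adversarial training to a union of threat models, mentioned in the abstract above, is to run a PGD attack per norm ball and train on whichever perturbation is worst for the current batch. The sketch below is illustrative; the `attacks` callables (one PGD routine per norm) are assumed helpers.

```python
import torch

def worst_case_example(model, loss_fn, x, y, attacks):
    """attacks: list of callables, each running PGD within one norm ball."""
    best_x, best_loss = x, -float("inf")
    for attack in attacks:
        x_adv = attack(model, x, y)
        loss = loss_fn(model(x_adv), y).item()
        if loss > best_loss:                 # keep the strongest perturbation
            best_x, best_loss = x_adv, loss
    return best_x    # take the training gradient step on model(best_x) vs y
```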
2212,"""Wasserstein Robust Reinforcement Learning""","['Reinforcement Learning', 'Robustness', 'Wasserstein distance']","""Reinforcement learning algorithms, though successful, tend to over-fit to training environments, thereby hampering their application to the real-world. This paper proposes pseudo-formula -- a robust reinforcement learning algorithm with strong robust performance on low- and high-dimensional control tasks. Our method formalises robust reinforcement learning as a novel min-max game with a Wasserstein constraint for a correct and convergent solver. Apart from the formulation, we also propose an efficient and scalable solver following a novel zero-order optimisation method that we believe can be useful to numerical optimisation in general. We empirically demonstrate significant gains compared to standard and robust state-of-the-art algorithms on high-dimensional MuJoCo environments.""","""While the reviewers agreed that the problem of learning robust policies is an important one, there were a number of major concerns raised about the paper, and as a result I would recommend that the paper not be accepted at this time. The important points are: (1) limited novelty in light of prior work in this area (see R2 and R3); (2) a number of missing comparisons (see R2). There is also a bit of confusion in the reviews, which I think stems from a somewhat unclear statement in the paper of the problem formulation. While there is nothing wrong with assuming access to a parameterized simulator and studying robustness under parametric variation, this is of course a much stronger assumption than some prior work on robust reinforcement learning. Clarity on this point is crucial, and there are a large number of prior methods that can likely do well in this setting (e.g., based on system ID, etc.)."""
2213,"""Diagnosing the Environment Bias in Vision-and-Language Navigation""","['vision-and-language navigation', 'generalization', 'environment bias diagnosis']","""Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions, explore the given environments, and reach the desired target locations. These step-by-step navigational instructions are extremely useful in navigating new environments which the agent does not know about previously. Most recent works that study VLN observe a significant performance drop when tested on unseen environments (i.e., environments not used in training), indicating that the neural agent models are highly biased towards training environments. Although this issue is considered as one of major challenges in VLN research, it is still under-studied and needs a clearer explanation. In this work, we design novel diagnosis experiments via environment re-splitting and feature replacement, looking into possible reasons of this environment bias. We observe that neither the language nor the underlying navigational graph, but the low-level visual appearance conveyed by ResNet features directly affects the agent model and contributes to this environment bias in results. According to this observation, we explore several kinds of semantic representations which contain less low-level visual information, hence the agent learned with these features could be better generalized to unseen testing environments. Without modifying the baseline agent model and its training method, our explored semantic features significantly decrease the performance gap between seen and unseen on multiple datasets (i.e., 8.6% to 0.2% on R2R, 23.9% to 0.1% on R4R, and 3.74 to 0.17 on CVDN) and achieve competitive unseen results to previous state-of-the-art models.""","""The submission is a detailed and extensive examination of overfitting in vision-and-language navigation domains. The authors evaluate several methods across multiple environments, using different splits of the environment data into training, validation-seen, and validation-unseen. The authors also present an approach using semantic features which is shown to have little or no gap between training and validation performance. The reviewers had mixed reviews and there was substantial discussion about the merits of the paper. However, a significant issue was observed and confirmed with the authors, relating to tuning the semantic features and agent model on the unseen validation data. This is an important flaw, since the other methods were not tuned in this way, and there was no 'test' performance given in the paper. For this reason, the recommendation is to reject the paper. The authors are encouraged to fairly compare all models and resubmit their paper at another venue."""